Blob Storage
For storage of record data, metadata, and file binaries, GRAX uses industry-standard blob storage technologies. This is a reliable, durable, scalable, and cost-effective way to retain your Salesforce data.
Never Directly Modify a GRAX Storage Bucket
GRAX stores record data in a proprietary format that is neither human readable nor organized in a straightforward way. Attempts to rename, remove, or modify blobs within the storage bucket will cause data loss and GRAX availability issues. GRAX isn't responsible for partial or complete loss of your backup dataset in the event of tampering by users.
For targeted record deletion (like GDPR compliance), see our related documentation.
How It Works
Metadata, files, records, and cache data are written to the bucket as part of Backup.
The dataset is compacted and compressed as more data is added.
The app maintains proprietary indexes on top of the dataset for performance.
The GRAX API, Data Lake, Search, and Restore read from the dataset on demand.
How Much It Costs
Backed-up record data consumes less blob storage than Salesforce reports for the same records. Combined with the low price of blob storage services, GRAX storage costs almost always total a small fraction of what Salesforce would charge to store the same data.
Below are some breakdowns of real-world costs of data-at-rest in AWS S3. These don't include storage consumed by the binary component of Attachments, ContentDocuments, or EventLogFiles. Storage usage for those binary components can currently be assumed to be 1:1 with Salesforce and estimated with published storage rates.
The extreme outlier described here is an international support organization backing up over 40,000,000,000 record versions in the last five years.
| Scenario | Data at Rest | Cost |
| --- | --- | --- |
| Average Case | 216 GB | $4.97 |
| High End | 5 TB | $117 |
| Extreme Outliers | 25 TB | $589 |
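The figures above are consistent with S3 Standard's published first-tier data-at-rest rate. As a rough sketch, assuming a rate of $0.023 per GB-month (an assumption based on AWS's published pricing at the time of writing, not stated in this document), the table can be reproduced:

```python
# Estimate monthly S3 Standard data-at-rest cost from stored size.
# ASSUMPTION: $0.023 per GB-month (S3 Standard first-tier published rate).
RATE_PER_GB_MONTH = 0.023

def monthly_cost(size_gb: float) -> float:
    """Return the estimated monthly at-rest cost in USD, rounded to cents."""
    return round(size_gb * RATE_PER_GB_MONTH, 2)

print(monthly_cost(216))        # average case -> 4.97
print(monthly_cost(5 * 1024))   # high end -> 117.76
print(monthly_cost(25 * 1024))  # extreme outlier -> 588.8
```

Actual bills vary with region, tiering, and request volume; see the data transfer note below.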
Data Transfer Fees
The calculations here only consider data-at-rest rates. All cloud providers also bill for data-transfer and data-access operations; these are variable costs driven by usage of GRAX features and overall processing load. Our published AWS estimate accounts for "normal" GET, LIST, PUT, and DELETE request volumes as well as overall data-transfer expectations.
Technologies
AWS S3 (Simple Storage Service)
GRAX supports S3's standard storage class. Intelligent Tiering, Glacier, Outposts, Versioning, and Replication are not supported.
Authentication
GRAX supports the following authentication patterns for AWS S3:
Static Access Keys
Instance Roles
Assume Role via Static Access Keys
Assume Role via Instance Roles
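For the assume-role patterns, the target role must trust the authenticating principal. A minimal sketch of such a trust policy, assuming a hypothetical instance role named grax-instance-role in a placeholder account:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:role/grax-instance-role" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```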
Authorization
Regardless of authentication pattern, the following permissions must be granted at the bucket scope (arn:aws:s3:::example):
s3:ListBucket
Additionally, the following permissions must be granted for all objects within the bucket (arn:aws:s3:::example/*):
s3:GetObject
s3:PutObject
s3:DeleteObject
If using KMS encryption, the following permissions must be granted on the KMS key scope:
kms:DescribeKey
kms:Decrypt
kms:Encrypt
kms:GenerateDataKey
kms:ReEncrypt*
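Taken together, the permissions above can be expressed as a single IAM policy. This is a minimal sketch using the bucket name from the examples above; the KMS key ARN is a placeholder and the third statement is only needed when KMS encryption is in use:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "s3:ListBucket",
      "Resource": "arn:aws:s3:::example"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::example/*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "kms:DescribeKey",
        "kms:Decrypt",
        "kms:Encrypt",
        "kms:GenerateDataKey",
        "kms:ReEncrypt*"
      ],
      "Resource": "arn:aws:kms:us-east-1:111122223333:key/EXAMPLE-KEY-ID"
    }
  ]
}
```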
Azure Blob / Data Lake (gen2) Storage
GRAX supports standard Azure Blob Storage and Azure Data Lake (gen2) Storage accounts, including hierarchical namespaces. Premium storage accounts and gen1 Data Lake accounts are not supported. All objects must be stored in the "Hot" tier.
Authentication
GRAX supports the following authentication patterns for Azure Storage Accounts:
Storage Account Access Keys
System or Managed Identities
Authorization
If using a system or managed identity, the following permissions must be granted at the container scope:
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action
Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action
The "Storage Blob Data Contributor" role is sufficient, but includes permission to delete the container. Custom roles are recommended for granting minimum necessary permissions.
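A custom role limiting the identity to exactly the data actions listed above might look like the following sketch (the role name, description, and subscription ID placeholder are illustrative, not prescribed):

```json
{
  "Name": "GRAX Blob Data Access (example)",
  "IsCustom": true,
  "Description": "Minimum blob data permissions for the GRAX application",
  "Actions": [],
  "DataActions": [
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/read",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/write",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/delete",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/add/action",
    "Microsoft.Storage/storageAccounts/blobServices/containers/blobs/move/action"
  ],
  "NotDataActions": [],
  "AssignableScopes": ["/subscriptions/<subscription-id>"]
}
```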
Double Check the Scope
Make sure you assign storage permissions at the container level, not the storage account level. Granting these permissions at the storage account level allows the identity to interact with all data in all containers.
GCP Cloud Storage
GRAX supports standard tier GCP Cloud Storage accounts. Nearline, Coldline, and Archive tiers are unsupported.
Authentication
GRAX supports the following authentication patterns for GCP Cloud Storage:
Service Account Keys
Authorization
The following permissions must be granted on the bucket scope:
storage.objects.create
storage.objects.delete
storage.objects.get
storage.objects.list
storage.objects.update
The built-in "Storage Object User" role grants these permissions without broader bucket administration rights.
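Granting that role on the bucket corresponds to an IAM policy binding like the following sketch, where the service account name and project are hypothetical placeholders:

```json
{
  "bindings": [
    {
      "role": "roles/storage.objectUser",
      "members": [
        "serviceAccount:grax@example-project.iam.gserviceaccount.com"
      ]
    }
  ]
}
```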
How It's Connected
For help connecting your GRAX application to a blob store, see our documentation for Connecting Storage.
Frequently Asked Questions
What are the data prefixes/folders written by GRAX?
With the exception of the parquet folder, these storage locations contain data stored in a proprietary format and are designed to be read/written solely by the connected GRAX Application.
grax: Metadata and binary components of Salesforce Files
table: Primary location for object and record backups
internal: Data generated by the GRAX application for its own use
parquet: Parquet files generated by GRAX's Data Lake feature
Can I use Data Lake with GRAX-Hosted storage?
No. For security reasons, customers must use their own storage buckets for Data Lake.
How can I clean deleted data from a bucket that had versioning enabled?
First, ensure that Versioning is now disabled or suspended indefinitely in the bucket. Next, use provider-specific tools to automatically remove the "non-current versions" for deleted objects from the bucket. AWS S3 supports Lifecycle Rules that can be used to automatically remove old versions of objects, and clean up delete markers left behind. A rule needs to be created to do the following:
Non-current Version Expiration - This removes the non-current versions of objects after a specified number of days. The rule should be configured to remove non-current versions after 1 day.
Remove Expired Object Delete Markers - This removes the delete markers left behind after the non-current versions are removed.
An example of a Lifecycle Rule that will remove non-current versions and delete markers after 1 day is shown below:
This example is not filtered: it applies to the entire bucket. If you share your GRAX bucket with other applications or data, the rule may also delete non-current versions of storage objects that are not related to GRAX. Proceed with caution.
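A minimal sketch of such a Lifecycle Rule, in the JSON form accepted by S3's lifecycle configuration API (the rule ID is arbitrary):

```json
{
  "Rules": [
    {
      "ID": "grax-clean-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1 },
      "Expiration": { "ExpiredObjectDeleteMarker": true }
    }
  ]
}
```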