Long-Term Storage
The GRAX Application is built for long-term, near-infinite retention of your Salesforce object data. To provide fast, reliable, and scalable storage of this data, GRAX uses AWS S3 and recommends comparable options on Azure and GCP. To learn more about how GRAX uses this storage, expected storage growth, why we don't recommend cross-cloud usage, and supported lifecycle processes, see the sections below.
Supported Cloud Blob-Storage Platforms
AWS S3
Azure Blob Storage
GCP Cloud Storage
Supported Storage Types
On AWS, GRAX only supports the S3 Standard storage class. GRAX won't work with S3 Intelligent-Tiering, Glacier, or Outposts.
On Azure, GRAX only supports the standard (GPv2) storage account tier. Both Azure Blob Storage and Azure Data Lake Storage Gen2 are supported. Premium storage accounts or containers won't work with GRAX.
On GCP, GRAX only supports the Standard storage class. Nearline, Coldline, and Archive storage classes won't work with GRAX.
Unsupported Activities
Direct access, modification, or removal of data from the GRAX bucket isn't supported. Attempts to rename, remove, or modify blobs within the storage bucket cause data loss and GRAX availability issues. GRAX isn't responsible for partial or complete loss of your backup dataset if this restriction is violated.
In conjunction with the above, we also don't support the following:
Lifecycle rules triggering blob deletion
Lifecycle rules moving blobs to alternative storage tiers/classes
Restoration via blob versioning
Required Permissions
AWS S3 Permissions
At a minimum, GRAX requires the s3:ListBucket, s3:GetObject, s3:PutObject, and s3:DeleteObject permissions on the bucket. If you're using KMS encryption, GRAX also requires the kms:DescribeKey, kms:Decrypt, kms:Encrypt, kms:GenerateDataKey, and kms:ReEncrypt* permissions on the KMS key resource.
Azure Blob Storage Permissions
GRAX only supports Storage Account Access Keys for Azure Blob Storage. As such, granular permissions aren't currently available.
GCP Cloud Storage Permissions
At a minimum, GRAX requires the storage.objects.create, storage.objects.delete, storage.objects.get, storage.objects.list, and storage.objects.update permissions on the bucket. The best practice is to use the Storage Object User role, as it grants all necessary permissions (plus a small number of extras) without the ability to modify access controls.
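If you follow that best practice, the role binding could be granted with a command along these lines (the bucket, project, and service account names below are placeholders):

```shell
# Grant the Storage Object User role on the GRAX bucket to the
# service account GRAX authenticates with (placeholder names).
gcloud storage buckets add-iam-policy-binding gs://my-grax-bucket \
  --member="serviceAccount:grax-app@my-project.iam.gserviceaccount.com" \
  --role="roles/storage.objectUser"
```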
How GRAX Stores Your Data
GRAX's primary storage layer consists of compressed blobs in a proprietary format. Based on leading big-data storage practices, our storage layer provides write-optimized and scalable performance useful for data backup.
To facilitate this write-optimized performance, GRAX initially writes data with minimal compression and de-duplication. That data is then processed asynchronously in the hours following the initial backup to compress, de-duplicate, and sort the contained records; we call this "compaction." The process is transparent to the GRAX user, as access to the data isn't limited while it runs.
Compaction produces immutable storage blobs. As data is compacted past the original written state, new blobs are written to represent that data and the source blobs are marked for deletion. 14 days after being marked as such, those non-compacted blobs are deleted from storage as the data within them is represented elsewhere in "better" blobs. This process repeats as your dataset grows, meaning that compaction is a permanent and recurring background process that maintains your dataset.
Also worth noting: the vast majority of load on the GRAX Application occurs in the first few weeks of operation, when you connect GRAX and it captures a snapshot of the entire exposed Salesforce dataset. Data can therefore build up temporarily until compaction, and then deletion, catches up.
With this process in mind, we can plot the expected storage usage on a generalized curve (specific values, units, and time frames depend on your environment):
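As a rough illustration only, the shape of that curve can be sketched with a toy model. Every rate, delay, and ratio below is invented for illustration and does not reflect actual GRAX internals:

```python
# Toy model of GRAX-style storage growth: an initial snapshot writes
# loosely packed blobs; compaction later rewrites them smaller, and the
# superseded originals are deleted 14 days after being replaced.
# All numbers are illustrative, not real GRAX behavior.

SNAPSHOT_DAYS = 21      # days the initial full-org backup runs
RAW_GB_PER_DAY = 10.0   # raw (uncompacted) data written per snapshot day
COMPACTION_LAG = 2      # days until a day's blobs are compacted
COMPACTION_RATIO = 0.4  # compacted size relative to raw size
DELETE_AFTER = 14       # days a superseded blob lingers before deletion

def storage_on_day(day: int) -> float:
    """Total GB held in the bucket at the end of the given day."""
    total = 0.0
    for written in range(min(day, SNAPSHOT_DAYS)):
        age = day - written
        if age < COMPACTION_LAG:
            # Blob not yet compacted: full raw size.
            total += RAW_GB_PER_DAY
        elif age < COMPACTION_LAG + DELETE_AFTER:
            # Compacted copy exists, plus the superseded raw blob
            # still awaiting deletion.
            total += RAW_GB_PER_DAY + RAW_GB_PER_DAY * COMPACTION_RATIO
        else:
            # Only the compacted copy remains.
            total += RAW_GB_PER_DAY * COMPACTION_RATIO
    return total

peak = max(storage_on_day(d) for d in range(60))
steady = storage_on_day(59)
print(f"peak: {peak:.0f} GB, steady state: {steady:.0f} GB")
```

The model reproduces the documented shape: usage climbs past the eventual steady state while raw and compacted copies coexist, then falls back once deferred deletion catches up.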
How Much GRAX Storage Costs
Data stored in GRAX takes up significantly less space than the same data would take up in Salesforce. Cloud storage is also vastly cheaper than Salesforce storage; most users hardly notice the cost of data-at-rest in a GRAX environment. Here are some breakdowns of real-world costs of data-at-rest in AWS S3.
The calculations below don't include file/binary storage consumed by Attachments, ContentDocuments, or EventLogFiles. Storage usage for those binary components can currently be assumed to be 1:1 with Salesforce and cost-estimated with published storage rates.
Average Case:
Production environments, on average, consume a low amount of storage
216 GB consumed by average environment
$4.97 per month at standard S3 data-at-rest rates
High Estimate:
Published cost estimates include a very high recommended S3 cap to allow for long-term growth
5 TB total storage space recommendation
$117 per month at standard S3 data-at-rest rates
Largest:
International support organization with high record turnover rate
3+ year relationship with consistent full-org backups
6,000,000,000+ record versions backed up with GRAX
6 TB of total storage consumed
$141 per month at standard S3 data-at-rest rates
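These figures follow from simple arithmetic against the S3 Standard rate of roughly $0.023 per GB-month (the us-east-1 first-tier rate at the time of writing; your region's rate may differ):

```python
# Rough S3 Standard data-at-rest cost check.
# Assumes the us-east-1 first-tier rate; regional rates vary.
S3_STANDARD_PER_GB_MONTH = 0.023  # USD per GB-month

def monthly_cost(gb: float) -> float:
    """Approximate monthly data-at-rest cost in USD for the given GB."""
    return gb * S3_STANDARD_PER_GB_MONTH

print(f"average (216 GB): ${monthly_cost(216):.2f}")       # ~ $4.97
print(f"high est (5 TB):  ${monthly_cost(5 * 1024):.2f}")  # ~ $117.76
print(f"largest (6 TB):   ${monthly_cost(6 * 1024):.2f}")  # ~ $141.31
```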
Frequently Asked Questions
Can I use Azure Premium Storage for my storage container?
No. GRAX does not support the premium blob storage tier or API from Microsoft Azure. Use the standard storage tier instead.
What are the data prefixes/folders written by GRAX?
With the exception of the parquet folder, these storage locations contain data stored in a proprietary format and are designed to be read/written solely by the connected GRAX Application.
grax
Metadata and binary components of Salesforce Files
table
Primary location for object and record backups
internal
Data generated by the GRAX application for its own use
parquet
Deprecated
Can Versioning be enabled in my storage bucket/container?
No. GRAX does not support versioning in the storage bucket or container. GRAX stores data in a proprietary format that tracks all record versions internally. Storage blobs/objects are not updated in place - they're rewritten as a new blob when replaced. The old blobs are then removed once no longer representative of the dataset. Enabling versioning thus has no effect on data safety but keeps these deleted blobs around indefinitely as "non-current versions." This will increase storage consumption and costs significantly beyond the documented expectations.
How can I clean deleted data from a bucket that had versioning enabled?
Configure a lifecycle rule on the bucket with two actions:
Non-current Version Expiration - This removes the non-current versions of objects after a specified number of days. The rule should be configured to remove non-current versions after 1 day.
Remove Expired Object Delete Markers - This removes the delete markers left behind after the non-current versions are removed.
An example of a Lifecycle Rule that will remove non-current versions and delete markers after 1 day is shown below:
These examples are not filtered. They will apply to the entire bucket. If you share your GRAX bucket with other applications or data, these rules may delete non-current versions of storage objects that are not related to GRAX. Proceed with caution.
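As a reference sketch, a rule implementing both actions with a 1-day window could look like this in the JSON form accepted by the S3 API (the rule ID is a placeholder; note the empty filter applies the rule bucket-wide, per the warning above):

```json
{
  "Rules": [
    {
      "ID": "clean-noncurrent-versions",
      "Status": "Enabled",
      "Filter": {},
      "NoncurrentVersionExpiration": { "NoncurrentDays": 1 },
      "Expiration": { "ExpiredObjectDeleteMarker": true }
    }
  ]
}
```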
Can replication be enabled in my storage bucket/container?
GRAX does not natively support S3 bucket replication, nor does the app support failover to a replicated bucket.
While replication is technically possible on any bucket if you set it up yourself, bucket replication introduces unsupported configuration and significant complexity and cost, especially due to S3 Versioning and Delete Marker behavior.
Replicated data is not reliable for failover, is not guaranteed to be consistent or recoverable in the event of a failure, and offers no immediate operational value in disaster recovery scenarios.
What are the limitations of GRAX-hosted Storage?
GRAX-hosted storage supports backup, archive, and restore on AWS only. It does not support Data Lake or Data Lake House. It does not offer access to the underlying storage account and configuration. It is subject to storage, request, and bandwidth limits and may incur overage fees or require purchasing additional blocks of storage.
In all scenarios, bringing your own storage account is better for data security, cost, and data reuse.