Technical Requirements
What You Need to Bring
GRAX requires network accessibility (a domain and certificate), compute resources, a PostgreSQL database, and a storage bucket at a bare minimum.
Network Accessibility
Your GRAX service must either be reachable publicly via a registered domain name with a valid certificate or via a Salesforce-configured VPN connection to your private network.
GRAX offers subdomains under https://secure.grax.io that you can use with no configuration; see the Networking Requirements for full details.
You can also bring any domain that you own and manage. Salesforce, as well as your end users, communicates with the GRAX service via this domain (and VPN, if applicable).
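A quick way to confirm reachability and the certificate is a short shell check of DNS resolution and the served TLS certificate. This is a minimal sketch: grax.example.com is a placeholder for your own domain, and openssl must be installed.

```shell
#!/bin/sh
# Hypothetical domain; replace with the domain you bring.
DOMAIN="${GRAX_DOMAIN:-grax.example.com}"

# DNS resolution (non-fatal if it fails)
getent hosts "$DOMAIN" || echo "warning: DNS lookup failed for $DOMAIN"

# Certificate issuer and validity window; requires openssl.
echo | openssl s_client -servername "$DOMAIN" -connect "$DOMAIN:443" 2>/dev/null \
  | openssl x509 -noout -issuer -dates 2>/dev/null \
  || echo "warning: could not retrieve certificate for $DOMAIN"

echo "check complete"
```

With a working custom domain, the second command should print an issuer from a public CA and a notAfter date in the future.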
Compute
To provide the processing power for your GRAX service, we recommend a major cloud provider such as AWS or Azure. Only a single GRAX application may be running at any given time; violating this constraint may cause data and service failures. The instance may be ephemeral and doesn't contain meaningful state information. The instance may be containerized. For high availability, we recommend multi-Availability-Zone (or non-AWS equivalent) deployments and auto-replacement policies.
GRAX isn't compatible with "bursting" instances such as AWS's t3 or Azure's B series; resource usage by the GRAX app is continuous. Burstable instances throttle CPU after sustained load, which directly counteracts GRAX's ability to operate at the times your org most needs a backup.
Minimum technical specifications:
- x86_64 Linux distribution
- 4 vCPUs
- 16 GB RAM
- 500 GB reserved disk cache (see below)
- 5+ Gbps network connection
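On a candidate Linux host, you can compare the machine against these minimums with a short script. This is informational only; the disk check assumes the cache lives at the default /tmp location.

```shell
#!/bin/sh
# Report host resources against the GRAX minimums (4 vCPUs, 16 GB RAM,
# 500 GB cache disk, x86_64). Thresholds come from the specs above.
echo "vCPUs:      $(nproc)   (minimum: 4)"
echo "Memory:     $(awk '/MemTotal/ {printf "%.1f GB", $2/1024/1024}' /proc/meminfo)   (minimum: 16 GB)"
echo "Cache disk: $(df -h --output=avail /tmp | tail -1 | tr -d ' ')   (minimum: 500 GB)"
echo "Arch:       $(uname -m)   (required: x86_64)"
```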
Cache Space
GRAX caches data on disk to prevent excessive network traffic and improve performance. This cache is located under /tmp by default on Linux systems. It must be backed by a persistent storage medium and may not use memory filesystems like tmpfs. To isolate the GRAX cache for ease of persistence, replacement, and maintenance, we recommend using a separate disk mounted at /tmp.
If disk configuration dictates that a path other than /tmp be used, the TMPDIR environment variable can be used to point the cache at a new path. This setting can be provided via the GRAX .env file.
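As a minimal sketch, the relevant .env entry might look like the following; the /grax/cache path and the fstab line are illustrative examples, not GRAX defaults.

```shell
# GRAX .env fragment: relocate the disk cache (path is an example).
# The backing SSD might be mounted via /etc/fstab, e.g.:
#   /dev/nvme1n1  /grax/cache  ext4  defaults,nofail  0 2
TMPDIR=/grax/cache
```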
The disk holding the GRAX cache must be an SSD (no HDDs) and must meet or exceed published performance targets of AWS's gp2 EBS volumes.
It isn't recommended to persist the GRAX cache between instance replacements.
AWS Compute Recommendations
- Minimum m6a.xlarge instance type running an Amazon Linux 2 AMI
- Minimum of gp2 EBS volume for cache
Azure Compute Recommendations
- Minimum Standard D4s v3 (4 vcpus, 16 GiB memory) instance type running RedHat 8
- Minimum Premium SSD with Read/Write Host Caching enabled for cache
GCP Compute Recommendations
- Minimum n2-standard-4 instance type running RedHat 8
- Minimum of SSD Persistent Disk for cache
PostgreSQL
For long-term metadata and search-index storage, GRAX uses a PostgreSQL relational database. This database only needs to be accessed by the instance and should not be publicly accessible. The database is not ephemeral; data loss or availability issues will halt or crash the GRAX app. Authentication with the database happens via username/password credentials provided in the connection string. For high availability, we recommend multi-Availability-Zone (or non-AWS equivalent) deployments. The PostgreSQL major version must be at least 14.0. We recommend snapshotting/backing up the database daily with a retention period of more than a month.
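On RDS, for example, daily automated backups with a 35-day retention window (the RDS maximum, and just over a month) can be configured with the AWS CLI. The instance identifier below is a placeholder, and Aurora clusters use aws rds modify-db-cluster instead.

```shell
# Enable automated backups with a 35-day retention window
# (instance identifier is a placeholder).
aws rds modify-db-instance \
  --db-instance-identifier grax-postgres \
  --backup-retention-period 35 \
  --apply-immediately
```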
Please note that GRAX does not use SSL with certificate verification on Postgres databases; no action is needed regarding root certificates or certificate authorities, as this will not impact the application.
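A connection string might therefore use sslmode=require, which encrypts the connection without verifying the server certificate. The variable name, host, and credentials below are illustrative, not GRAX-defined.

```shell
# Illustrative connection string (all values are placeholders).
# sslmode=require encrypts traffic but skips certificate verification.
DATABASE_URL="postgres://grax_user:s3cret@db.internal:5432/grax?sslmode=require"
```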
Minimum technical specifications:
- 2 vCPUs
- 4 GB RAM
- 75 GB persistent disk storage
- PostgreSQL v14+
Extensions:
- uuid-ossp (v1.1+)
- pg_stat_statements
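A sketch of enabling both extensions with psql (connection details are placeholders; the role needs the CREATE privilege on the database):

```shell
# Enable the required extensions (connection string is a placeholder).
psql "postgres://admin@db.internal:5432/grax" \
  -c 'CREATE EXTENSION IF NOT EXISTS "uuid-ossp";' \
  -c 'CREATE EXTENSION IF NOT EXISTS pg_stat_statements;'
```

Note that pg_stat_statements must also appear in shared_preload_libraries; managed services such as RDS and Cloud SQL expose this through parameter groups or database flags.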
Permissions:
The PostgreSQL credentials used in the final setup of your GRAX app must have total/complete permissions to the database and all data within it. GRAX uses many database primitives to optimize performance and provide reliable service, so its access requirements may be broader than those of a traditional app.
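One way to satisfy this, sketched with placeholder role and database names, is to make the GRAX role the owner of its database:

```shell
# Give the GRAX role complete access (names are placeholders).
psql "postgres://admin@db.internal:5432/postgres" \
  -c 'ALTER DATABASE grax OWNER TO grax_user;' \
  -c 'GRANT ALL PRIVILEGES ON DATABASE grax TO grax_user;'
```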
AWS DB Recommendations
- RDS Aurora PostgreSQL v14+ with minimum instance type db.r6g.xlarge
- Graviton instances (g) offer up to 40% savings over comparable Intel-based instances on AWS
Azure DB Recommendations
- Azure Database for PostgreSQL Flexible Server v14+
- Compute Gen 5
- Minimum 4 vCPUs, 20 GiB Memory
- 75 GB to start
- uuid-ossp installed/enabled
- pg_stat_statements installed/enabled
GCP DB Recommendations
- Cloud SQL for PostgreSQL v14+
- Minimum db-n1-highmem-4 instance type
- 75 GB to start
- uuid-ossp installed/enabled
- pg_stat_statements installed/enabled
Storage Bucket
For long-term Salesforce data storage, GRAX supports AWS S3 and the Azure and GCP equivalents. GRAX doesn't support S3 Versioning, Object Lock, or Glacier (nor any equivalent features in other providers). This bucket only needs to be accessed by the instance and should not be publicly accessible. To maintain our proprietary data format, the GRAX service creates and deletes objects in the selected bucket over the lifetime of the app. No data-containing objects are updated in place.
The total amount of data stored in your bucket depends on many factors, so the final size is hard to predict. As a guideline, plan for up to 5 TB of total storage for the average customer.
Permissions:
The bucket credentials used in the final setup of your GRAX app must have access to delete, get, update, and create all contents of the bucket. If you choose to use any encryption-at-rest mechanism (AWS KMS), you must also grant permission to use the keys in question and perform encryption operations on all objects in the bucket.
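For AWS, a minimal IAM policy sketch covering these operations might look like the following. The bucket name, region, account ID, and key ID are placeholders, and the kms statement is only needed if you use KMS encryption-at-rest; your environment may require additional actions.

```shell
# Sketch of an IAM policy for the GRAX bucket (all ARNs are placeholders).
cat > grax-bucket-policy.json <<'JSON'
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject", "s3:ListBucket"],
      "Resource": [
        "arn:aws:s3:::grax-backup-bucket",
        "arn:aws:s3:::grax-backup-bucket/*"
      ]
    },
    {
      "Effect": "Allow",
      "Action": ["kms:Encrypt", "kms:Decrypt", "kms:GenerateDataKey"],
      "Resource": "arn:aws:kms:us-east-1:123456789012:key/REPLACE-ME"
    }
  ]
}
JSON
```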
What You May Want to Bring
In addition to the above, customers also like to use autoscaling, load balancing, and traffic filtering services for security and reliability. For high availability and quality of service, we recommend an ALB positioned in front of the instance for ingress traffic. For handling of TLS encryption/certs with custom domains, see here. We recommend the use of an autoscaling group with a size of 1 for the sake of auto-replacement and recovery. Additionally, most enterprise cloud teams use ingress and egress filtering (for example, a web application firewall and/or VPC gateway) for all resources. You can find more details about the minimum requirements for networking here.