Overview
This documentation is intended for developers who want to operate their own Atlas RPC node.
Reach out to the Atlas team on Discord and open a help ticket to request an API key.
Atlas RPC Node
This section provides the available public RPC endpoints for the Atlas testnet and instructions for interacting with them.
What are Atlas RPC Nodes?
Atlas RPC nodes are bridges between end-users and the Atlas sequencer. Each node provides Solana-compatible JSON-RPC and WebSocket endpoints, and forwards all transactions it receives to the sequencer.
Once the sequencer processes those transactions, it sends them back to the RPC node via a message bus (currently Redis), and the RPC node replays each transaction to verify that the sequencer is not malicious.
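Because the endpoint is Solana-compatible, clients talk to the node with standard Solana JSON-RPC requests. A minimal sketch of such a request, assuming the node listens on port 8899 (the port exposed in the Docker examples below):

```shell
# A Solana-compatible JSON-RPC request as a client would send it to the node.
PAYLOAD='{"jsonrpc":"2.0","id":1,"method":"getHealth"}'
echo "$PAYLOAD"

# To send it to a running node (requires network access to the node):
# curl -s -X POST -H 'Content-Type: application/json' -d "$PAYLOAD" http://localhost:8899
```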
Machine specifications
At least 32 cores, 64 GB of memory, and 100 GB of disk space on linux/amd64 or linux/arm64 architecture.
Machines with lower specs are supported, but we recommend running the node in low CPU mode on them.
Upstream connections/External dependencies
- Redis (message bus):
redis://redis-testnet.atlas.xyz:6379
- Postgres (historical data):
postgresql://public_access:cfbea91fe55e79be93c69c7552d8c8114e12@postgres-testnet.atlas.xyz:5432/svm_node
- Sequencer:
https://testnet.atlas.xyz:3002
(API key required)
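Before starting a node, it can help to confirm these dependencies are reachable. A sketch using the standard Redis and Postgres clients (the `redis-cli` and `psql` invocations are illustrative and require those tools plus network access; the sequencer only accepts connections with a valid API key, which the node passes via --api-key):

```shell
# External dependency endpoints, taken from the list above.
REDIS_URL='redis://redis-testnet.atlas.xyz:6379'
POSTGRES_URL='postgresql://public_access:cfbea91fe55e79be93c69c7552d8c8114e12@postgres-testnet.atlas.xyz:5432/svm_node'
SEQUENCER_URL='https://testnet.atlas.xyz:3002'

# Reachability checks (uncomment to run against the live testnet):
# redis-cli -u "$REDIS_URL" ping        # expect: PONG
# psql "$POSTGRES_URL" -c 'SELECT 1;'   # expect: a single row
echo "deps: $REDIS_URL $POSTGRES_URL $SEQUENCER_URL"
```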
Starting the node from a binary
atlas-replay-node --mode=rpc \
  --log-level=info \
  --redis-url='redis://redis-testnet.atlas.xyz:6379/' \
  --server-url='https://testnet.atlas.xyz:3002/' \
  --postgres-url='postgresql://public_access:cfbea91fe55e79be93c69c7552d8c8114e12@postgres-testnet.atlas.xyz:5432/svm_node' \
  --api-key=<YOUR API KEY>
Low CPU mode
In low CPU mode, the node uses at most 4 CPU cores.
atlas-replay-node --mode=rpc \
  --log-level=info \
  --low-cpu \
  --redis-url='redis://redis-testnet.atlas.xyz:6379/' \
  --server-url='https://testnet.atlas.xyz:3002/' \
  --postgres-url='postgresql://public_access:cfbea91fe55e79be93c69c7552d8c8114e12@postgres-testnet.atlas.xyz:5432/svm_node' \
  --api-key=<YOUR API KEY>
To save logs elsewhere, pass the --log-dir argument. Log files on disk are rotated daily.
Starting the node from Docker
docker run -d \
  --name atlas-rpc \
  -p 8899:8899 -p 8900:8900 \
  ghcr.io/ellipsis-labs/atlas-replay-node:latest \
  --log-level=info \
  --mode=rpc \
  --redis-url='redis://redis-testnet.atlas.xyz:6379/' \
  --server-url='https://testnet.atlas.xyz:3002/' \
  --postgres-url='postgresql://public_access:cfbea91fe55e79be93c69c7552d8c8114e12@postgres-testnet.atlas.xyz:5432/svm_node' \
  --api-key=<YOUR API KEY>
Low CPU mode
docker run -d \
  --name atlas-rpc \
  -p 8899:8899 -p 8900:8900 \
  ghcr.io/ellipsis-labs/atlas-replay-node:latest \
  --log-level=info \
  --mode=rpc \
  --low-cpu \
  --redis-url='redis://redis-testnet.atlas.xyz:6379/' \
  --server-url='https://testnet.atlas.xyz:3002/' \
  --postgres-url='postgresql://public_access:cfbea91fe55e79be93c69c7552d8c8114e12@postgres-testnet.atlas.xyz:5432/svm_node' \
  --api-key=<YOUR API KEY>
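Once the container is up, you can sanity-check it over its RPC port. A minimal sketch, assuming the node is reachable at localhost:8899; the /liveness and /readiness paths are the same ones the Kubernetes probes below use:

```shell
# Probe the node's health endpoints (assumes a node at localhost:8899).
HOST=http://localhost:8899
for path in /liveness /readiness; do
  # -sf: silent, and fail on HTTP errors; succeeds only against a running node
  curl -sf "$HOST$path" >/dev/null && echo "$path ok" || echo "$path unreachable"
done
```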
Kubernetes template
apiVersion: apps/v1
kind: Deployment
metadata:
  name: atlas-rpc
  labels:
    app: atlas-rpc
spec:
  replicas: 3
  selector:
    matchLabels:
      app: atlas-rpc
  template:
    metadata:
      labels:
        app: atlas-rpc
    spec:
      containers:
        - name: atlas-rpc
          image: ghcr.io/ellipsis-labs/atlas-replay-node:latest
          ports:
            - containerPort: 8899
            - containerPort: 8900
          args:
            - --mode=rpc
            - --log-level=info
            - --redis-url=redis://redis-testnet.atlas.xyz:6379/
            - --server-url=https://testnet.atlas.xyz:3002/
            - --api-key=<YOUR API KEY>
            - --postgres-url=postgresql://public_access:cfbea91fe55e79be93c69c7552d8c8114e12@postgres-testnet.atlas.xyz:5432/svm_node
            - --num-async-threads=2
          resources:
            requests:
              cpu: 32
              memory: 64Gi
            limits:
              cpu: 32
              memory: 64Gi
          livenessProbe:
            httpGet:
              path: /liveness
              port: 8899
            initialDelaySeconds: 5
            periodSeconds: 3
          readinessProbe:
            httpGet:
              path: /readiness
              port: 8899
            initialDelaySeconds: 5
            periodSeconds: 1
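The Deployment exposes container ports 8899 and 8900 but does not itself make the pods addressable inside a cluster. A matching Service could look like the following; this is a sketch, not part of the official template, with the selector and ports mirroring the Deployment above:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: atlas-rpc
spec:
  selector:
    app: atlas-rpc
  ports:
    - name: rpc       # JSON-RPC
      port: 8899
      targetPort: 8899
    - name: ws        # WebSocket
      port: 8900
      targetPort: 8900
```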
Misc.
Historical state storage
Right now, the historical state is saved in our Postgres database.
In the future, once we make our transaction ledger public, anyone can replay the ledger, rebuild the historical state, and store it in a private database.
External dependencies
Some dependencies may change, as the historical storage layer hasn't been finalized yet.