Deploying on GCP

danger

Manual deployment is not recommended, and this guide is not actively maintained. See the DSS CI/CD guide for the most up-to-date deployment steps.

The Econia DSS is portable infrastructure that can be run locally or on cloud compute.

This guide will show you how to run the DSS on Google Cloud Platform (GCP), assuming you have admin privileges.

tip

See the gcloud CLI reference for more information on the commands used in this walkthrough.

Initial setup

Follow the steps in this section in order, making sure to keep the relevant shell variables stored in your active shell session.

tip

Use a scratchpad text file to store shell variable assignment statements that you can copy-paste into your shell:

ORGANIZATION_ID=123456789012
BILLING_ACCOUNT_ID=ABCDEF-GHIJKL-MNOPQR
REGION=a-region
ZONE=a-zone

Configure project

  1. Create a GCP organization, try GCP for free, or otherwise get access to GCP.

  2. Install the Google Cloud CLI.

  3. List the organizations that you are a member of:

    gcloud organizations list
  4. Store your preferred organization ID in a shell variable:

    ORGANIZATION_ID=<YOUR_ORGANIZATION_ID>
    echo $ORGANIZATION_ID
  5. Choose a project ID (like fast-15) that complies with the GCP project ID rules and store it in a shell variable:

    PROJECT_ID=<YOUR_PROJECT_ID>
    echo $PROJECT_ID
  6. Create a new project with the name econia-dss:

    gcloud projects create $PROJECT_ID \
    --name econia-dss \
    --organization $ORGANIZATION_ID
  7. List your billing account ID:

    gcloud alpha billing accounts list
    tip

    At the time of writing, some billing commands were still in alpha release.

    If your installed CLI provides these commands in a stable release, you can drop the alpha keyword.
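
    For example, recent gcloud releases provide a stable equivalent (availability depends on your installed CLI version); the same applies to the billing projects link command used below:

    gcloud billing accounts list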

  8. Store the billing account ID in a shell variable:

    BILLING_ACCOUNT_ID=<YOUR_BILLING_ACCOUNT_ID>
    echo $BILLING_ACCOUNT_ID
  9. Link the billing account to the project:

    gcloud alpha billing projects link $PROJECT_ID \
    --billing-account $BILLING_ACCOUNT_ID
  10. Set the project as default:

    gcloud config set project $PROJECT_ID

Grant project permissions

  1. Download the project IAM policy:

    gcloud projects get-iam-policy $PROJECT_ID > policy.yaml
  2. In policy.yaml, add the email address of a user in your Google Workspace to the members list under the roles/owner binding.

  3. Set the IAM policy:

    gcloud projects set-iam-policy $PROJECT_ID policy.yaml
  4. Instruct the user to install the Google Cloud CLI and set the project ID as default before continuing:

    PROJECT_ID=<PROJECT_ID>
    echo $PROJECT_ID
    gcloud config set project $PROJECT_ID

Configure locations

  1. List available build regions:

    gcloud artifacts locations list
  2. Pick a preferred region and store it in a shell variable:

    REGION=<PREFERRED_REGION>
  3. List available deployment zones:

    gcloud compute zones list
  4. Pick a preferred zone and store it in a shell variable:

    ZONE=<PREFERRED_ZONE>
  5. Store values as defaults:

    echo $REGION
    echo $ZONE
    gcloud config set artifacts/location $REGION
    gcloud config set compute/zone $ZONE
    gcloud config set run/region $REGION
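
    To confirm the defaults were stored, print the active configuration:

    gcloud config list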

Build images

  1. Create a GCP Artifact Registry Docker repository named images:

    gcloud artifacts repositories create images \
    --repository-format docker
  2. Set the repository as default:

    gcloud config set artifacts/repository images
  3. Clone the Econia repository:

    git clone https://github.com/econia-labs/econia.git
  4. Build the DSS images from source:

    gcloud builds submit econia \
    --config econia/src/docker/gcp-tutorial-config.yaml \
    --substitutions _REGION=$REGION
    tip

    This will take a while, since it involves the compilation of several binaries from source.
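
    Once the build completes, you can confirm that the images landed in the default repository:

    gcloud artifacts docker images list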

Create bootstrapper

  1. Create a GCP Compute Engine instance for bootstrapping config files, with two attached persistent disks:

    gcloud compute instances create bootstrapper \
    --create-disk "$(printf '%s' \
    auto-delete=no,\
    name=postgres-disk,\
    size=100GB\
    )" \
    --create-disk "$(printf '%s' \
    auto-delete=no,\
    name=processor-disk,\
    size=1GB\
    )"
  2. Create an SSH key pair and use it to upload PostgreSQL configuration files to the bootstrapper:

    mkdir ssh
    ssh-keygen -t rsa -f ssh/gcp -C bootstrapper -b 2048 -q -N ""
    gcloud compute scp \
    econia/src/docker/database/configs/pg_hba.conf \
    econia/src/docker/database/configs/postgresql.conf \
    bootstrapper:~ \
    --ssh-key-file ssh/gcp
  3. Connect to the bootstrapper instance:

    gcloud compute ssh bootstrapper --ssh-key-file ssh/gcp
  4. Check connected disks:

    sudo lsblk
    tip

    The device name for the postgres disk will probably be sdb, and the device name for the processor disk will probably be sdc (check the disk sizes if you are unsure).
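
    If you would rather not guess, most GCE images also expose stable symlinks under /dev/disk/by-id, where the suffix after google- is the disk's device name (note that the device name may differ from the disk name if it was not set explicitly):

    ls -l /dev/disk/by-id/google-*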

  5. Store the device names in shell variables:

    POSTGRES_DISK_DEVICE_NAME=<PROBABLY_sdb>
    PROCESSOR_DISK_DEVICE_NAME=<PROBABLY_sdc>
    echo "PostgreSQL disk device name: $POSTGRES_DISK_DEVICE_NAME"
    echo "Processor disk device name: $PROCESSOR_DISK_DEVICE_NAME"
  6. Format and mount the disks with read/write permissions:

    sudo mkfs.ext4 \
    -m 0 \
    -E lazy_itable_init=0,lazy_journal_init=0,discard \
    /dev/$POSTGRES_DISK_DEVICE_NAME
    sudo mkfs.ext4 \
    -m 0 \
    -E lazy_itable_init=0,lazy_journal_init=0,discard \
    /dev/$PROCESSOR_DISK_DEVICE_NAME
    sudo mkdir -p /mnt/disks/postgres
    sudo mkdir -p /mnt/disks/processor
    sudo mount -o \
    discard,defaults \
    /dev/$POSTGRES_DISK_DEVICE_NAME \
    /mnt/disks/postgres
    sudo mount -o \
    discard,defaults \
    /dev/$PROCESSOR_DISK_DEVICE_NAME \
    /mnt/disks/processor
    sudo chmod a+w /mnt/disks/postgres
    sudo chmod a+w /mnt/disks/processor
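
    As a quick sanity check, confirm that both disks are mounted where expected:

    df -h /mnt/disks/postgres /mnt/disks/processor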
  7. Create a PostgreSQL data directory and move the config files into it:

    mkdir /mnt/disks/postgres/data
    mv pg_hba.conf /mnt/disks/postgres/data/pg_hba.conf
    mv postgresql.conf /mnt/disks/postgres/data/postgresql.conf
  8. End the connection with the bootstrapper:

    exit
  9. Detach postgres-disk from the bootstrapper:

    gcloud compute instances detach-disk bootstrapper --disk postgres-disk

Deploy database

  1. Create an administrator username and password and store them in shell variables:

    ADMIN_NAME=<YOUR_ADMIN_NAME>
    ADMIN_PASSWORD=<YOUR_ADMIN_PW>
    echo "Admin name: $ADMIN_NAME"
    echo "Admin password: $ADMIN_PASSWORD"
  2. Deploy the postgres image as a Compute Engine container with the postgres disk mounted as a data volume:

    gcloud compute instances create-with-container postgres \
    --container-env "$(printf '%s' \
    POSTGRES_USER=$ADMIN_NAME,\
    POSTGRES_PASSWORD=$ADMIN_PASSWORD\
    )" \
    --container-image \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/postgres \
    --container-mount-disk "$(printf '%s' \
    mount-path=/var/lib/postgresql,\
    name=postgres-disk\
    )" \
    --disk "$(printf '%s' \
    auto-delete=no,\
    device-name=postgres-disk,\
    name=postgres-disk\
    )"
  3. Store the instance's internal and external IP addresses, as well as your public IP address, in shell variables:

    POSTGRES_EXTERNAL_IP=$(gcloud compute instances list \
    --filter name=postgres \
    --format "value(networkInterfaces[0].accessConfigs[0].natIP)" \
    )
    POSTGRES_INTERNAL_IP=$(gcloud compute instances list \
    --filter name=postgres \
    --format "value(networkInterfaces[0].networkIP)" \
    )
    MY_IP=$(curl --silent http://checkip.amazonaws.com)
    echo "\n\nPostgreSQL external IP: $POSTGRES_EXTERNAL_IP"
    echo "PostgreSQL internal IP: $POSTGRES_INTERNAL_IP"
    echo "Your IP: $MY_IP"
  4. Promote the instance's external and internal addresses from ephemeral to static:

    gcloud compute addresses create postgres-external \
    --addresses $POSTGRES_EXTERNAL_IP \
    --region $REGION
    gcloud compute addresses create postgres-internal \
    --addresses $POSTGRES_INTERNAL_IP \
    --region $REGION \
    --subnet default
  5. Allow incoming traffic on port 5432 from your IP address:

    gcloud compute firewall-rules create pg-admin \
    --allow tcp:5432 \
    --direction INGRESS \
    --source-ranges $MY_IP
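
    To confirm the rule was created:

    gcloud compute firewall-rules describe pg-admin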
  6. Store the PostgreSQL public connection string as an environment variable:

    export DATABASE_URL="$(printf '%s' postgres://\
    $ADMIN_NAME:\
    $ADMIN_PASSWORD@\
    $POSTGRES_EXTERNAL_IP:5432/econia
    )"
    echo $DATABASE_URL
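
    If you have the psql client installed, you can also verify connectivity directly:

    psql $DATABASE_URL -c '\l'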
  7. Install diesel if you don't already have it (an install sketch follows the tip below), then check that the database has an empty schema:

    diesel print-schema
    tip

    You might not be able to connect to the database until a minute or so after you've first created the instance.
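
    If you need to install diesel, a typical route is via cargo (this assumes a Rust toolchain and PostgreSQL client libraries are available):

    cargo install diesel_cli --no-default-features --features postgres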

  8. Run the database migrations then check the schema again:

    cd econia/src/rust/dbv2
    diesel migration run
    diesel print-schema
    cd ../../../..

Deploy REST API

  1. Create a connector for your project's default Virtual Private Cloud (VPC) network:

    gcloud compute networks vpc-access connectors create \
    postgrest \
    --range 10.8.0.0/28 \
    --region $REGION
  2. Verify that the connector is ready:

    STATE=$(gcloud compute networks vpc-access connectors describe \
    postgrest \
    --region $REGION \
    --format "value(state)"
    )
    echo "Connector state is: $STATE"
  3. Construct the PostgREST connection URL to connect to the postgres instance:

    DB_URL_PRIVATE="$(printf '%s' postgres://\
    $ADMIN_NAME:\
    $ADMIN_PASSWORD@\
    $POSTGRES_INTERNAL_IP:5432/econia
    )"
    echo $DB_URL_PRIVATE
  4. Determine a max number of rows per PostgREST query:

    PGRST_DB_MAX_ROWS=<MAX_ROWS_FOR_FETCH>
    echo $PGRST_DB_MAX_ROWS
  5. Deploy PostgREST on GCP Cloud Run with public access:

    gcloud run deploy postgrest \
    --allow-unauthenticated \
    --image \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/postgrest \
    --port 3000 \
    --set-env-vars "$(printf '%s' \
    PGRST_DB_ANON_ROLE=web_anon,\
    PGRST_DB_SCHEMA=api,\
    PGRST_DB_URI=$DB_URL_PRIVATE,\
    PGRST_DB_MAX_ROWS=$PGRST_DB_MAX_ROWS\
    )" \
    --vpc-connector postgrest
  6. Store the service URL in a shell variable:

    export REST_URL=$(
    gcloud run services describe postgrest \
    --format "value(status.url)"
    )
    echo $REST_URL
  7. Verify that you can query the PostgREST API from the public URL:

    curl $REST_URL
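
    The root path serves PostgREST's OpenAPI description, so even an empty deployment should respond with JSON; to peek at the first few hundred bytes:

    curl --silent $REST_URL | head -c 400; echo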

Deploy processor

  1. Create a config at econia/src/docker/processor/config.yaml per the general DSS guidelines:

    tip

    For postgres_connection_string use the same one that the postgrest service uses:

    echo $DB_URL_PRIVATE
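
    The authoritative field list is in the general DSS guidelines; as a minimal sketch covering only the fields this guide mentions (placeholder values, not a complete config):

    postgres_connection_string: <DB_URL_PRIVATE>
    econia_address: <ECONIA_ADDRESS>
    starting_version: <STARTING_VERSION>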
  2. Upload the processor config to the bootstrapper:

    gcloud compute scp \
    econia/src/docker/processor/config.yaml \
    bootstrapper:~ \
    --ssh-key-file ssh/gcp
  3. Connect to the bootstrapper:

    gcloud compute ssh bootstrapper --ssh-key-file ssh/gcp
  4. Create a processor data directory and move the config file into it:

    mkdir /mnt/disks/processor/data
    mv config.yaml /mnt/disks/processor/data/config.yaml
  5. End the connection with the bootstrapper:

    exit
  6. Stop the bootstrapper:

    gcloud compute instances stop bootstrapper
  7. Detach processor-disk from the bootstrapper:

    gcloud compute instances detach-disk bootstrapper --disk processor-disk
  8. Deploy the processor image:

    gcloud compute instances create-with-container processor \
    --container-env HEALTHCHECK_BEFORE_START=false \
    --container-image \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/processor \
    --container-mount-disk "$(printf '%s' \
    mount-path=/config,\
    name=processor-disk\
    )" \
    --disk "$(printf '%s' \
    auto-delete=no,\
    device-name=processor-disk,\
    name=processor-disk\
    )"
  9. Give the processor a minute or so to start up, then view the container logs:

    PROCESSOR_ID=$(gcloud compute instances describe processor \
    --zone $ZONE \
    --format="value(id)"
    )
    gcloud logging read "resource.type=gce_instance AND \
    logName=projects/$PROJECT_ID/logs/cos_containers AND \
    resource.labels.instance_id=$PROCESSOR_ID" \
    --limit 5
  10. Once the processor has had enough time to sync, check some of the events from one of the REST endpoints:

    curl $REST_URL/<AN_ENDPOINT>
    tip

    For immediate results during testing (but with missed events and a corrupted database), use a testnet config with the following:

    • econia_address: 0xc0de11113b427d35ece1d8991865a941c0578b0f349acabbe9753863c24109ff
    • starting_version: 683453241

    Then try curl $REST_URL/balance_updates, since this starting version immediately precedes a series of balance update operations on testnet.

Deploy aggregator

  1. Deploy an aggregator instance using the private connection string:

    echo $DB_URL_PRIVATE
    gcloud compute instances create-with-container aggregator \
    --container-env DATABASE_URL=$DB_URL_PRIVATE \
    --container-image \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/aggregator
  2. Wait a minute or two then check logs:

    AGGREGATOR_ID=$(gcloud compute instances describe aggregator \
    --zone $ZONE \
    --format="value(id)"
    )
    gcloud logging read "resource.type=gce_instance AND \
    logName=projects/$PROJECT_ID/logs/cos_containers AND \
    resource.labels.instance_id=$AGGREGATOR_ID" \
    --limit 5
  3. Once the aggregator has had enough time to aggregate events, check some aggregated data. For example, on testnet:

    echo $REST_URL
    curl "$(printf '%s' \
    "$REST_URL/"\
    "limit_orders?"\
    "order=price.desc,"\
    "last_increase_stamp.asc&"\
    "market_id=eq.3&"\
    "side=eq.ask&"\
    "order_status=eq.closed&"\
    "limit=3"\
    )"

Deploy WebSockets API

  1. Create a connector:

    gcloud compute networks vpc-access connectors create \
    websockets \
    --range 10.64.0.0/28 \
    --region $REGION
  2. Verify connector readiness:

    STATE=$(gcloud compute networks vpc-access connectors describe \
    websockets \
    --region $REGION \
    --format "value(state)"
    )
    echo "Connector state is: $STATE"
  3. Construct WebSockets connection string:

    PGWS_DB_URI="$(printf '%s' postgres://\
    $ADMIN_NAME:\
    $ADMIN_PASSWORD@\
    $POSTGRES_INTERNAL_IP/econia
    )"
    echo $PGWS_DB_URI
  4. Deploy the websockets service:

    gcloud run deploy websockets \
    --allow-unauthenticated \
    --image \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/websockets \
    --port 3000 \
    --set-env-vars "$(printf '%s' \
    PGWS_DB_URI=$PGWS_DB_URI,\
    PGWS_JWT_SECRET=econia_0000000000000000000000000,\
    PGWS_CHECK_LISTENER_INTERVAL=1000,\
    PGWS_LISTEN_CHANNEL=econiaws\
    )" \
    --vpc-connector websockets
  5. Store service URL:

    WS_HTTPS_URL=$(
    gcloud run services describe websockets \
    --format "value(status.url)"
    )
    export WS_URL=$(echo $WS_HTTPS_URL | sed 's/https/wss/')
    echo $WS_URL
  6. Monitor events using the WebSockets listening script:

    echo $WS_URL
    echo $REST_URL
    echo $WS_CHANNEL
    cd econia/src/python/sdk
    poetry install
    poetry run event
    # To quit
    <Ctrl+C>
    cd ../../../..

Redeployment

Once you have the DSS running you might want to redeploy within the same GCP project, for example using a different chain or with new image binaries.

Whenever you redeploy, follow the steps below in order so that you do not break startup dependencies or generate corrupted data:

  1. Delete images in the existing images registry:

    echo $REGION
    echo $PROJECT_ID
    gcloud artifacts docker images delete \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/aggregator
    gcloud artifacts docker images delete \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/postgres
    gcloud artifacts docker images delete \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/postgrest
    gcloud artifacts docker images delete \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/processor
    gcloud artifacts docker images delete \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/websockets
    gcloud artifacts docker images list
    tip

    You only need to delete images that you wish to redeploy newer versions of. For images that you are sure haven't changed, you can comment them out of the build file in the next step.

  2. Rebuild images in the existing images registry.

  3. Delete postgrest and websockets services:

    gcloud run services delete postgrest --quiet
    gcloud run services delete websockets --quiet
    tip

    When these are redeployed, they will have the same endpoint URL as before.

  4. Delete aggregator and processor instances:

    gcloud compute instances delete aggregator --quiet
    gcloud compute instances delete processor --quiet
  5. Clear all container images from postgres:

    gcloud compute ssh postgres \
    --command "$(printf '%s' \
    "docker ps -aq | xargs docker stop | xargs docker rm && "\
    "docker image prune -af"\
    )" \
    --ssh-key-file ssh/gcp \
    --verbosity=debug
    tip

    You'll need to create more SSH keys if you deleted the ones you were previously using.

    note

    Unlike the aggregator and processor instances, which are deleted and then recreated, postgres has static IP addresses, so it is instead updated in place with a new container.

  6. Update postgres container and restart:

    echo $ADMIN_NAME
    echo $ADMIN_PASSWORD
    gcloud compute instances update-container postgres \
    --container-env "$(printf '%s' \
    POSTGRES_USER=$ADMIN_NAME,\
    POSTGRES_PASSWORD=$ADMIN_PASSWORD\
    )" \
    --container-image \
    $REGION-docker.pkg.dev/$PROJECT_ID/images/postgres \
    --container-mount-disk "$(printf '%s' \
    mount-path=/var/lib/postgresql,\
    name=postgres-disk\
    )"
  7. Reset database:

    POSTGRES_EXTERNAL_IP=$(gcloud compute instances list \
    --filter name=postgres \
    --format "value(networkInterfaces[0].accessConfigs[0].natIP)" \
    )
    export DATABASE_URL="$(printf '%s' postgres://\
    $ADMIN_NAME:\
    $ADMIN_PASSWORD@\
    $POSTGRES_EXTERNAL_IP:5432/econia
    )"
    echo $DATABASE_URL
    cd econia/src/rust/dbv2
    tip

    Give the instance a minute or so to start up before trying to connect.

    diesel database reset
    cd ../../../..
  8. Get the private connection string:

    POSTGRES_INTERNAL_IP=$(gcloud compute instances list \
    --filter name=postgres \
    --format "value(networkInterfaces[0].networkIP)" \
    )
    DB_URL_PRIVATE="$(printf '%s' postgres://\
    $ADMIN_NAME:\
    $ADMIN_PASSWORD@\
    $POSTGRES_INTERNAL_IP:5432/econia
    )"
    echo $DB_URL_PRIVATE
  9. Update your local processor config at econia/src/docker/processor/config.yaml with DB_URL_PRIVATE for postgres_connection_string.

  10. Start the bootstrapper:

    gcloud compute instances start bootstrapper
  11. Upload the config:

    gcloud compute scp \
    econia/src/docker/processor/config.yaml \
    bootstrapper:~ \
    --ssh-key-file ssh/gcp
    tip

    It may take a bit for the bootstrapper to start up.

  12. Attach the config disk to the bootstrapper:

    gcloud compute instances attach-disk bootstrapper --disk processor-disk
  13. Connect to the bootstrapper:

    gcloud compute ssh bootstrapper --ssh-key-file ssh/gcp
  14. Mount the disk:

    sudo lsblk
    PROCESSOR_DISK_DEVICE_NAME=<NEW_NAME>
    echo $PROCESSOR_DISK_DEVICE_NAME
    sudo mount -o \
    discard,defaults \
    /dev/$PROCESSOR_DISK_DEVICE_NAME \
    /mnt/disks/processor
    sudo chmod a+w /mnt/disks/processor
    tip

    See the bootstrapper creation steps above for a fuller walkthrough of this process.

  15. Replace the old config:

    mv config.yaml /mnt/disks/processor/data/config.yaml
    echo "New config:"
    cat /mnt/disks/processor/data/config.yaml
    echo
  16. Disconnect from the bootstrapper:

    exit
  17. Stop the bootstrapper:

    gcloud compute instances stop bootstrapper
  18. Detach the processor-disk from the bootstrapper:

    gcloud compute instances detach-disk bootstrapper --disk processor-disk
  19. Redeploy processor using the gcloud compute instances create-with-container command from initial deployment.

  20. Redeploy the aggregator.

  21. Redeploy postgrest using the gcloud run deploy command from initial deployment, after setting a max number of rows:

    PGRST_DB_MAX_ROWS=<MAX_ROWS_FOR_FETCH>
    echo $PGRST_DB_MAX_ROWS
  22. Redeploy websockets using the gcloud run deploy command from initial deployment, after reconstructing the WebSockets connection string:

    PGWS_DB_URI="$(printf '%s' postgres://\
    $ADMIN_NAME:\
    $ADMIN_PASSWORD@\
    $POSTGRES_INTERNAL_IP/econia
    )"
    echo $PGWS_DB_URI

Diagnostics

Check instance container status

  1. Connect to an instance:

    gcloud compute ssh <INSTANCE_NAME> --ssh-key-file <SSH_KEY_FILE>
  2. Check Docker status:

    docker ps
    tip

    If your container restarts every minute or so, you've got a problem.
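
    Standard Docker commands work from inside the instance, for example to tail a specific container's logs (get the container ID from docker ps):

    docker logs --tail 50 <CONTAINER_ID>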

  3. Exit instance connection:

    exit

Check instance container logs

  1. Set instance name and number of logs to pull:

    INSTANCE_NAME=<INSTANCE_NAME>
    N_LOGS=<HOW_MANY_LOGS>
    echo $PROJECT_ID
    echo $INSTANCE_NAME
    echo $N_LOGS
  2. Get instance ID:

    INSTANCE_ID=$(gcloud compute instances describe $INSTANCE_NAME \
    --zone $ZONE \
    --format="value(id)"
    )
    echo $INSTANCE_ID
  3. Pull the logs:

    gcloud logging read "resource.type=gce_instance AND \
    logName=projects/$PROJECT_ID/logs/cos_containers AND \
    resource.labels.instance_id=$INSTANCE_ID" \
    --limit $N_LOGS