Using Terraform

danger

This guide is not actively maintained. See the DSS CI/CD guide for the most up-to-date deployment steps.

If you have already finished the Google Cloud Platform (GCP) tutorial and are looking for a more programmatic deployment process, this guide will show you how to use Terraform to deploy the Econia DSS via declarative configurations.

This guide targets a specific deployment, the Econia testnet trading competition leaderboard backend, but you can adapt it as needed for your own use case.

Configure project

  1. Install the following (if you don't already have them):

    1. Terraform.

    2. Diesel for PostgreSQL.

    3. psql.

  2. Clone the Econia repository and navigate to the leaderboard-backend project directory:

    git clone https://github.com/econia-labs/econia.git
    cd econia
    git submodule update --init --recursive
    cd src/terraform/leaderboard-backend
  3. Configure a billable GCP project:

    PROJECT_ID=<PROJECT_ID>
    PROJECT_NAME=leaderboard-backend
    ORGANIZATION_ID=<ORGANIZATION_ID>
    BILLING_ACCOUNT_ID=<BILLING_ACCOUNT_ID>
    echo $PROJECT_ID
    echo $PROJECT_NAME
    echo $ORGANIZATION_ID
    echo $BILLING_ACCOUNT_ID
    gcloud projects create $PROJECT_ID \
    --name $PROJECT_NAME \
    --organization $ORGANIZATION_ID
    gcloud alpha billing projects link $PROJECT_ID \
    --billing-account $BILLING_ACCOUNT_ID
    gcloud config set project $PROJECT_ID
  4. Pick a database root password:

    DB_ROOT_PASSWORD=<DB_ROOT_PASSWORD>
    echo $DB_ROOT_PASSWORD
    tip

    Avoid using the special characters @, /, ., or :, which are used in connection strings.
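    As a quick sanity check (a sketch, not part of the official guide), you can verify that the password you picked avoids these reserved characters before using it:

```shell
# Reject passwords containing connection-string delimiters (@ / . :).
DB_ROOT_PASSWORD='example-password-123'
case "$DB_ROOT_PASSWORD" in
  *[@/.:]*) echo "password contains a reserved character" ;;
  *) echo "password ok" ;;
esac
```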

  5. Store your public IP address:

    MY_IP=$(curl --silent http://checkip.amazonaws.com)
    echo $MY_IP
  6. Generate keys for a service account:

    gcloud iam service-accounts create terraform
    SERVICE_ACCOUNT_NAME=terraform@$PROJECT_ID.iam.gserviceaccount.com
    echo $SERVICE_ACCOUNT_NAME
    gcloud iam service-accounts keys create gcp-key.json \
    --iam-account $SERVICE_ACCOUNT_NAME
  7. Generate SSH keys:

    rm -rf ssh
    mkdir ssh
    ssh-keygen -t rsa -f ssh/gcp -C bootstrapper -b 2048 -q -N ""
  8. Store variables in a Terraform variable file, then format and initialize the directory:

    echo "project = \"$PROJECT_ID\"" > terraform.tfvars
    echo "db_admin_public_ip = \"$MY_IP\"" >> terraform.tfvars
    echo "db_root_password = \"$DB_ROOT_PASSWORD\"" >> terraform.tfvars
    terraform fmt
    printf "\n\nContents of terraform.tfvars:\n\n"
    cat terraform.tfvars
    terraform init

Build infrastructure

  1. Update /src/docker/processor/config.yaml.

    tip

    Don't worry about postgres_connection_string; it will be handled automatically later.

  2. Create competition-metadata.json and competition-additional-exclusions.json in /src/rust/dbv2 per the README.

  3. Apply the configuration:

    terraform apply --parallelism 20
  4. View outputs:

    terraform output
  5. Set up load balancing with a custom domain, then update the DNS records for your custom domain.

    CUSTOM_DOMAIN=<MY_CUSTOM_DOMAIN>
    echo $CUSTOM_DOMAIN
    gcloud beta run integrations create \
    --parameters set-mapping=$CUSTOM_DOMAIN:postgrest \
    --type custom-domains
    gcloud beta run integrations describe custom-domains
    tip

    This streamlined process is a GCP Cloud Run beta feature that is not yet supported by Terraform; it is simpler than the generic load balancing setup process.

    If you would rather use the generic public run.app URL, remove the following line from the postgrest service in main.tf before running terraform apply, then skip this step and all remaining steps:

    ingress = "INGRESS_TRAFFIC_INTERNAL_ONLY"
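    The removal can also be scripted; this sed one-liner (a sketch, which keeps a .bak backup so you can revert) deletes the line:

```shell
# Delete the internal-only ingress line from main.tf, keeping a backup copy.
sed -i.bak '/INGRESS_TRAFFIC_INTERNAL_ONLY/d' main.tf
```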
  6. Create a security policy for the load balancer:

    gcloud compute backend-services list
    BACKEND_SERVICE=<custom-domains-x-y-postgrest-z-be>
    echo $BACKEND_SERVICE
    gcloud compute backend-services update $BACKEND_SERVICE \
    --global \
    --security-policy public-traffic

Take down infrastructure

  1. Destroy project resources:

    terraform destroy
    tip

    This might not destroy quite everything, since GCP has a Cloud SQL deletion waiting period that blocks the deletion of private service networking. This issue was supposed to be resolved as of the Google Provider 5.0.0 release for Terraform, but per https://github.com/hashicorp/terraform-provider-google/issues/16275 it appears to persist.

    If terraform destroy gets stuck on deleting the network connection, you can delete it manually in the GCP console, then run terraform destroy again.

    Or you can simply delete the project even if Terraform has not destroyed all resources.

  2. Delete GCP project:

    gcloud projects delete $PROJECT_ID

Deploy second parallel project

  1. Clear cache:

    rm *tfstate*
    rm -rf .terraform
    rm .terraform*
    tip

    If you delete the *tfstate* files, you will lose the configuration state and will only be able to modify the primary project via gcloud commands.

    If you want to be able to do more than just delete the primary project once you've started a parallel one, keep backups of your *tfstate* files.
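    One way to keep such backups (a sketch; the backup directory name is arbitrary):

```shell
# Copy Terraform state files to a sibling backup directory before clearing the cache.
mkdir -p ../tfstate-backup
cp *tfstate* ../tfstate-backup/
```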

  2. After creating a new project, use a different credentials filename and add your credentials_file to terraform.tfvars (for example credentials_file = "gcp-key-2.json").

    tip

    The repository's .gitignore excludes any file matching the pattern gcp-key-*.json.
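    For example (a sketch; the exact key filename is up to you):

```shell
# Append the second project's credentials file to the variable file, then inspect it.
echo 'credentials_file = "gcp-key-2.json"' >> terraform.tfvars
cat terraform.tfvars
```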

  3. Use the same SSH keys as the main deployment (no need to recreate).

Diagnostics

Connect to PostgreSQL

  1. Connect:

    psql $(terraform output -raw db_conn_str_admin)

Target a specific resource

  1. Apply:

    terraform apply -target <RESOURCE_NAME>
  2. Destroy:

    terraform destroy -target <RESOURCE_NAME>

Generate a dependency graph

  1. Check that you have dot:

    which dot
  2. Generate graph:

    terraform graph | dot -Tsvg > graph.svg

Check resource metadata

  1. Show state:

    terraform show
  2. List state:

    terraform state list
  3. Show state for a resource:

    terraform state show <RESOURCE_TYPE.RESOURCE_NAME>