
Cloud SQL for Postgres

Runway supports connecting your workloads to unmanaged Cloud SQL for Postgres instances. This lets you connect a workload to an existing database in a different project, provided the following requirements are met.

Requirements

  1. Cloud SQL instance must have a private IP

    We use PSC (Private Service Connect) to connect to the database, so it must have a private IP.

    We recommend disabling public IP on the Cloud SQL instance unless your particular use case for leaving it enabled has been reviewed/approved.

  2. Must use the Cloud SQL proxy to connect to your database

    Runway will add a Cloud SQL Proxy sidecar container to your workloads so that you can connect to your database. The Cloud SQL Proxy is configured to talk to your database over the PSC connection.

  3. Must use gcloud CLI version >= 416.0.0

    Support for managing PSC with a Cloud SQL instance is available for gcloud CLI versions 416.0.0 and later.

Enable PSC

You need to use the gcloud CLI to enable PSC on your database with a command similar to:

Terminal window
gcloud beta --project <DB Project ID> sql instances patch <DB Instance ID> \
--enable-private-service-connect \
--allowed-psc-projects=<Service Project ID>
  • <DB Project ID> = project ID where your Cloud SQL instance lives
  • <DB Instance ID> = Cloud SQL instance ID
  • <Service Project ID> = project ID where your code runs that needs to talk to the database (e.g., gitlab-runway-staging)

Obtain PSC details

Once PSC has been enabled, you need to obtain the following details about the PSC connection:

  1. DNS Name

    Terminal window
    gcloud beta --project <DB Project ID> \
    sql instances describe <DB Instance ID> \
    --format="value(dnsName)"

    This should return a .sql.goog. address, which will be used in the provisioner’s inventory.yml.

  2. Service Attachment Link

    Terminal window
    gcloud beta --project <DB Project ID> \
    sql instances describe <DB Instance ID> \
    --format="value(pscServiceAttachmentLink)"

    This should return a string that starts with projects/; it will be used in the provisioner’s inventory.yml.

  3. Connection Name

    Terminal window
    gcloud beta --project <DB Project ID> \
    sql instances describe <DB Instance ID> \
    --format="value(connectionName)"

    This should return a string in the format <project ID>:<region>:<DB instance ID>, which will be used in your runway.yml file.

Provisioner

File an MR to update the unmanaged_cloudsql_instances section of the provisioner’s inventory.yml with a config stanza like this:

inventory.yml
unmanaged_cloudsql_instances:
  <region>:
    <env>:
      - name: <name>
        psc_service_attachment_link: projects/...
        psc_dns_name: XXXXXXX.XXXXXX.<region>.sql.goog.

See the previous section for instructions on how to obtain the PSC details for DNS Name and Service Attachment Link.

Runtimes

Cloud Run

Grant role to SA

In the project where your Cloud SQL instance lives, you need to grant the Cloud SQL Client role to your workload’s service account. This gives the Cloud SQL Proxy sidecar the necessary access to your DB instance.

Your workload’s service account will have the following format:

crun-<Workload ID>@<Runway Project ID>.iam.gserviceaccount.com

If you are not sure what your workload ID is, see this section.

For example:

crun-gsgl-dev-jobs-bqtr6x@gitlab-runway-staging.iam.gserviceaccount.com

Using the gcloud CLI, you could grant access using the following command:

Terminal window
gcloud projects add-iam-policy-binding <DB Project ID> \
--member=serviceAccount:crun-<Runway workload ID>@<Runway Project ID>.iam.gserviceaccount.com \
--role=roles/cloudsql.client \
--condition="expression=resource.name == 'projects/<DB Project ID>/instances/<DB Instance ID>',title=access_specific_db"

Terraform example:

resource "google_project_iam_member" "runway-workload-cloudsql-client" {
project = var.project
role = "roles/cloudsql.client"
member = "serviceAccount:crun-<Runway workload ID>@<Runway Project ID>.iam.gserviceaccount.com"
condition {
title = "Cloud SQL client access for Runway workload"
expression = "resource.name == \"projects/${var.project}/instances/<DB instance ID>\""
}
}

Update your runway.yml

You now need to add the following to your workload’s runway.yml file:

runway.yml
apiVersion: runway/v1
kind: RunwayService # or RunwayJob
metadata:
  <omitted for brevity>
spec:
  ...
  cloud_providers:
    gcp:
      cloudsql_instances:
        - instance_connection_name: ...
          psc_enabled: true

For instance_connection_name, see Connection Name.

Once you merge and deploy this change, your workload should have everything in place to allow your code to talk to your Cloud SQL instance.

Connecting to Cloud SQL

Runway assigns two ports for each instance listed in spec.cloud_providers.gcp.cloudsql_instances in your runway.yml:

  • Cloud SQL Proxy - Proxy Port
  • Cloud SQL Proxy - Admin Port

The proxy port starts at 5000 and the admin port starts at 5010:

  • The first instance will use ports 5000 (proxy) and 5010 (admin)
  • The second instance will use ports 5001 (proxy) and 5011 (admin)

For this example, let’s assume you only have a single Cloud SQL instance. Your code will need to use the following details to connect:

  • Host: localhost
  • Port: 5000
  • Username and/or password accessed as environment variables (see secrets management)
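
As a concrete illustration, here is a minimal sketch of a Python workload connecting through the proxy with psycopg2. The DB_USER, DB_PASSWORD, and DB_NAME environment variable names are hypothetical; populate them through your own secrets management.

import os

import psycopg2  # assumes psycopg2 (or psycopg2-binary) is a workload dependency

# Connect through the Cloud SQL Proxy sidecar listening on localhost:5000.
# DB_USER, DB_PASSWORD, and DB_NAME are hypothetical names; wire them up
# via your workload's secrets management.
conn = psycopg2.connect(
    host="localhost",
    port=5000,
    user=os.environ["DB_USER"],
    password=os.environ["DB_PASSWORD"],
    dbname=os.environ["DB_NAME"],
)

with conn, conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])

conn.close()
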
Using Cloud SQL from a RunwayJob

Problem

Jobs differ from a RunwayService in that when a job is triggered, the container is expected to run until it has finished its work and then exit. Because we have a sidecar container running the Cloud SQL Proxy process as a daemon, the sidecar never exits on its own, so the RunwayJob keeps running until it eventually times out and the job is marked as failed.

Solution

You need to tell the Cloud SQL Proxy to exit once your code has finished running. This is done by sending an HTTP GET request to the Cloud SQL Proxy admin port:

http://localhost:5010/quitquitquit

In this example, 5010 refers to the first Cloud SQL instance, so adjust accordingly if you have more than one instance defined.
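
As a sketch of the pattern, assuming a Python job and using only the standard library, you can send the request in a finally block so the proxy is stopped even if the job fails. The function and variable names here are illustrative, not part of Runway.

import urllib.request

PROXY_ADMIN_PORT = 5010  # first instance's admin port; adjust if you define more instances

def shutdown_cloudsql_proxy(admin_port: int = PROXY_ADMIN_PORT) -> None:
    # Ask the Cloud SQL Proxy sidecar to exit so the RunwayJob can finish.
    # urlopen issues a GET request by default.
    with urllib.request.urlopen(f"http://localhost:{admin_port}/quitquitquit") as response:
        response.read()

def run_job() -> None:
    # Hypothetical placeholder for your job's real work, e.g. the database
    # access shown in the connection example above.
    pass

if __name__ == "__main__":
    try:
        run_job()
    finally:
        # Always tell the proxy to quit, even if run_job() raised, so the
        # job container can exit instead of timing out.
        shutdown_cloudsql_proxy()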