
Redis

You can deploy a Memorystore Redis instance with Runway to a single region or multiple regions.

Single-region Redis instance

Single region Redis diagram

This configuration is ideal when you want to share the data on the Redis instance across your workload containers, and either your workload is deployed to the same region as the Redis instance or you can tolerate the network latency penalty between your multi-region workload and Redis.

Multi-region Redis instance

Multi region Redis diagram

In this configuration, your workload must be deployed to the same regions as the linked Redis instance. Each Redis instance is isolated so no data is shared between Redis instances.

Security

Firewall rules

Runway deploys VPC firewall rules to restrict network traffic between workloads and Redis instances. This means that workload A cannot talk to the Redis instance linked to workload B.

No in-transit encryption

Since traffic between workloads and Redis is private (VPC + VPC peering) and VPC firewall rules limit cross-workload Redis traffic, Runway disables in-transit encryption: it would add relatively little value while degrading the user experience and adding maintenance overhead for certificate management.

Authentication

Clients must authenticate in order to connect. The auth string is provided as an environment variable to your workloads.

Using Memorystore Redis

Add Redis instance

In the provisioner repository, add your new Redis instance to the redis_instances key in inventory.yml. For example:

inventory.yml
redis_instances:
  - <omitted for brevity>
  - name: your-redis
    identifier: CACHE
    regions:
      - us-east1
      - us-west1
    instance_type: CACHE
    provider: GCP
    engine: REDIS
    tier: BASIC
    memory_size_gb: 1

See the schema documentation for more information on all the available options.

Connect workload(s) to the Redis instance

To connect your workload(s) to the Redis instance, reference the Redis instance name under the redis_instances key for your workload(s). For example:

inventory.yml
inventory:
  - name: your-workload
    project_id: 123456
    regions:
      - us-east1
      - us-west1
    redis_instances:
      - your-redis # must match the Redis instance name defined in the previous step

Regions

Important points to consider when configuring the regions:

  1. If your workload(s) is/are deployed to multiple regions, ensure that the regions for your workload(s) match the regions for the Redis instance.

  2. Ensure the configured regions are defined in the networks section of the file. Failure to do so will result in an inventory validation error when you file an MR, due to the missing networking configuration for the region(s).

Enable VPC access (Cloud Run)

The above steps deploy the Redis instance, configure the VPC firewall rules, and automatically set the secrets so that your workload(s) have the endpoint, auth string, etc. defined on the next deploy. No further action is required to set or access secrets. However, the Cloud Run workload is not yet wired up to send internal traffic via the VPC, so this is the last piece of the puzzle.

You need to edit the runway.yml file for your workload(s) and set the following:

runway.yml
apiVersion: runway/v1
kind: RunwayService
metadata:
  <omitted for brevity>
spec:
  <omitted for brevity>
  regions:
    - us-east1
    - us-west1
  vpc_access:
    enabled: true

This will configure your workload to have a network interface on the runway-<region> subnetwork within the runway-<environment> VPC.

Connecting to Redis

Once the above change is deployed to your workload(s), you should be ready to go!

Your workload(s) should now have access to several useful environment variables including:

  • RUNWAY_REDIS_HOST_<identifier>
  • RUNWAY_REDIS_PORT_<identifier>
  • RUNWAY_REDIS_PASSWORD_<identifier>

In the example above, the identifier is CACHE, so you should be able to connect to Redis at ${RUNWAY_REDIS_HOST_CACHE}:${RUNWAY_REDIS_PORT_CACHE}, authenticating with the password ${RUNWAY_REDIS_PASSWORD_CACHE}.
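
For example, a workload could read these variables and connect with any standard Redis client. The sketch below is a minimal illustration using the redis-py library with the CACHE identifier from the example above; the choice of client library is an assumption, not a Runway requirement.

import os

import redis  # assumption: the redis-py client; any Redis client that supports AUTH works

# Connection details injected by Runway for the CACHE identifier.
cache = redis.Redis(
    host=os.environ["RUNWAY_REDIS_HOST_CACHE"],
    port=int(os.environ["RUNWAY_REDIS_PORT_CACHE"]),
    password=os.environ["RUNWAY_REDIS_PASSWORD_CACHE"],
)

cache.ping()                    # fails if the workload cannot reach or authenticate to Redis
cache.set("greeting", "hello")  # simple round trip to verify read/write access
print(cache.get("greeting"))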

Limitations

  1. Only Memorystore for Redis standalone is supported. Other variants (cluster, Valkey, Memcached, etc.) are not supported.
  2. Persistence is not supported (see non-goals in the blueprint).

Observability

To create a Runbook service for the Redis instance for monitoring, alerting and capacity planning:

  1. Add an entry into the service-catalog.yml file (like this one).
  2. Create a service definition file under the metrics-catalog/services folder. You can refer to the metrics-catalog service definition of the runway-redis-example.
  3. Run make generate to generate Mimir recording rules, alerts, and dashboards.
  4. File an MR with your changes to the runbooks repository.

Feedback

We welcome your feedback!

Tell us what works well, what could be improved, what features are missing for your use case(s), etc. Thank you!