Running on GCP
This guide will walk through running a connector on Google Cloud Platform.
To run a connector on GCP, we'll make use of the following GCP products:
  • Cloud SQL (Postgres)
  • Cloud Memorystore (Redis)
  • KMS
  • GKE
  • Cloud Run
This allows us to run a highly available ILP Connector that is publicly accessible.
Whenever possible, the gcloud CLI will be used to provision GCP resources. Before getting started, you must have the gcloud CLI installed and have run gcloud auth login to connect to your account.

Creating Cloud SQL database

The connector uses a SQL database to store account settings and routes.
We'll create a Cloud SQL Postgres database. There are no strict requirements on which region, CPU, or memory to use. A typical connector's load on the database is light compared to most applications. One exception is that the Java connector uses connection pooling and can therefore keep many connections open. Cloud SQL's default max_connections of 100 can be too small if you run multiple instances of the connector.
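As a rough, hypothetical sizing check (Spring Boot's default HikariCP pool is 10 connections per instance; your actual pool size may differ):

```shell
# Back-of-the-envelope connection count (hypothetical numbers):
# each connector instance keeps a pool of connections open, so total
# demand scales with the number of instances.
INSTANCES=2       # connector instances you plan to run
POOL_SIZE=10      # HikariCP's default maximumPoolSize
echo "$(( INSTANCES * POOL_SIZE )) pooled connections (plus headroom for admin tools)"
```

Bumping max_connections well above this estimate leaves room for replication, monitoring, and ad hoc psql sessions.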
We'll also enable a private IP address via the --network default flag. This will allow us to configure the connector to reach Cloud SQL over a private IP.
gcloud beta sql instances create connector --cpu=1 --memory=4096MiB \
  --region=us-west1 --database-version=POSTGRES_11 --network default \
  --database-flags max_connections=500
Once the command has completed, you should see your database in Cloud console. Note the private IP address as we'll need that later on.

Setting password for the postgres user

gcloud sql users set-password postgres -i connector --password=<password>

Creating a connector database

While not mandatory, it is recommended to create a separate database for the connector instead of using the default postgres database. By default, the connector will try to connect to a database named connector. To create it, run:
gcloud sql databases create connector -i connector

Creating a connector user

While not mandatory, it is recommended to create a separate account for the connector instead of using the postgres admin user. By default, the connector will try to connect using the username connector. To create a connector user, run:
gcloud sql users create connector -i connector --password <password>

Creating Cloud Memory Store

The connector uses Redis for tracking balances and for pub/sub messaging between connectors (if running multiple connector instances). A typical connector does not need much Redis storage, so we'll create an instance with the minimum 1 GB.
gcloud redis instances create connector --size=1 --region=us-west1 --redis-version=redis_4_0
Once created, note the IP address shown on the Memorystore dashboard, as we'll need it later on.

Creating KMS keys

The Java connector uses encryption keys to encrypt things like auth tokens and per-account shared secrets, as well as other internally secured data. Both Java Keystore (JKS) and KMS are supported. KMS is recommended because it is easier to manage and configure.
First we need to create a keyring to store connector keys:
gcloud kms keyrings create connector --location global
Now we can generate a key for the connector to use. The default key alias that the connector will use is secret0. We'll create a key with that alias:
gcloud kms keys create secret0 --location global --keyring connector \
  --purpose encryption

Creating GCP Service Account

In order for the connector to be able to use KMS, it will need a GCP service account with KMS encrypt/decrypt permissions.
First we create the service account:
gcloud iam service-accounts create connector --display-name connector
Then we grant the roles/cloudkms.cryptoKeyEncrypterDecrypter role to the service account:
gcloud projects add-iam-policy-binding <gcp-project-id> \
  --member serviceAccount:connector@<gcp-project-id>.iam.gserviceaccount.com \
  --role roles/cloudkms.cryptoKeyEncrypterDecrypter
Note: you must replace both instances of <gcp-project-id> in the command above with your GCP project id.

Exporting Service Account JSON in Base64

Later on when you configure your connector, you'll need to provide the GCP service account credentials as a base64 encoded string. This will be used by the connector to authenticate to GCP.
The following command will generate this value (replace <gcp-project-id> with your GCP project id):

gcloud iam service-accounts keys create /dev/stdout \
  --iam-account connector@<gcp-project-id>.iam.gserviceaccount.com \
  --no-user-output-enabled | base64 && echo
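To sanity-check the value, keep in mind that base64 is a reversible encoding of the key's JSON. A minimal round-trip sketch (using a stand-in JSON string, not a real key):

```shell
# Encode a stand-in JSON string the same way the command above encodes the
# service-account key; tr strips newlines so the result is one long line.
SAMPLE='{"type":"service_account","project_id":"demo"}'
ENCODED=$(printf '%s' "$SAMPLE" | base64 | tr -d '\n')
echo "$ENCODED"
# Decoding restores the original JSON exactly:
printf '%s' "$ENCODED" | base64 -d
```

If decoding your real value does not produce valid service-account JSON, the connector will not be able to authenticate to GCP.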

Creating Kubernetes Cluster

A Docker image is published for the Java ILPv4 connector, so the easiest way to run the connector is via Docker. For running multiple instances of a connector behind a public load balancer, Kubernetes with Cloud Run for Anthos provides a convenient setup. Note that this will not be the cheapest option, as the VM requirements for running Kubernetes are higher than for a DIY setup.
For this example, we'll run 2 instances of a connector on Kubernetes and deploy using Cloud Run. We'll size the Kubernetes cluster with 2 nodes, each node using the e2-highcpu-4 machine type (4 vCPUs, 4 GB memory).
Big gcloud command incoming....
gcloud beta container clusters create <connector-name> --zone "us-west1-a" \
  --no-enable-basic-auth --cluster-version "1.13.12-gke.25" \
  --machine-type "e2-highcpu-4" --image-type "COS" --disk-type "pd-standard" \
  --disk-size "10" --scopes "https://www.googleapis.com/auth/devstorage.read_only","https://www.googleapis.com/auth/logging.write","https://www.googleapis.com/auth/monitoring","https://www.googleapis.com/auth/servicecontrol","https://www.googleapis.com/auth/service.management.readonly","https://www.googleapis.com/auth/trace.append" \
  --num-nodes "2" --enable-stackdriver-kubernetes --enable-ip-alias \
  --network default --subnetwork default \
  --addons HorizontalPodAutoscaling,HttpLoadBalancing,CloudRun \
  --enable-autoupgrade --enable-autorepair

Deploying the Java Connector via Cloud Run

To deploy the Java connector docker image, we'll use GCP's Cloud Run for Anthos. This will deploy the connector and configure networking so that it can be publicly accessible.
First we need to create a Kubernetes YAML file that defines the connector and its configuration:
connector-cloudrun.yaml
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  name: connector
  namespace: default
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/maxScale: '2'
        autoscaling.knative.dev/minScale: '2'
        run.googleapis.com/client-name: cloud-console
    spec:
      containerConcurrency: 80
      containers:
      - env:
        - name: spring_profiles_active
          value: migrate,jks,postgres,gcp-kms
        - name: redis_host
          value: <CLOUD_MEMORY_STORE_IP>
        - name: spring_datasource_url
          value: jdbc:postgresql://<CLOUD_SQL_PRIVATE_IP>:5432/connector
        - name: spring_datasource_username
          value: connector
        - name: spring_datasource_password
          value: <DB_PASSWORD>
        - name: interledger_connector_adminPassword
          value: <ADMIN_PASSWORD>
        - name: interledger_connector_nodeIlpAddress
          value: test.<YOUR_CONNECTOR_NAME>
        - name: interledger_connector_globalPrefix
          value: test
        - name: interledger_connector_enabledFeatures_require32ByteSharedSecrets
          value: 'false'
        - name: spring_cloud_gcp_project_id
          value: <GCP_PROJECT_ID>
        - name: _JAVA_OPTIONS
          value: -Xmx512m
        - name: spring_cloud_gcp_credentials_encoded_key
          value: <BASE64_ENCODED_SERVICE_ACCOUNT_JSON>
        image: docker.io/interledger4j/java-ilpv4-connector:0.2.0
        name: user-container
        ports:
        - containerPort: 8080
        readinessProbe:
          successThreshold: 1
        resources:
          limits:
            cpu: 768m
            memory: 768Mi
      timeoutSeconds: 300
  traffic:
  - latestRevision: true
    percent: 100
Save the file above as connector-cloudrun.yaml and replace the following placeholders:
  • <CLOUD_MEMORY_STORE_IP> - replace with the IP address of your Cloud Memorystore instance
  • <DB_PASSWORD> - replace with the password you provided when creating the connector database on your Cloud SQL instance
  • <CLOUD_SQL_PRIVATE_IP> - replace with the PRIVATE ip address shown for your Cloud SQL instance
  • <ADMIN_PASSWORD> - replace with a password of your choosing. This will be used to authenticate as an admin to the REST API on your connector. This password does not have to be the same as your db password.
  • <YOUR_CONNECTOR_NAME> - the name by which your connector will be known on the ILP network. Combined with the global prefix, it forms the root of your connector's ILP address (test.<YOUR_CONNECTOR_NAME>).
  • <GCP_PROJECT_ID> - your GCP project id
  • <BASE64_ENCODED_SERVICE_ACCOUNT_JSON> - replace with the base64-encoded JSON for the service account you created above. See the Exporting Service Account JSON in Base64 section above for how to obtain this value. It should be one very long line of text.
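If you prefer not to edit the file by hand, the placeholders can be substituted with sed. A minimal sketch using a single template line and a made-up private IP (10.0.0.3 is hypothetical; use your own values):

```shell
# One line from connector-cloudrun.yaml containing a placeholder:
printf 'value: jdbc:postgresql://<CLOUD_SQL_PRIVATE_IP>:5432/connector\n' > template.txt
# Substitute the placeholder (10.0.0.3 stands in for your private IP):
sed 's/<CLOUD_SQL_PRIVATE_IP>/10.0.0.3/' template.txt
```

The same pattern (one sed expression per placeholder, chained with -e) works across the whole file.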
Once you've created and edited connector-cloudrun.yaml, you'll deploy the connector using:
gcloud beta run services replace connector-cloudrun.yaml --platform gke \
  --cluster-location us-west1-a --cluster connector

Configuring DNS and SSL for the connector

If everything has gone well, you should now have a connector running, but in order to access it you'll need to set up DNS. To keep things simple, we'll use the free DNS provider xip.io. This gets us up and running quickly without needing to buy a domain name or configure DNS entries.
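The trick behind xip.io is that the hostname itself encodes the IP: any name of the form <anything>.<IP>.xip.io resolves to <IP>. A small sketch extracting the embedded IP from such a hostname (1.2.3.4 is a placeholder, as elsewhere in this guide):

```shell
HOST="connector.1.2.3.4.xip.io"
# The four dotted numeric labels before ".xip.io" are the target IP:
echo "$HOST" | sed -E 's/.*\.([0-9]+\.[0-9]+\.[0-9]+\.[0-9]+)\.xip\.io$/\1/'
```

This is why no DNS records need to be created: the mapping is computed from the name itself.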
The following commands require kubectl. If you already have kubectl installed and configured for your cluster, you can use that; otherwise you can use GCP Cloud Shell. To launch Cloud Shell, navigate to https://console.cloud.google.com/kubernetes and click the Connect button, then click Run in Cloud Shell in the popup modal. This launches a shell terminal in your browser.
In order to configure xip.io with a DNS mapping, we need to know the external IP address of your Kubernetes cluster. Run the following command to obtain it:
kubectl get service -n gke-system istio-ingress
Replace "1.2.3.4" with your External IP address in following command:
1
kubectl -n knative-serving patch configmap config-domain \
2
--patch '{"data": {"example.com": null, "1.2.3.4.xip.io": ""}}'
Copied!
Now we will create a subdomain mapping for the connector (again replacing 1.2.3.4 with your Kubernetes external IP address):

gcloud beta run domain-mappings create --service connector --platform gke \
  --cluster connector --cluster-location us-west1-a \
  --domain connector.1.2.3.4.xip.io
Lastly we will configure auto TLS/SSL certs to be generated:
kubectl patch cm config-domainmapping -n knative-serving \
  -p '{"data":{"autoTLS":"Enabled"}}'

The End

At this point your connector should be up and running and accessible via a URL like https://connector.1.2.3.4.xip.io/