Running on GCP
This guide will walk through running a connector on Google Cloud Platform.
To run a connector on GCP, we'll make use of the following GCP products:
Cloud SQL (Postgres)
Cloud Memorystore (Redis)
KMS
GKE
Cloud Run
This allows us to run a highly available ILP connector that is publicly accessible.
Whenever possible, the gcloud CLI will be used to provision GCP resources. Before getting started, you must have the gcloud CLI installed and run gcloud auth login to connect to your account.
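For example (the project id below is a placeholder for your own):

```bash
# Authenticate, then point gcloud at the project that will hold the
# connector's resources.
gcloud auth login
gcloud config set project <gcp-project-id>
```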
Creating Cloud SQL database
The connector uses a SQL database to store account settings and routes.
We'll create a Cloud SQL Postgres database. There are no strict requirements on which region, CPU, or memory to use; a typical connector's load on the database is light compared to most applications. One exception: the Java connector uses connection pooling and can hold many connections open, so the Cloud SQL default max_connections of 100 can be too small if you're running multiple instances of the connector.
We'll also enable a private IP address via the --network default configuration flag. This will allow us to configure the connector to connect to Cloud SQL via its private IP.
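A sketch of the create command; the instance name, region, and tier are illustrative, and private IP at creation time may require the beta track of gcloud:

```bash
gcloud beta sql instances create connector-db \
  --database-version=POSTGRES_11 \
  --tier=db-g1-small \
  --region=us-central1 \
  --network=default
```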
Once the command has completed, you should see your database in the Cloud Console. Note the private IP address, as we'll need it later on.
Setting password for the postgres user
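A sketch, assuming the instance name connector-db from above (choose your own admin password in place of the placeholder):

```bash
# Set the password for the default postgres admin user.
gcloud sql users set-password postgres \
  --instance=connector-db \
  --password=<POSTGRES_ADMIN_PASSWORD>
```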
Creating a connector database
While not mandatory, it is recommended to create a separate database for the connector instead of using the default postgres database. By default, the connector will try to connect to a database named connector. To create the connector database, run:
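A sketch, again assuming the instance name connector-db:

```bash
gcloud sql databases create connector --instance=connector-db
```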
Creating a connector user
While not mandatory, it is recommended to create a separate account for the connector instead of using the postgres admin user. By default, the connector will try to connect using the username connector. To create a connector user, run:
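A sketch; the password value you choose here is what the <DB_PASSWORD> placeholder refers to later on:

```bash
gcloud sql users create connector \
  --instance=connector-db \
  --password=<DB_PASSWORD>
```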
Creating Cloud Memorystore
The connector uses Redis for tracking balances and for pub/sub messaging between connectors (if running multiple connector instances). A typical connector will not need a large amount of Redis storage, so we'll create one with the minimum 1 GB.
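A sketch; the instance name is illustrative, and the region should match your other resources:

```bash
gcloud redis instances create connector-redis \
  --size=1 \
  --region=us-central1
```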
Once created, note the IP address on the Memorystore dashboard.
Creating KMS keys
The Java connector uses encryption keys to encrypt things like auth tokens and shared secrets for accounts, as well as other internally secured data. Java Keystore (JKS) and KMS are both supported; KMS is recommended because it is easier to manage and configure.
First we need to create a keyring to store connector keys:
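A sketch; the keyring name is illustrative:

```bash
gcloud kms keyrings create connector-keyring --location=global
```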
Now we can generate a key for the connector to use. The default key alias that the connector will use is secret0. We'll create a key with that alias:
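A sketch, assuming the keyring name from above:

```bash
gcloud kms keys create secret0 \
  --keyring=connector-keyring \
  --location=global \
  --purpose=encryption
```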
Creating GCP Service Account
In order for the connector to be able to use KMS, it will need a GCP service account with KMS encrypt/decrypt permissions.
First we create the service account:
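A sketch; the account name and display name are illustrative:

```bash
gcloud iam service-accounts create connector-sa \
  --display-name="ILP connector"
```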
Then we grant the roles/cloudkms.cryptoKeyEncrypterDecrypter role to the service account:
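A sketch, assuming the connector-sa account name from above:

```bash
gcloud projects add-iam-policy-binding <gcp-project-id> \
  --member="serviceAccount:connector-sa@<gcp-project-id>.iam.gserviceaccount.com" \
  --role="roles/cloudkms.cryptoKeyEncrypterDecrypter"
```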
Note: you must replace both instances of <gcp-project-id> in the command above with your GCP project id.
Exporting Service Account JSON in Base64
Later on when you configure your connector, you'll need to provide the GCP service account credentials as a base64 encoded string. This will be used by the connector to authenticate to GCP.
The following command will generate this value:
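A sketch, assuming the connector-sa account from above (on macOS, base64 takes no -w flag):

```bash
# Download a JSON key for the service account, then base64-encode it
# into a single line.
gcloud iam service-accounts keys create key.json \
  --iam-account=connector-sa@<gcp-project-id>.iam.gserviceaccount.com
base64 -w 0 key.json
```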
Creating Kubernetes Cluster
A Docker image is published for the Java ILPv4 connector, so the easiest way to run the connector is via Docker. For running multiple instances of a connector behind a public load balancer, Kubernetes with Cloud Run for Anthos provides a convenient setup. Note that this will not be the cheapest option, as the VM requirements for running Kubernetes are higher than for a DIY setup.
For this example, we'll run 2 instances of a connector on Kubernetes and deploy using Cloud Run. We'll size the Kubernetes cluster with 2 nodes, each using the e2-highcpu-2 (2 vCPU, 2 GB) machine type.
Big gcloud command incoming....
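A sketch; the cluster name and zone are illustrative, and the exact flags required by the Cloud Run addon have shifted across gcloud releases, so check the current docs:

```bash
gcloud beta container clusters create connector-cluster \
  --zone=us-central1-a \
  --num-nodes=2 \
  --machine-type=e2-highcpu-2 \
  --addons=HttpLoadBalancing,CloudRun \
  --enable-stackdriver-kubernetes
```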
Deploying the Java Connector via Cloud Run
To deploy the Java connector docker image, we'll use GCP's Cloud Run for Anthos. This will deploy the connector and configure networking so that it can be publicly accessible.
First we need to create a Kubernetes yaml file to define the connector and its configuration:
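The file is a Knative Service manifest. The skeleton below shows the general shape; the image reference and the environment-variable names are illustrative assumptions, so consult the connector's configuration reference for the exact properties it reads:

```yaml
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: connector
spec:
  template:
    metadata:
      annotations:
        # Pin the revision at 2 instances to match the 2-node cluster.
        autoscaling.knative.dev/minScale: "2"
        autoscaling.knative.dev/maxScale: "2"
    spec:
      containers:
        # Illustrative image reference -- use the published Java ILPv4
        # connector image and a pinned tag.
        - image: interledger4j/java-ilpv4-connector:latest
          env:
            # The variable names below are illustrative; map each
            # placeholder to the property the connector actually expects.
            - name: REDIS_HOST
              value: "<CLOUD_MEMORY_STORE_IP>"
            - name: SPRING_DATASOURCE_URL
              value: "jdbc:postgresql://<CLOUD_SQL_PRIVATE_IP>:5432/connector"
            - name: SPRING_DATASOURCE_USERNAME
              value: "connector"
            - name: SPRING_DATASOURCE_PASSWORD
              value: "<DB_PASSWORD>"
            - name: ADMIN_PASSWORD
              value: "<ADMIN_PASSWORD>"
            - name: NODE_ILP_ADDRESS
              value: "g.<YOUR_CONNECTOR_NAME>"
            - name: GCP_PROJECT_ID
              value: "<GCP_PROJECT_ID>"
            - name: GCP_CREDENTIALS_JSON_BASE64
              value: "<BASE64_ENCODED_SERVICE_ACCOUNT_JSON>"
```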
Save the file above as connector-cloudrun.yaml and replace the following placeholders:
<CLOUD_MEMORY_STORE_IP> - replace with the IP address of your Cloud Memorystore instance
<DB_PASSWORD> - replace with the password you provided when creating the connector database on your Cloud SQL instance
<CLOUD_SQL_PRIVATE_IP> - replace with the PRIVATE ip address shown for your Cloud SQL instance
<ADMIN_PASSWORD> - replace with a password of your choosing. This will be used to authenticate as an admin to the REST API on your connector. This password does not have to be the same as your db password.
<YOUR_CONNECTOR_NAME> - the name by which your connector will be known on the ILP network. This will be the sub root of your connector's ILP addresses.
<GCP_PROJECT_ID> - your GCP project id
<BASE64_ENCODED_SERVICE_ACCOUNT_JSON> - replace with the base64-encoded JSON for the service account you created above. See the Exporting Service Account JSON in Base64 section above for how to obtain this value. It should be one really long line of text.
Once you've created and edited connector-cloudrun.yaml, you'll deploy the connector using:
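A sketch, assuming the cluster name and zone used earlier:

```bash
# Point kubectl at the cluster, then apply the service definition.
gcloud container clusters get-credentials connector-cluster --zone=us-central1-a
kubectl apply -f connector-cloudrun.yaml
```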
Configuring DNS and SSL for the connector
If everything has gone well, you should now have a connector running, but in order to access it, you'll need to set up DNS. To keep things simple, we'll use the free DNS provider xip.io. This will get us up and running quickly without needing to buy a domain name and configure DNS entries.
The following commands require kubectl. If you already have kubectl installed, you can use that; otherwise you can use GCP Cloud Shell. To launch Cloud Shell, navigate to https://console.cloud.google.com/kubernetes, click the Connect button next to your cluster, then click the Run in Cloud Shell button in the popup modal. This will launch a shell terminal in your browser.
In order to configure xip.io with a DNS mapping, we need to know the external IP address of your Kubernetes cluster. Run the following command to obtain this:
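A sketch; the ingress service name and namespace depend on your Cloud Run for Anthos version (older clusters expose istio-ingressgateway in the istio-system namespace instead):

```bash
kubectl get svc istio-ingress --namespace gke-system
```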
Replace "1.2.3.4" with your External IP address in following command:
Now we will create a subdomain mapping for the connector (again replacing 1.2.3.4 with your Kubernetes external IP address):
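A sketch, assuming the Knative service is named connector and the cluster details from earlier; domain mappings on GKE may require the beta track of gcloud:

```bash
gcloud beta run domain-mappings create \
  --service=connector \
  --domain=connector.1.2.3.4.xip.io \
  --platform=gke \
  --cluster=connector-cluster \
  --cluster-location=us-central1-a
```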
Lastly we will configure auto TLS/SSL certs to be generated:
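One way to do this is Knative's auto-TLS setting, sketched below; it assumes a certificate provisioner is installed on the cluster, so consult the Cloud Run for Anthos documentation for the procedure that matches your version:

```bash
kubectl patch configmap config-network --namespace knative-serving \
  --patch '{"data": {"autoTLS": "Enabled", "httpProtocol": "Redirected"}}'
```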
The End
At this point your connector should be up and running and accessible via a URL like https://connector.1.2.3.4.xip.io/