Vault Credential Injection

This guide is adapted from the Vault on Minikube and Vault Kubernetes Sidecar guides.

Most Crossplane providers support supplying credentials from at least the following sources:

  • Kubernetes Secret
  • Environment Variable
  • Filesystem

A provider may optionally support additional credentials sources, but the common sources cover a wide variety of use cases. One specific use case that is popular among organizations that use Vault for secrets management is using a sidecar to inject credentials into the filesystem. This guide will demonstrate how to use the Vault Kubernetes Sidecar to provide credentials for provider-gcp and provider-aws.

Note: in this guide we will copy GCP credentials and AWS access keys into Vault’s KV secrets engine. This is a simple, generic approach to managing secrets with Vault, but it is not as robust as using Vault’s dedicated cloud provider secrets engines for AWS, Azure, and GCP.

Setup

Note: this guide walks through setting up Vault running in the same cluster as Crossplane. You may also choose to use an existing Vault instance that runs outside the cluster but has Kubernetes authentication enabled.

Before getting started, you must ensure that you have installed Crossplane and Vault and that they are running in your cluster.

  1. Install Crossplane

```shell
kubectl create namespace crossplane-system

helm repo add crossplane-stable https://charts.crossplane.io/stable
helm repo update

helm install crossplane --namespace crossplane-system crossplane-stable/crossplane
```
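
Before moving on, you can check that the Crossplane pods came up successfully:

```shell
kubectl get pods -n crossplane-system
```
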
  2. Install Vault Helm Chart

```shell
helm repo add hashicorp https://helm.releases.hashicorp.com
helm install vault hashicorp/vault
```

  3. Unseal Vault Instance

In order for Vault to access encrypted data from physical storage, it must be unsealed.

```shell
kubectl exec vault-0 -- vault operator init -key-shares=1 -key-threshold=1 -format=json > cluster-keys.json
VAULT_UNSEAL_KEY=$(jq -r ".unseal_keys_b64[]" cluster-keys.json)
kubectl exec vault-0 -- vault operator unseal $VAULT_UNSEAL_KEY
```
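
To confirm the unseal succeeded, check the seal status; Sealed should now report false:

```shell
kubectl exec vault-0 -- vault status
```
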
  4. Enable Kubernetes Authentication Method

In order for Vault to be able to authenticate requests based on Kubernetes service accounts, the Kubernetes authentication backend must be enabled. This requires logging in to Vault and configuring it with a service account token, API server address, and certificate. Because we are running Vault in Kubernetes, these values are already available via the container filesystem and environment variables.

```shell
jq -r ".root_token" cluster-keys.json # get root token

kubectl exec -it vault-0 -- /bin/sh
vault login # use root token from above
vault auth enable kubernetes

vault write auth/kubernetes/config \
        token_reviewer_jwt="$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
        kubernetes_host="https://$KUBERNETES_PORT_443_TCP_ADDR:443" \
        kubernetes_ca_cert=@/var/run/secrets/kubernetes.io/serviceaccount/ca.crt
```
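
While still inside the Vault container, you can verify that the configuration was written:

```shell
vault read auth/kubernetes/config
```
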
  5. Exit Vault Container

The next steps will be executed in your local environment.

```shell
exit
```

Create GCP Service Account

In order to provision infrastructure on GCP, you will need to create a service account with appropriate permissions. In this guide we will only provision a CloudSQL instance, so the service account will be bound to the cloudsql.admin role. The following steps will set up a GCP service account, grant it the permissions Crossplane needs to manage CloudSQL instances, and emit the service account credentials in a JSON file.

```shell
# replace this with your own gcp project id and the name of the service account
# that will be created.
PROJECT_ID=my-project
NEW_SA_NAME=test-service-account-name

# create service account
SA="${NEW_SA_NAME}@${PROJECT_ID}.iam.gserviceaccount.com"
gcloud iam service-accounts create $NEW_SA_NAME --project $PROJECT_ID

# enable cloud API
SERVICE="sqladmin.googleapis.com"
gcloud services enable $SERVICE --project $PROJECT_ID

# grant access to cloud API
ROLE="roles/cloudsql.admin"
gcloud projects add-iam-policy-binding --role="$ROLE" $PROJECT_ID --member "serviceAccount:$SA"

# create service account keyfile
gcloud iam service-accounts keys create creds.json --project $PROJECT_ID --iam-account $SA
```

You should now have valid service account credentials in creds.json.
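
As an optional sanity check, you can list the keys on the new service account to confirm the keyfile was created:

```shell
gcloud iam service-accounts keys list --iam-account $SA --project $PROJECT_ID
```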

Store Credentials in Vault

After setting up Vault, you will need to store your credentials in the KV secrets engine.

Note: the steps below involve copying credentials into the container filesystem before storing them in Vault. You may also choose to use Vault’s HTTP API or UI by port-forwarding the container to your local environment (kubectl port-forward vault-0 8200:8200).
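
For example, here is a minimal sketch of the HTTP API route, assuming the kv-v2 engine has already been enabled at the secret path (step 2 below) and using the root token from cluster-keys.json:

```shell
# run in a separate terminal to expose the Vault API locally
kubectl port-forward vault-0 8200:8200

# back in your local environment
export VAULT_ADDR=http://127.0.0.1:8200
export VAULT_TOKEN=$(jq -r ".root_token" cluster-keys.json)

# kv-v2 paths are prefixed with "data/" and the payload is wrapped in a "data" object
curl -sS -H "X-Vault-Token: $VAULT_TOKEN" \
  -X POST -d "{\"data\": $(cat creds.json)}" \
  "$VAULT_ADDR/v1/secret/data/provider-creds/gcp-default"
```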

  1. Copy Credentials File into Vault Container

Copy your credentials into the container filesystem so that you can store them in Vault.

```shell
kubectl cp creds.json vault-0:/tmp/creds.json
```

  2. Enable KV Secrets Engine

Secrets engines must be enabled before they can be used. Enable the kv-v2 secrets engine at the secret path.

```shell
kubectl exec -it vault-0 -- /bin/sh

vault secrets enable -path=secret kv-v2
```

  3. Store GCP Credentials in KV Engine

The path of your GCP credentials is how the secret will be referenced when injecting it into the provider-gcp controller Pod.

```shell
vault kv put secret/provider-creds/gcp-default @/tmp/creds.json
```
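
You can confirm the write by reading the secret back:

```shell
vault kv get secret/provider-creds/gcp-default
```
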
  4. Clean Up Credentials File

You no longer need the GCP credentials file in the container filesystem, so go ahead and clean it up.

```shell
rm /tmp/creds.json
```

Create AWS IAM User

In order to provision infrastructure on AWS, you will need to use an existing IAM user or create a new one with appropriate permissions. The following steps will create an AWS IAM user and give it the necessary permissions.

Note: if you have an existing IAM user with appropriate permissions, you can skip this step, but you will still need to provide values for the ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY environment variables.

```shell
# create a new IAM user
IAM_USER=test-user
aws iam create-user --user-name $IAM_USER

# grant the IAM user the necessary permissions
aws iam attach-user-policy --user-name $IAM_USER --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess

# create a new IAM access key for the user
aws iam create-access-key --user-name $IAM_USER > creds.json

# assign the access key values to environment variables
ACCESS_KEY_ID=$(jq -r .AccessKey.AccessKeyId creds.json)
AWS_SECRET_ACCESS_KEY=$(jq -r .AccessKey.SecretAccessKey creds.json)
```
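
You can optionally verify that the new access key works; note that freshly created keys can take a few seconds to become active:

```shell
AWS_ACCESS_KEY_ID=$ACCESS_KEY_ID AWS_SECRET_ACCESS_KEY=$AWS_SECRET_ACCESS_KEY \
  aws sts get-caller-identity
```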

Store Credentials in Vault

After setting up Vault, you will need to store your credentials in the KV secrets engine.

  1. Enable KV Secrets Engine

Secrets engines must be enabled before they can be used. Enable the kv-v2 secrets engine at the secret path. If you already enabled it while storing the GCP credentials above, the vault secrets enable command will fail because the path is already in use and can be skipped.

```shell
kubectl exec -it vault-0 -- env \
  ACCESS_KEY_ID=${ACCESS_KEY_ID} \
  AWS_SECRET_ACCESS_KEY=${AWS_SECRET_ACCESS_KEY} \
  /bin/sh

vault secrets enable -path=secret kv-v2
```

  2. Store AWS Credentials in KV Engine

The path of your AWS credentials is how the secret will be referenced when injecting it into the provider-aws controller Pod.

```shell
vault kv put secret/provider-creds/aws-default access_key="$ACCESS_KEY_ID" secret_key="$AWS_SECRET_ACCESS_KEY"
```
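
As with the GCP credentials, you can read the secret back to confirm the write:

```shell
vault kv get secret/provider-creds/aws-default
```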

Create a Vault Policy for Reading Provider Credentials

In order for the provider controllers to have the Vault sidecar inject credentials into their filesystem, you must associate the Pod with a policy. This policy allows reading and listing all secrets on the provider-creds path in the kv-v2 secrets engine.

```shell
vault policy write provider-creds - <<EOF
path "secret/data/provider-creds/*" {
    capabilities = ["read", "list"]
}
EOF
```
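
You can verify the policy contents with:

```shell
vault policy read provider-creds
```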

Create a Role for Crossplane Provider Pods

  1. Create Role

The last step is to create a role that is bound to the policy you created and associate it with a group of Kubernetes service accounts. This role can be assumed by any (*) service account in the crossplane-system namespace.

```shell
vault write auth/kubernetes/role/crossplane-providers \
        bound_service_account_names="*" \
        bound_service_account_namespaces=crossplane-system \
        policies=provider-creds \
        ttl=24h
```
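
To confirm the role was created with the expected bindings:

```shell
vault read auth/kubernetes/role/crossplane-providers
```
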
  2. Exit Vault Container

The next steps will be executed in your local environment.

```shell
exit
```

Install provider-gcp

You are now ready to install provider-gcp. Crossplane provides a ControllerConfig type that allows you to customize the deployment of a provider’s controller Pod. A ControllerConfig can be created and referenced by any number of Provider objects that wish to use its configuration. In the example below, the Pod annotations indicate to the Vault mutating webhook that we want the secret stored at secret/provider-creds/gcp-default to be injected into the container filesystem by assuming the role crossplane-providers. There is also some template formatting added to make sure the secret data is presented in a form that provider-gcp is expecting.

{% raw %}

 1echo "apiVersion: pkg.crossplane.io/v1alpha1
 2kind: ControllerConfig
 3metadata:
 4  name: vault-config
 5spec:
 6  metadata:
 7    annotations:
 8      vault.hashicorp.com/agent-inject: \"true\"
 9      vault.hashicorp.com/role: "crossplane-providers"
10      vault.hashicorp.com/agent-inject-secret-creds.txt: "secret/provider-creds/gcp-default"
11      vault.hashicorp.com/agent-inject-template-creds.txt: |
12        {{- with secret \"secret/provider-creds/gcp-default\" -}}
13         {{ .Data.data | toJSON }}
14        {{- end -}}
15---
16apiVersion: pkg.crossplane.io/v1
17kind: Provider
18metadata:
19  name: provider-gcp
20spec:
21  package: xpkg.upbound.io/crossplane-contrib/provider-gcp:v0.22.0
22  controllerConfigRef:
23    name: vault-config" | kubectl apply -f -

{% endraw %}
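
Before configuring the provider, you can wait for the package to be installed and healthy; both the INSTALLED and HEALTHY columns should eventually report True:

```shell
kubectl get provider.pkg.crossplane.io provider-gcp
```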

Configure provider-gcp

Once provider-gcp is installed and running, you will want to create a ProviderConfig that specifies the credentials in the filesystem that should be used to provision managed resources that reference this ProviderConfig. Because the name of this ProviderConfig is default, it will be used by any managed resources that do not explicitly reference a ProviderConfig.

Note: make sure that the PROJECT_ID environment variable that was defined earlier is still set correctly.

 1echo "apiVersion: gcp.crossplane.io/v1beta1
 2kind: ProviderConfig
 3metadata:
 4  name: default
 5spec:
 6  projectID: ${PROJECT_ID}
 7  credentials:
 8    source: Filesystem
 9    fs:
10      path: /vault/secrets/creds.txt" | kubectl apply -f -

To verify that the GCP credentials are being injected into the container, run the following commands:

```shell
PROVIDER_CONTROLLER_POD=$(kubectl -n crossplane-system get pod -l pkg.crossplane.io/provider=provider-gcp -o name --no-headers=true)
kubectl -n crossplane-system exec -it $PROVIDER_CONTROLLER_POD -c provider-gcp -- cat /vault/secrets/creds.txt
```

Provision Infrastructure

The final step is to actually provision a CloudSQLInstance. Creating the object below will result in the creation of a Cloud SQL Postgres database on GCP.

 1echo "apiVersion: database.gcp.crossplane.io/v1beta1
 2kind: CloudSQLInstance
 3metadata:
 4  name: postgres-vault-demo
 5spec:
 6  forProvider:
 7    databaseVersion: POSTGRES_12
 8    region: us-central1
 9    settings:
10      tier: db-custom-1-3840
11      dataDiskType: PD_SSD
12      dataDiskSizeGb: 10
13  writeConnectionSecretToRef:
14    namespace: crossplane-system
15    name: cloudsqlpostgresql-conn" | kubectl apply -f -

You can monitor the progress of the database provisioning with the following command:

```shell
kubectl get cloudsqlinstance -w
```
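
Once the instance reports ready, the connection details are written to the secret referenced in the manifest above. The exact keys depend on the provider version, but you can inspect them with:

```shell
kubectl -n crossplane-system describe secret cloudsqlpostgresql-conn
```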

Install provider-aws

You are now ready to install provider-aws. Crossplane provides a ControllerConfig type that allows you to customize the deployment of a provider’s controller Pod. A ControllerConfig can be created and referenced by any number of Provider objects that wish to use its configuration. In the example below, the Pod annotations indicate to the Vault mutating webhook that we want the secret stored at secret/provider-creds/aws-default to be injected into the container filesystem by assuming the role crossplane-providers. There is also some template formatting added to make sure the secret data is presented in a form that provider-aws is expecting.

{% raw %}

 1echo "apiVersion: pkg.crossplane.io/v1alpha1
 2kind: ControllerConfig
 3metadata:
 4  name: aws-vault-config
 5spec:
 6  args:
 7    - --debug
 8  metadata:
 9    annotations:
10      vault.hashicorp.com/agent-inject: \"true\"
11      vault.hashicorp.com/role: \"crossplane-providers\"
12      vault.hashicorp.com/agent-inject-secret-creds.txt: \"secret/provider-creds/aws-default\"
13      vault.hashicorp.com/agent-inject-template-creds.txt: |
14        {{- with secret \"secret/provider-creds/aws-default\" -}}
15          [default]
16          aws_access_key_id="{{ .Data.data.access_key }}"
17          aws_secret_access_key="{{ .Data.data.secret_key }}"
18        {{- end -}}
19---
20apiVersion: pkg.crossplane.io/v1
21kind: Provider
22metadata:
23  name: provider-aws
24spec:
25  package: xpkg.upbound.io/crossplane-contrib/provider-aws:v0.33.0
26  controllerConfigRef:
27    name: aws-vault-config" | kubectl apply -f -

{% endraw %}

Configure provider-aws

Once provider-aws is installed and running, you will want to create a ProviderConfig that specifies the credentials in the filesystem that should be used to provision managed resources that reference this ProviderConfig. Because the name of this ProviderConfig is default it will be used by any managed resources that do not explicitly reference a ProviderConfig.

1echo "apiVersion: aws.crossplane.io/v1beta1
2kind: ProviderConfig
3metadata:
4  name: default
5spec:
6  credentials:
7    source: Filesystem
8    fs:
9      path: /vault/secrets/creds.txt" | kubectl apply -f -

To verify that the AWS credentials are being injected into the container, run the following commands:

```shell
PROVIDER_CONTROLLER_POD=$(kubectl -n crossplane-system get pod -l pkg.crossplane.io/provider=provider-aws -o name --no-headers=true)
kubectl -n crossplane-system exec -it $PROVIDER_CONTROLLER_POD -c provider-aws -- cat /vault/secrets/creds.txt
```

Provision Infrastructure

The final step is to actually provision a Bucket. Creating the object below will result in the creation of an S3 bucket on AWS.

 1echo "apiVersion: s3.aws.crossplane.io/v1beta1
 2kind: Bucket
 3metadata:
 4  name: s3-vault-demo
 5spec:
 6  forProvider:
 7    acl: private
 8    locationConstraint: us-east-1
 9    publicAccessBlockConfiguration:
10      blockPublicPolicy: true
11    tagging:
12      tagSet:
13        - key: Name
14          value: s3-vault-demo
15  providerConfigRef:
16    name: default" | kubectl apply -f -

You can monitor the progress of the bucket provisioning with the following command:

```shell
kubectl get bucket -w
```