Ratify on AWS
This guide explains how to get up and running with Ratify on AWS using EKS and ECR. It covers provisioning the required AWS resources, installing the necessary components, and configuring them to work together. Once everything is set up, we will walk through a simple scenario of verifying the signature on a container image at deployment time.
By the end of this guide you will have a public ECR repository, an EKS cluster with Gatekeeper and Ratify installed, and have validated that only images signed with a particular key can be deployed.
This guide assumes you are starting from scratch, but portions of the guide can be skipped if you have an existing EKS cluster or ECR repository.
Prerequisites
You will need a handful of tools installed locally to complete this guide:
- awscli: This is used to interact with AWS and provision necessary resources
- eksctl: This is used to easily provision EKS clusters
- kubectl: This is used to interact with the EKS cluster we will create
- helm: This is used to install ratify components into the EKS cluster
- docker: This is used to build the container image we will deploy in this guide
- cosign: This is used to sign the container image we will deploy in this guide
- notation: This is used to sign the container image we will deploy in this guide
- ratify: This is used to check images from ECR locally
- jq: This is used to capture variables from json returned by commands
If you have not done so already, configure awscli to interact with your AWS account by following these instructions.
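As a quick, optional sanity check, the following configures a default profile interactively and confirms which identity the CLI is using:
# Configure credentials and a default region interactively
aws configure

# Confirm the CLI can reach AWS and which identity it is using
aws sts get-caller-identity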
Set Up ECR
We need to provision a container repository to make our container images and their associated artifacts available to our EKS cluster. We will do this using awscli. To keep things simple, this guide uses a public ECR repository.
export REPO_NAME=ratifydemo
export REPO_URI=$(aws ecr-public create-repository --repository-name $REPO_NAME --region us-east-1 | jq -r ."repository"."repositoryUri" )
We will use the repository URI returned by the create command later to build and tag the images we create.
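To confirm the repository was created and see the URI we captured, the following optional checks can be run:
# Show the repository URI captured for later steps
echo $REPO_URI

# Describe the public repository that was just created
aws ecr-public describe-repositories --region us-east-1 --repository-name $REPO_NAME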
For more information on provisioning ECR repositories check the documentation.
Set Up EKS
We will need to provision a Kubernetes cluster to deploy everything on. We will do this using the eksctl command line utility. Before provisioning our EKS cluster, we need to create a key pair for the nodes:
aws ec2 create-key-pair --region us-east-1 --key-name ratifyDemo
Save the output to your local machine, then run the following to create the cluster:
eksctl create cluster \
--name ratify-demo \
--region us-east-1 \
--zones us-east-1c,us-east-1d \
--with-oidc \
--ssh-access \
--ssh-public-key ratifyDemo
This command will provision a basic EKS cluster with default settings.
Additional information on EKS deployment can be found in the EKS documentation.
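eksctl adds the new cluster to your kubeconfig automatically, so we can run a quick optional check that the nodes have joined:
# Verify the worker nodes registered with the cluster
kubectl get nodes -o wide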
Prepare Container Image
For this guide we will create a basic container image we can use to simulate deployments of a service. We will start by building the container image:
docker build -t $REPO_URI:v1 https://github.com/wabbit-networks/net-monitor.git#main
After the container image is built, we need to push it to the repository:
aws ecr-public get-login-password --region us-east-1 | docker login --username AWS --password-stdin $REPO_URI
docker push $REPO_URI:v1
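To confirm the push succeeded, we can optionally list the images now in the repository:
# List images in the public repository to confirm the push succeeded
aws ecr-public describe-images --region us-east-1 --repository-name $REPO_NAME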
Using Cosign
Once the container is built and pushed, we will use cosign to create a key and sign the container image:
cosign generate-key-pair
cosign sign --key cosign.key $REPO_URI:v1
Both the container image and the signature should now be in the public ECR repository. We can use cosign to verify the signature and image are present and valid:
docker rmi $REPO_URI:v1
cosign verify --key cosign.pub $REPO_URI:v1
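If you are curious where cosign stored the signature, the triangulate subcommand prints the registry reference of the signature artifact that now sits alongside the image:
# Print the reference where the cosign signature is stored
cosign triangulate $REPO_URI:v1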
Using Notation
We can also use notation to generate a test key and sign the container image:
notation cert generate-test --default "wabbit-networks.io"
aws ecr-public get-login-password --region us-east-1 | notation login --username AWS --password-stdin $REPO_URI
notation sign $REPO_URI:v1
Another signature should be present in the public ECR repository now. We can use notation to verify signatures associated with the image:
notation verify $REPO_URI:v1
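To see all notation signatures associated with the image, we can optionally list them:
# List the notation signatures attached to the image
notation ls $REPO_URI:v1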
Configure Ratify
Ratify
We need to ensure that Ratify is properly configured to find signature artifacts for our container image. This is done using a JSON configuration file. The Ratify configuration file for this guide is created and deployed by the helm chart, but we can look at what will be generated below:
{
"store": {
"version": "1.0.0",
"plugins": [
{
"name": "oras",
"cosignEnabled": true,
"localCachePath": "./local_oras_cache"
}
]
},
"policy": {
"version": "1.0.0",
"plugin": {
"name": "configPolicy",
"artifactVerificationPolicies": {
"application/vnd.dev.cosign.artifact.sig.v1+json": "any"
}
}
},
"verifier": {
"version": "1.0.0",
"plugins": [
{
"name": "cosign",
"artifactTypes": "application/vnd.dev.cosign.artifact.sig.v1+json",
"key": "/usr/local/ratify-certs/cosign/cosign.pub"
}
]
}
}
This configuration file does the following:
- Enables the built-in oras referrer store with cosign support, which will retrieve the necessary manifests and signature artifacts from the container registry
- Enables the cosign verifier that will validate cosign signatures on container images
The configuration file and cosign public key will be mounted into the Ratify container via the helm chart.
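Since the prerequisites include the ratify CLI, a configuration like the one above can also be exercised locally before anything is deployed. This is only a sketch: it assumes the configuration above is saved locally as config.json, with the cosign key path changed to point at the cosign.pub file in the current directory.
# Sketch only: run a local verification with the ratify CLI against the same image
ratify verify -c config.json -s $REPO_URI:v1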
Gatekeeper
The Ratify container will perform the actual validation of images and their artifacts, but Gatekeeper is used as the policy controller for Kubernetes. The helm chart for this guide has a basic Gatekeeper rego that checks for the string "false" in the results from the Ratify container.
This rego is kept simple to demonstrate the capability of Ratify. More complex combinations of regos and Ratify verifiers can be used to accomplish many types of checks. See the Gatekeeper docs for more information on rego authoring.
Deploy Ratify
We first need to install Gatekeeper into the cluster. We will use the Gatekeeper helm chart with some customizations:
helm repo add gatekeeper https://open-policy-agent.github.io/gatekeeper/charts
helm install gatekeeper/gatekeeper \
--name-template=gatekeeper \
--namespace gatekeeper-system --create-namespace \
--set enableExternalData=true \
--set validatingWebhookTimeoutSeconds=5 \
--set mutatingWebhookTimeoutSeconds=2 \
--set externaldataProviderResponseCacheTTL=10s
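Before installing Ratify, it helps to wait for Gatekeeper to finish rolling out and to make sure the Ratify chart repository is available. The deployment name below comes from the upstream Gatekeeper chart, and the chart repository URL is an assumption based on the ratify-project helm charts; adjust either if your environment differs.
# Wait for the Gatekeeper controller to become available
kubectl -n gatekeeper-system rollout status deployment/gatekeeper-controller-manager

# Add the Ratify chart repository (URL assumed; adjust if needed) and refresh the index
helm repo add ratify https://ratify-project.github.io/ratify
helm repo update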
Once Gatekeeper has been deployed into the cluster, we can deploy Ratify with the provided helm chart. For validating cosign signatures, we can deploy Ratify configured with the public key we created earlier:
helm install ratify ratify/ratify --atomic \
--namespace gatekeeper-system \
--set-file cosign.key=cosign.pub
For validating notation signatures, we can deploy Ratify configured with the notation certificate generated earlier. We can get the path to the generated certificate using the notation cert list command:
notation cert list
helm install ratify ratify/ratify --atomic \
--namespace gatekeeper-system \
--set-file notationCerts={path/to/wabbit-networks.io.crt}
After deploying Ratify, we can apply the default Gatekeeper policy and constraint:
kubectl apply -f ./library/default/template.yaml
kubectl apply -f ./library/default/samples/constraint.yaml
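To double-check that Gatekeeper accepted them, we can list the constraint templates and constraints; the resource names will match whatever the library files define:
# Confirm the constraint template and constraint were created
kubectl get constrainttemplates
kubectl get constraints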
We can then confirm all pods are running:
kubectl get po -A
We should see a ratify pod and several gatekeeper pods running.
Deploy Container Image
Now that the signed container image is in the registry and Ratify is installed into the EKS cluster, we can deploy our container image:
kubectl create ns demo
kubectl run demosigned -n demo --image $REPO_URI:v1
We should be able to see from the Ratify and Gatekeeper logs that the container signature was validated. The pod for the container should also be running.
kubectl logs deployment/ratify -n gatekeeper-system
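The pod itself can also be checked directly:
# The signed image should have been admitted and the pod should be running
kubectl get pods -n demo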
We can also test that an image without a valid signature is not able to run:
kubectl run demounsigned -n demo --image hello-world
The command should fail with an error, and the Ratify and Gatekeeper logs should show that the signature validation failed.
kubectl logs deployment/ratify -n gatekeeper-system
Other AWS Integrations
IAM Roles for Service Accounts
Ratify can be configured to use IAM credentials to authenticate requests made to AWS services, for example when pulling and validating images stored in private ECR repositories. This can be done by configuring IAM Roles for Service Accounts (IRSA). For more information, read here.
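As a rough sketch of what IRSA setup can look like with eksctl, the command below creates an IAM role bound to a Kubernetes service account. The service account name, namespace, and attached policy are illustrative placeholders and should match what the Ratify helm chart and your private repositories actually require.
# Example only: bind an IAM role to a Kubernetes service account (IRSA)
# The service account name and policy ARN below are illustrative placeholders
eksctl create iamserviceaccount \
  --cluster ratify-demo \
  --region us-east-1 \
  --namespace gatekeeper-system \
  --name ratify-admin \
  --attach-policy-arn arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryReadOnly \
  --approve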
AWS Signer
Ratify can be configured to use notation to verify signatures generated by AWS Signer. AWS Signer manages the code-signing certificate's public and private keys, and enables central management of the code-signing lifecycle. For more information, read here.
Cleaning Up
We can use awscli and eksctl to delete our ECR repository and EKS cluster:
aws ecr-public delete-repository --region us-east-1 --repository-name $REPO_NAME --force
eksctl delete cluster --region us-east-1 --name ratify-demo
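The EC2 key pair created for the EKS nodes can be removed as well:
# Remove the key pair created earlier for the nodes
aws ec2 delete-key-pair --region us-east-1 --key-name ratifyDemo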