Using Containers Directly on GCE
Summary
GCP, and in particular its Compute Engine (GCE) component, has a nice feature that lets you run a container directly on a VM, without spinning up a GKE cluster or provisioning the machine yourself. It leverages the Container-Optimized OS image maintained by Google.
I'm a big fan of Terraform (and of most HashiCorp products), but sadly it does not support this feature out of the box. Let's see how to make it happen in a simple manner.
How does this work?
A container-only instance is just a regular instance; the specifics live in its metadata. Let's take a look at what that looks like.
First, let's launch an instance:
gcloud compute instances create-with-container test-vm \
  --container-image gcr.io/cloud-marketplace/google/nginx1:latest \
  --zone=zzz --network=yyy --subnet=xxx \
  --container-env="foo=bar,spam=baz"
Then let's look at its metadata:
gcloud compute instances describe test-vm --format=json | jq
And in the output:
"metadata": {
"items": [
{
"key": "google-logging-enabled",
"value": "true"
},
{
"key": "gce-container-declaration",
"value": "spec:\n containers:\n - name: test-vm\n image: 'gcr.io/cloud-marketplace/google/nginx1:latest'\n env:\n - name: foo\n value: bar\n - name: spam\n value: baz\n stdin: false\n tty: false\n restartPolicy: Always\n\n# This container declaration format is not public API and may change without notice. Please\n# use gcloud command-line tool or Google Cloud Console to run Containers on Google Compute Engine."
}
],
"kind": "compute#metadata"
},
Note the embedded warning, but also that the value is simply a YAML container spec.
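Decoded (with the escaped newlines expanded and the indentation restored for readability), that value is nothing more than:

spec:
  containers:
    - name: test-vm
      image: 'gcr.io/cloud-marketplace/google/nginx1:latest'
      env:
        - name: foo
          value: bar
        - name: spam
          value: baz
      stdin: false
      tty: false
  restartPolicy: Always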
The Terraform version
After seeing this, all we need to do is reproduce what the gcloud command did.
# Container-Optimized OS, maintained by Google
data "google_compute_image" "cos" {
  family  = "cos-stable"
  project = "cos-cloud"
}

data "google_compute_default_service_account" "default" {}

resource "google_compute_instance" "container_vm" {
  # underscores are not allowed in instance names, only lowercase letters, digits and hyphens
  name         = "container-vm"
  machine_type = "f1-micro"

  allow_stopping_for_update = true

  network_interface {
    # snip
  }

  boot_disk {
    initialize_params {
      image = data.google_compute_image.cos.self_link
    }
  }

  service_account {
    email = data.google_compute_default_service_account.default.email
    scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write",
      "https://www.googleapis.com/auth/monitoring.write",
      "https://www.googleapis.com/auth/pubsub",
      "https://www.googleapis.com/auth/service.management.readonly",
      "https://www.googleapis.com/auth/servicecontrol",
      "https://www.googleapis.com/auth/trace.append",
    ]
  }

  metadata = {
    # This container declaration format is not a public API and may change without notice.
    # Use the gcloud command-line tool or the Google Cloud Console to create a new instance
    # and dump its metadata if this ever breaks.
    gce-container-declaration = <<-EOT
      spec:
        containers:
          - image: your_image_here:latest
            name: containervm
            securityContext:
              privileged: false
            env:
              - name: foo
                value: bar
              - name: meh
                value: spam
            stdin: false
            tty: false
            volumeMounts: []
        restartPolicy: Always
        volumes: []
    EOT

    google-logging-enabled = "true"
  }
}
And that's it! You now have a hassle-free container running on its own dedicated VM.
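To double-check, run terraform apply and then inspect the metadata the same way as before (the commands below assume the instance name container-vm used in the snippet above):

terraform apply
gcloud compute instances describe container-vm --format=json | jq '.metadata'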
If you get the declaration wrong, the instance will still be created but the container won't launch; the gce-container-declaration field will appear in the metadata either way.
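When that happens, a quick way to investigate is to log onto the VM itself. This is a sketch assuming the instance is named container-vm, lives in the placeholder zone zzz used earlier, and is reachable over SSH; the konlet-startup unit name is what I have seen on recent COS images and may differ:

# SSH into the instance
gcloud compute ssh container-vm --zone=zzz

# COS ships Docker, so check whether the container was started at all
docker ps -a

# The declaration is applied by an agent called konlet; on the COS images I've
# used, its output ends up in the konlet-startup systemd unit (assumption)
sudo journalctl -u konlet-startup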