This module can operate in two distinct modes:
- instance creation, with optional unmanaged group
- instance template creation
In both modes, an optional service account can be created and assigned to either the instances or the template. If you need a managed instance group when using the module in template mode, refer to the `compute-mig` module.
- Examples
- Variables
- Outputs
- Fixtures
The simplest example leverages defaults for the boot disk image and size, and uses a service account created by the module. Multiple instances can be managed via the `instance_count` variable.
module "simple-vm-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
}
# tftest modules=1 resources=1 inventory=defaults.yaml e2e
VM service accounts can be managed in four different ways:
- in its default configuration, the module uses the Compute default service account with a basic set of scopes (`devstorage.read_only`, `logging.write`, `monitoring.write`)
- a custom service account can be used by passing its email in the `service_account.email` variable
- a custom service account can be created by the module and used by setting the `service_account.auto_create` variable to `true`
- the instance can be created with no service account by setting the `service_account` variable to `null`
Scopes for custom service accounts are set by default to `cloud-platform` and `userinfo.email`, and can be further customized regardless of which service account is used by directly setting the `service_account.scopes` variable.
module "vm-managed-sa-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test1"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
}
# tftest inventory=sa-default.yaml e2e
module "vm-managed-sa-example2" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test2"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
service_account = {
email = module.iam-service-account.email
}
}
# tftest inventory=sa-custom.yaml fixtures=fixtures/iam-service-account.tf e2e
module "vm-managed-sa-example2" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test2"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
service_account = {
auto_create = true
}
}
# tftest inventory=sa-managed.yaml e2e
module "vm-managed-sa-example2" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test2"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
service_account = null
}
# tftest inventory=sa-none.yaml e2e
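As noted above, scopes can also be set explicitly regardless of which service account is used. A minimal sketch, with an illustrative module name and scope list (not part of the tested examples):
module "vm-scopes-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "${var.region}-b"
  name       = "test-scopes"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  service_account = {
    auto_create = true
    # override the default cloud-platform / userinfo.email scopes (values are illustrative)
    scopes = [
      "https://www.googleapis.com/auth/devstorage.read_only",
      "https://www.googleapis.com/auth/logging.write"
    ]
  }
}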
Attached disks can be created and optionally initialized from a pre-existing source, or attached to VMs when pre-existing. The `source` and `source_type` attributes of the `attached_disks` variable allow several modes of operation:
- `source_type = "image"` can be used with zonal disks in instances and templates, set `source` to the image name or self link
- `source_type = "snapshot"` can be used with instances only, set `source` to the snapshot name or self link
- `source_type = "attach"` can be used for both instances and templates to attach an existing disk, set `source` to the name (for zonal disks) or self link (for regional disks) of the existing disk to attach; no disk will be created
- `source_type = null` can be used where an empty disk is needed, `source` becomes irrelevant and can be left null
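For the empty-disk case, a minimal sketch follows (module name and size are illustrative); leaving `source` and `source_type` unset creates and attaches an empty disk:
module "vm-empty-disk-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "${var.region}-b"
  name       = "test-empty-disk"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  attached_disks = [{
    # no source/source_type set: an empty 10 GB disk is created and attached
    name = "data"
    size = 10
  }]
}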
This is an example of attaching a pre-existing regional PD to a new instance:
module "vm-disks-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
attached_disks = [{
name = "repd-1"
size = 10
source_type = "attach"
source = "regions/${var.region}/disks/repd-test-1"
options = {
replica_zone = "${var.region}-c"
}
}]
service_account = {
auto_create = true
}
}
# tftest modules=1 resources=2
And the same example for an instance template (where not using the full self link of the disk triggers recreation of the template):
module "vm-disks-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
attached_disks = [{
name = "repd"
size = 10
source_type = "attach"
source = "https://www.googleapis.com/compute/v1/projects/${var.project_id}/regions/${var.region}/disks/repd-test-1"
options = {
replica_zone = "${var.region}-c"
}
}]
service_account = {
auto_create = true
}
create_template = true
}
# tftest modules=1 resources=2
The `attached_disks` variable exposes an `options` attribute that can be used to fine tune the configuration of each disk. The following example shows a VM with multiple disks:
module "vm-disk-options-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
attached_disks = [
{
name = "data1"
size = "10"
source_type = "image"
source = "image-1"
options = {
auto_delete = false
replica_zone = "${var.region}-c"
}
},
{
name = "data2"
size = "20"
source_type = "snapshot"
source = "snapshot-2"
options = {
type = "pd-ssd"
mode = "READ_ONLY"
}
}
]
service_account = {
auto_create = true
}
}
# tftest inventory=disk-options.yaml
To create the boot disk as an independent resource instead of as part of the instance creation flow, set `boot_disk.use_independent_disk` to `true` and optionally configure `boot_disk.initialize_params`.
This will create the boot disk as its own resource and attach it to the instance, allowing the instance to be recreated from Terraform while preserving the boot disk.
module "simple-vm-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
boot_disk = {
initialize_params = {}
use_independent_disk = true
}
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
service_account = {
auto_create = true
}
}
# tftest inventory=independent-boot-disk.yaml e2e
By default VMs are created with automatically assigned IP addresses, but you can change this via the `addresses` and `nat` attributes of the `network_interfaces` variable:
module "vm-internal-ip" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "vm-internal-ip"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
addresses = { internal = "10.0.0.2" }
}]
}
module "vm-external-ip" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "vm-external-ip"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
nat = true
addresses = { external = "8.8.8.8" }
}]
}
# tftest inventory=ips.yaml
This example shows how to add additional alias IPs to your VM. `alias_ips` maps subnetwork secondary range names to IP addresses.
module "vm-with-alias-ips" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
alias_ips = {
services = "100.71.1.123/32"
}
}]
}
# tftest inventory=alias-ips.yaml e2e
This example shows how to enable gVNIC on your VM by customizing a `cos` image. Given that gVNIC needs to be enabled both in the instance configuration and in the guest OS configuration, you'll need to supply a bootable disk with `guest_os_features=GVNIC`. `SEV_CAPABLE`, `UEFI_COMPATIBLE` and `VIRTIO_SCSI_MULTIQUEUE` are enabled implicitly in the `cos`, `rhel`, `centos` and other images.
Note: most recent Google-provided images do enable `GVNIC`, and no custom image is necessary.
resource "google_compute_image" "cos-gvnic" {
project = var.project_id
name = "my-image"
source_image = "https://www.googleapis.com/compute/v1/projects/cos-cloud/global/images/cos-89-16108-534-18"
guest_os_features {
type = "GVNIC"
}
guest_os_features {
type = "SEV_CAPABLE"
}
guest_os_features {
type = "UEFI_COMPATIBLE"
}
guest_os_features {
type = "VIRTIO_SCSI_MULTIQUEUE"
}
}
module "vm-with-gvnic" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
boot_disk = {
initialize_params = {
image = google_compute_image.cos-gvnic.self_link
type = "pd-ssd"
}
}
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
nic_type = "GVNIC"
}]
service_account = {
auto_create = true
}
}
# tftest inventory=gvnic.yaml
Private Service Connect interfaces can be configured via the `network_attached_interfaces` variable, which is a simple list of network attachment ids, one per interface. PSC interfaces will be defined after regular interfaces.
# create the network attachment from a service project
module "net-attachment" {
source = "./fabric/modules/net-address"
project_id = var.project_id
network_attachments = {
svc-0 = {
subnet_self_link = module.vpc.subnet_self_links["${var.region}/ipv6-internal"]
producer_accept_lists = [var.project_id]
}
}
}
module "vm-psc-interface" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "vm-internal-ip"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
network_attached_interfaces = [
module.net-attachment.network_attachment_ids["svc-0"]
]
}
# tftest fixtures=fixtures/net-vpc-ipv6.tf e2e
You can define labels and custom metadata values. Metadata can be leveraged, for example, to define a custom startup script.
module "vm-metadata-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "nginx-server"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
labels = {
env = "dev"
system = "crm"
}
metadata = {
startup-script = <<-EOF
#! /bin/bash
apt-get update
apt-get install -y nginx
EOF
}
service_account = {
auto_create = true
}
}
# tftest inventory=metadata.yaml e2e
Like most modules, you can assign IAM roles to the instance using the `iam` variable.
module "vm-iam-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "webserver"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
iam = {
"roles/compute.instanceAdmin" = [
"group:${var.group_email}",
]
}
}
# tftest inventory=iam.yaml e2e
Spot VMs are ephemeral compute instances suitable for batch jobs and fault-tolerant workloads. Spot VMs provide new features that preemptible instances do not support, such as the absence of a maximum runtime.
module "spot-vm-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
options = {
spot = true
termination_action = "STOP"
}
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
}
# tftest inventory=spot.yaml e2e
You can enable confidential compute with the `confidential_compute` variable, which can be used for standalone instances or for instance templates.
module "vm-confidential-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "confidential-vm"
confidential_compute = true
instance_type = "n2d-standard-2"
boot_disk = {
initialize_params = {
image = "projects/debian-cloud/global/images/family/debian-12"
}
}
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
}
module "template-confidential-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "confidential-template"
confidential_compute = true
create_template = true
instance_type = "n2d-standard-2"
boot_disk = {
initialize_params = {
image = "projects/debian-cloud/global/images/family/debian-12"
}
}
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
}
# tftest inventory=confidential.yaml e2e
This example shows how to control disk encryption via the `encryption` variable, in this case the self link to a KMS CryptoKey that will be used to encrypt the boot and attached disks. Managing the key with the `../kms` module is of course possible, but is not shown here.
module "project" {
source = "./fabric/modules/project"
name = "gce"
billing_account = var.billing_account_id
prefix = var.prefix
parent = var.folder_id
services = [
"cloudkms.googleapis.com",
"compute.googleapis.com",
]
}
module "kms" {
source = "./fabric/modules/kms"
project_id = module.project.project_id
keyring = {
location = var.region
name = "${var.prefix}-keyring"
}
keys = {
"key-regional" = {
}
}
iam = {
"roles/cloudkms.cryptoKeyEncrypterDecrypter" = [
module.project.service_agents.compute.iam_email
]
}
}
module "vpc" {
source = "./fabric/modules/net-vpc"
project_id = module.project.project_id
name = "my-network"
subnets = [
{
ip_cidr_range = "10.0.0.0/24"
name = "production"
region = var.region
},
]
}
module "kms-vm-example" {
source = "./fabric/modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "kms-test"
network_interfaces = [{
network = module.vpc.self_link
subnetwork = module.vpc.subnet_self_links["${var.region}/production"]
}]
attached_disks = [{
name = "attached-disk"
size = 10
}]
service_account = {
auto_create = true
}
encryption = {
encrypt_boot = true
kms_key_self_link = module.kms.keys.key-regional.id
}
}
# tftest inventory=cmek.yaml e2e
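As noted in the variable description, the `encryption` variable alternatively accepts a raw customer-supplied key via `disk_encryption_key_raw`. A minimal sketch, passing the key through a hypothetical `var.csek_key` variable:
module "csek-vm-example" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "${var.region}-b"
  name       = "csek-test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  encryption = {
    encrypt_boot = true
    # customer-supplied encryption key, provided out of band (hypothetical variable)
    disk_encryption_key_raw = var.csek_key
  }
}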
Advanced machine features can be configured via the `options.advanced_machine_features` attribute.
module "simple-vm-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
options = {
advanced_machine_features = {
enable_nested_virtualization = true
enable_turbo_mode = true
threads_per_core = 2
}
}
}
# tftest modules=1 resources=1
This example shows how to use the module to manage an instance template that defines an additional attached disk for each instance, and overrides defaults for the boot disk image and service account.
module "cos-test" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
boot_disk = {
initialize_params = {
image = "projects/cos-cloud/global/images/family/cos-stable"
}
}
attached_disks = [
{
name = "disk-1"
size = 10
}
]
service_account = {
email = module.iam-service-account.email
}
create_template = true
}
# tftest inventory=template.yaml fixtures=fixtures/iam-service-account.tf e2e
If an instance group is needed when operating in instance mode, simply set the `group` variable to a non-null map. The map can contain named port declarations, or be empty if named ports are not needed.
locals {
cloud_config = "my cloud config"
}
module "instance-group" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "ilb-test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
boot_disk = {
initialize_params = {
image = "projects/cos-cloud/global/images/family/cos-stable"
}
}
service_account = {
email = var.service_account.email
scopes = ["https://www.googleapis.com/auth/cloud-platform"]
}
metadata = {
user-data = local.cloud_config
}
group = { named_ports = {} }
}
# tftest inventory=group.yaml e2e
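If named ports are needed, the sketch below shows the group map with illustrative declarations, assuming port names map to port numbers as in the underlying instance group resource:
module "instance-group-named-ports" {
  source     = "./fabric/modules/compute-vm"
  project_id = var.project_id
  zone       = "${var.region}-b"
  name       = "ilb-test-ports"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  group = {
    # illustrative named ports for use by load balancer backends
    named_ports = {
      http  = 80
      https = 443
    }
  }
}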
Instance start and stop schedules can be defined via an existing or auto-created resource policy. This functionality requires additional permissions for the Compute Engine service agent.
To use an existing policy, pass its id to the `instance_schedule` variable:
module "instance" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "schedule-test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
boot_disk = {
initialize_params = {
image = "projects/cos-cloud/global/images/family/cos-stable"
}
}
instance_schedule = {
resource_policy_id = "projects/${var.project_id}/regions/${var.region}/resourcePolicies/test"
}
}
# tftest inventory=instance-schedule-id.yaml
To create a new policy, set its configuration in the `instance_schedule` variable. When removing the policy, follow a two-step process: first set `active = false` in the schedule configuration, which detaches the policy, then remove the variable so the policy is destroyed.
module "project" {
source = "./fabric/modules/project"
name = var.project_id
project_reuse = {
use_data_source = false
project_attributes = {
name = var.project_id
number = var.project_number
services_enabled = ["compute.googleapis.com"]
}
}
iam_bindings_additive = {
compute-admin-service-agent = {
member = module.project.service_agents["compute"].iam_email
role = "roles/compute.instanceAdmin.v1"
}
}
}
module "instance" {
source = "./fabric/modules/compute-vm"
project_id = module.project.project_id
zone = "${var.region}-b"
name = "schedule-test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
boot_disk = {
initialize_params = {
image = "projects/cos-cloud/global/images/family/cos-stable"
}
}
instance_schedule = {
create_config = {
vm_start = "0 8 * * *"
vm_stop = "0 17 * * *"
}
}
depends_on = [module.project] # ensure that grants are complete before creating schedule / instance
}
# tftest inventory=instance-schedule-create.yaml e2e
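A minimal sketch of the first removal step described above, assuming `active` is an attribute of the schedule configuration (verify against the variable definition before applying):
module "instance" {
  source     = "./fabric/modules/compute-vm"
  project_id = module.project.project_id
  zone       = "${var.region}-b"
  name       = "schedule-test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  boot_disk = {
    initialize_params = {
      image = "projects/cos-cloud/global/images/family/cos-stable"
    }
  }
  instance_schedule = {
    create_config = {
      # step 1: detach the policy, then remove instance_schedule in a second apply
      active   = false
      vm_start = "0 8 * * *"
      vm_stop  = "0 17 * * *"
    }
  }
}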
Snapshot policies can be attached to disks with optional creation managed by the module.
module "instance" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "schedule-test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
boot_disk = {
initialize_params = {
image = "projects/cos-cloud/global/images/family/cos-stable"
}
snapshot_schedule = ["boot"]
}
attached_disks = [
{
name = "disk-1"
size = 10
snapshot_schedule = ["boot"]
}
]
snapshot_schedules = {
boot = {
schedule = {
daily = {
days_in_cycle = 1
start_time = "03:00"
}
}
}
}
}
# tftest inventory=snapshot-schedule-create.yaml e2e
Resource manager tag bindings for use in IAM or org policy conditions are supported via three different variables:
- `network_tag_bindings` associates tags to instances after creation, and is meant for use with network firewall policies (a sketch follows the examples below)
- `tag_bindings` associates tags to instances and zonal disks after creation, and is meant for use with IAM or organization policy conditions
- `tag_bindings_immutable` associates tags to instances and disks created as part of the instance, or to instance templates; the binding is applied at creation time and triggers resource recreation on change
The non-immutable variables follow our usual interface for tag bindings, and support specifying a map with arbitrary keys mapping to tag key or value ids. To prevent a provider permadiff, also pass the project number in the `project_number` variable.
The immutable variable uses a different format enforced by the Compute API, where keys need to be tag key ids and values tag value ids.
This is an example of setting non-immutable tag bindings:
module "simple-vm-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
project_number = 12345678
zone = "${var.region}-b"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
tag_bindings = {
dev = "tagValues/1234567890"
}
}
# tftest modules=1 resources=2
This example uses immutable tag bindings, and will trigger recreation if those are changed.
module "simple-vm-example" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
tag_bindings_immutable = {
"tagKeys/1234567890" = "tagValues/7890123456"
}
}
# tftest inventory=tag-bindings.yaml
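A minimal sketch of network tag bindings for use with network firewall policies, assuming the same map format as `tag_bindings` (ids are illustrative):
module "vm-network-tags-example" {
  source         = "./fabric/modules/compute-vm"
  project_id     = var.project_id
  project_number = 12345678
  zone           = "${var.region}-b"
  name           = "test"
  network_interfaces = [{
    network    = var.vpc.self_link
    subnetwork = var.subnet.self_link
  }]
  network_tag_bindings = {
    # arbitrary key mapping to a secure tag value id
    net-dev = "tagValues/1234567890"
  }
}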
You can add node affinity (and anti-affinity) configurations to allocate the VM on sole tenant nodes.
module "sole-tenancy" {
source = "./fabric/modules/compute-vm"
project_id = var.project_id
zone = "${var.region}-b"
instance_type = "n1-standard-1"
name = "test"
network_interfaces = [{
network = var.vpc.self_link
subnetwork = var.subnet.self_link
}]
options = {
node_affinities = {
workload = {
values = ["frontend"]
}
cpu = {
in = false
values = ["c3"]
}
}
}
}
# tftest inventory=sole-tenancy.yaml
name | description | type | required | default |
---|---|---|---|---|
name | Instance name. | string | ✓ | |
network_interfaces | Network interfaces configuration. Use self links for Shared VPC, set addresses to null if not needed. | list(object({…})) | ✓ | |
project_id | Project id. | string | ✓ | |
zone | Compute zone. | string | ✓ | |
attached_disk_defaults | Defaults for attached disks options. | object({…}) | | {…} |
attached_disks | Additional disks, if options is null defaults will be used in its place. Source type is one of 'image' (zonal disks in vms and template), 'snapshot' (vm), 'existing', and null. | list(object({…})) | | [] |
boot_disk | Boot disk properties. | object({…}) | | {…} |
can_ip_forward | Enable IP forwarding. | bool | | false |
confidential_compute | Enable Confidential Compute for these instances. | bool | | false |
create_template | Create instance template instead of instances. | bool | | false |
description | Description of a Compute Instance. | string | | "Managed by the compute-vm Terraform module." |
enable_display | Enable virtual display on the instances. | bool | | false |
encryption | Encryption options. Only one of kms_key_self_link and disk_encryption_key_raw may be set. If needed, you can specify whether to encrypt the boot disk. | object({…}) | | null |
gpu | GPU information. Based on https://cloud.google.com/compute/docs/gpus. | object({…}) | | null |
group | Define this variable to create an instance group for instances. Disabled for template use. | object({…}) | | null |
hostname | Instance FQDN name. | string | | null |
iam | IAM bindings in {ROLE => [MEMBERS]} format. | map(list(string)) | | {} |
instance_schedule | Assign or create and assign an instance schedule policy. Either resource policy id or create_config must be specified if not null. Set active to null to detach a policy from the VM before destroying. | object({…}) | | null |
instance_type | Instance type. | string | | "f1-micro" |
labels | Instance labels. | map(string) | | {} |
metadata | Instance metadata. | map(string) | | {} |
min_cpu_platform | Minimum CPU platform. | string | | null |
network_attached_interfaces | Network interfaces using network attachments. | list(string) | | [] |
network_tag_bindings | Resource manager tag bindings in arbitrary key => tag key or value id format. Set on the instance only for networking purposes, and modifiable without impacting the main resource lifecycle. | map(string) | | {} |
options | Instance options. | object({…}) | | {…} |
project_number | Project number. Used in tag bindings to avoid a permadiff. | string | | null |
scratch_disks | Scratch disks configuration. | object({…}) | | {…} |
service_account | Service account email and scopes. If email is null, the default Compute service account will be used unless auto_create is true, in which case a service account will be created. Set the variable to null to avoid attaching a service account. | object({…}) | | {} |
shielded_config | Shielded VM configuration of the instances. | object({…}) | | null |
snapshot_schedules | Snapshot schedule resource policies that can be attached to disks. | map(object({…})) | | {} |
tag_bindings | Resource manager tag bindings in arbitrary key => tag key or value id format. Set on both the instance and zonal disks, and modifiable without impacting the main resource lifecycle. | map(string) | | {} |
tag_bindings_immutable | Immutable resource manager tag bindings, in tagKeys/id => tagValues/id format. These are set on the instance or instance template at creation time, and trigger recreation if changed. | map(string) | | null |
tags | Instance network tags for firewall rule targets. | list(string) | | [] |
name | description | sensitive |
---|---|---|
external_ip | Instance main interface external IP addresses. | |
group | Instance group resource. | |
id | Fully qualified instance id. | |
instance | Instance resource. | ✓ |
internal_ip | Instance main interface internal IP address. | |
internal_ips | Instance interfaces internal IP addresses. | |
login_command | Command to SSH into the machine. | |
self_link | Instance self links. | |
service_account | Service account resource. | |
service_account_email | Service account email. | |
service_account_iam_email | Service account email. | |
template | Template resource. | |
template_name | Template name. |