# Examples

These examples demonstrate practical, real-world usage patterns for the ec2-instance module. Each example is self-contained and ready to run: copy the configuration, customize the values for your environment, and apply.
## Getting Started

To run any example, follow these steps:

1. Authenticate with the registry: `terraform login registry.patterneddesigns.ca`
2. Initialize the working directory: `terraform init`
3. Review the execution plan: `terraform plan`
4. Apply the configuration: `terraform apply`
## Usage Examples

### Minimal EC2 instance configuration

```hcl
module "web_server" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  instance_name = "web-server"
  instance_type = "t3.micro"
  ami_id        = "ami-0abcdef1234567890"
  subnet_id     = module.vpc.private_subnets[0]
}
```
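Several of the examples below reference `data.aws_ami.amazon_linux` without defining it. A minimal sketch of that data source, assuming you want the latest Amazon Linux 2 image (the name filter is illustrative; adjust it for your OS and architecture):

```hcl
# Hypothetical AMI lookup used by the examples below
data "aws_ami" "amazon_linux" {
  most_recent = true
  owners      = ["amazon"]

  filter {
    name   = "name"
    values = ["amzn2-ami-hvm-*-x86_64-gp2"]
  }
}
```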
### EC2 instance with additional EBS volumes attached

Deploy an EC2 instance with additional EBS volumes for persistent storage.

```hcl
module "database_server" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  instance_name = "database-server"
  instance_type = "r6i.large"
  ami_id        = data.aws_ami.amazon_linux.id
  subnet_id     = module.vpc.private_subnets[0]
  ebs_optimized = true

  root_block_device = {
    volume_size = 50
    volume_type = "gp3"
    encrypted   = true
  }

  ebs_block_devices = [
    {
      device_name = "/dev/sdf"
      volume_size = 100
      volume_type = "gp3"
      iops        = 3000
      throughput  = 125
      encrypted   = true
    },
    {
      device_name = "/dev/sdg"
      volume_size = 500
      volume_type = "st1"
      encrypted   = true
    }
  ]

  tags = {
    Environment = "production"
    Service     = "database"
  }
}
```
#### Volume Types

| Type | Use Case | Performance |
|---|---|---|
| gp3 | General purpose SSD | 3,000-16,000 IOPS |
| io2 | High-performance databases | Up to 64,000 IOPS |
| st1 | Throughput-optimized HDD | 500 MB/s max throughput |
| sc1 | Cold storage HDD | 250 MB/s max throughput |
#### Notes

- EBS volumes are encrypted at rest when `encrypted = true`
- Set `ebs_optimized = true` for dedicated EBS throughput
- Additional volumes must be formatted and mounted inside the instance
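The mounting step can be handled in user data. A hedged sketch for the first additional volume, assuming an XFS filesystem (the device and mount-point names are illustrative; on Nitro instances `/dev/sdf` typically appears as `/dev/nvme1n1`, so verify with `lsblk` first):

```hcl
user_data = <<-EOF
  #!/bin/bash
  # Format and mount the additional volume (verify the device with lsblk)
  mkfs -t xfs /dev/nvme1n1
  mkdir -p /data
  mount /dev/nvme1n1 /data
  # Persist the mount across reboots
  echo "/dev/nvme1n1 /data xfs defaults,nofail 0 2" >> /etc/fstab
EOF
```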
### EC2 instance with user data script for initialization

Configure an EC2 instance with a user data script to automate setup at launch.

```hcl
module "app_server" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  instance_name      = "app-server"
  instance_type      = "t3.medium"
  ami_id             = data.aws_ami.amazon_linux.id
  subnet_id          = module.vpc.private_subnets[0]
  security_group_ids = [aws_security_group.app.id]

  user_data = <<-EOF
    #!/bin/bash
    set -e

    # Update system packages
    yum update -y

    # Install Docker
    amazon-linux-extras install docker -y
    systemctl start docker
    systemctl enable docker
    usermod -aG docker ec2-user

    # Install CloudWatch agent
    yum install -y amazon-cloudwatch-agent

    # Pull and run application container
    docker pull myregistry.example.com/myapp:latest
    docker run -d -p 8080:8080 myregistry.example.com/myapp:latest

    # Signal completion (for use with cfn-signal or similar)
    echo "User data script completed successfully"
  EOF

  tags = {
    Environment = "production"
    Application = "myapp"
  }
}
```
#### Using a Template File

For more complex scripts, use `templatefile()`:

```hcl
module "templated_server" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  instance_name = "templated-server"
  instance_type = "t3.medium"
  ami_id        = data.aws_ami.amazon_linux.id
  subnet_id     = module.vpc.private_subnets[0]

  user_data = templatefile("${path.module}/scripts/init.sh", {
    environment   = "production"
    app_version   = "1.2.3"
    config_bucket = aws_s3_bucket.config.id
  })
}
```
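A hypothetical `scripts/init.sh` to pair with the call above. The `${environment}`, `${app_version}`, and `${config_bucket}` placeholders are interpolated by `templatefile()` at plan time; literal shell variables inside a template must be escaped as `$${VAR}`. The S3 object key and config path are illustrative:

```shell
#!/bin/bash
# Hypothetical init template rendered by templatefile()
echo "Deploying version ${app_version} to ${environment}"
aws s3 cp "s3://${config_bucket}/app-config.json" /etc/myapp/config.json
```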
#### Notes

- User data scripts run as root on first boot only
- Scripts are limited to 16 KB (use S3 for larger scripts)
- Check `/var/log/cloud-init-output.log` for debugging
- Use `base64encode()` for binary user data
### EC2 spot instance configuration for cost savings

Deploy EC2 spot instances for significant cost savings on fault-tolerant workloads.

```hcl
module "spot_worker" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  instance_name = "batch-worker"
  instance_type = "c6i.xlarge"
  ami_id        = data.aws_ami.amazon_linux.id
  subnet_id     = module.vpc.private_subnets[0]

  spot_price                          = "0.10"
  spot_type                           = "persistent"
  spot_instance_interruption_behavior = "stop"

  user_data = <<-EOF
    #!/bin/bash
    # Configure instance to handle spot interruption
    yum install -y aws-cli jq

    # Start monitoring for spot interruption
    # Note: this polls IMDSv1; on IMDSv2-only instances, fetch a session
    # token first and pass it via the X-aws-ec2-metadata-token header.
    cat > /usr/local/bin/spot-monitor.sh << 'SCRIPT'
    #!/bin/bash
    while true; do
      if curl -s -o /dev/null -w "%{http_code}" http://169.254.169.254/latest/meta-data/spot/termination-time | grep -q 200; then
        echo "Spot instance termination notice received"
        # Graceful shutdown logic here
        systemctl stop myapp
        exit 0
      fi
      sleep 5
    done
    SCRIPT
    chmod +x /usr/local/bin/spot-monitor.sh
    /usr/local/bin/spot-monitor.sh &
  EOF

  tags = {
    Environment = "development"
    Workload    = "batch-processing"
  }
}
```
#### Spot Instance Types

| Type | Behavior |
|---|---|
| one-time | Instance terminates when interrupted |
| persistent | Instance restarts when capacity becomes available |

#### Interruption Behaviors

| Behavior | Description |
|---|---|
| terminate | Instance is terminated |
| stop | Instance is stopped (persistent only) |
| hibernate | Instance hibernates (if supported) |
#### Cost Comparison

Spot instances typically offer 60-90% savings compared to on-demand pricing.

#### Notes

- Use for fault-tolerant, flexible workloads only
- Implement graceful shutdown handling
- Consider Spot Fleet for mixed instance types
- Monitor `/latest/meta-data/spot/termination-time` for the 2-minute warning
### EC2 instance with auto-recovery enabled for high availability

Configure EC2 instances with auto-recovery to automatically recover from underlying hardware failures.

```hcl
module "critical_server" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  instance_name        = "critical-app"
  instance_type        = "t3.large"
  ami_id               = data.aws_ami.amazon_linux.id
  subnet_id            = module.vpc.private_subnets[0]
  security_group_ids   = [aws_security_group.app.id]
  enable_auto_recovery = true
  monitoring           = true

  tags = {
    Environment = "production"
    Criticality = "high"
  }
}

# CloudWatch alarm for auto-recovery
resource "aws_cloudwatch_metric_alarm" "auto_recovery" {
  alarm_name          = "ec2-auto-recovery-${module.critical_server.instance_id}"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 2
  metric_name         = "StatusCheckFailed_System"
  namespace           = "AWS/EC2"
  period              = 60
  statistic           = "Maximum"
  threshold           = 1

  dimensions = {
    InstanceId = module.critical_server.instance_id
  }

  alarm_actions = [
    "arn:aws:automate:${data.aws_region.current.name}:ec2:recover"
  ]

  alarm_description = "Auto-recover EC2 instance on system status check failure"

  tags = {
    Environment = "production"
  }
}
```
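The recover-action ARN above interpolates `data.aws_region.current`, which the example does not declare. It only requires an empty data block alongside the module:

```hcl
# Region lookup used to build the arn:aws:automate:<region>:ec2:recover ARN
data "aws_region" "current" {}
```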
#### With Instance Recovery and SNS Notification

```hcl
module "monitored_server" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  instance_name        = "monitored-app"
  instance_type        = "m6i.large"
  ami_id               = data.aws_ami.amazon_linux.id
  subnet_id            = module.vpc.private_subnets[0]
  enable_auto_recovery = true
  monitoring           = true

  tags = {
    Environment = "production"
  }
}

resource "aws_cloudwatch_metric_alarm" "recovery_with_notification" {
  alarm_name          = "ec2-recovery-${module.monitored_server.instance_id}"
  comparison_operator = "GreaterThanOrEqualToThreshold"
  evaluation_periods  = 2
  metric_name         = "StatusCheckFailed_System"
  namespace           = "AWS/EC2"
  period              = 60
  statistic           = "Maximum"
  threshold           = 1

  dimensions = {
    InstanceId = module.monitored_server.instance_id
  }

  alarm_actions = [
    "arn:aws:automate:${data.aws_region.current.name}:ec2:recover",
    aws_sns_topic.alerts.arn
  ]

  alarm_description = "Auto-recover and notify on system failure"
}
```
#### Requirements

- Instance must use EBS-backed storage (not instance store)
- Instance must be in a VPC
- Detailed monitoring recommended for faster detection

#### Notes

- Auto-recovery preserves instance ID, private IP, and EBS volumes
- Recovery migrates the instance to new hardware
- Not supported on bare metal instances or instances with instance store volumes
### Environment-based EC2 instance configuration

Deploy EC2 instances with environment-specific configurations using variable maps.

```hcl
locals {
  environment = "production" # or "staging", "development"

  instance_configs = {
    production = {
      instance_type = "m6i.xlarge"
      volume_size   = 100
      monitoring    = true
      multi_az      = true
    }
    staging = {
      instance_type = "t3.large"
      volume_size   = 50
      monitoring    = true
      multi_az      = false
    }
    development = {
      instance_type = "t3.medium"
      volume_size   = 30
      monitoring    = false
      multi_az      = false
    }
  }

  config = local.instance_configs[local.environment]
}

module "app_server" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  instance_name = "app-${local.environment}"
  instance_type = local.config.instance_type
  ami_id        = data.aws_ami.amazon_linux.id
  subnet_id     = module.vpc.private_subnets[0]
  monitoring    = local.config.monitoring

  root_block_device = {
    volume_size = local.config.volume_size
    volume_type = "gp3"
    encrypted   = true
  }

  tags = {
    Environment = local.environment
    ManagedBy   = "terraform"
  }
}
```
#### Using Terraform Workspaces

```hcl
locals {
  environment = terraform.workspace

  instance_types = {
    default     = "t3.micro"
    development = "t3.medium"
    staging     = "t3.large"
    production  = "m6i.xlarge"
  }
}

module "workspace_server" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  instance_name = "app-${local.environment}"
  instance_type = lookup(local.instance_types, local.environment, local.instance_types.default)
  ami_id        = data.aws_ami.amazon_linux.id
  subnet_id     = module.vpc.private_subnets[0]

  tags = {
    Environment = local.environment
    Workspace   = terraform.workspace
  }
}
```
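The workspace-driven lookup assumes a workspace has been selected on the CLI before planning; `terraform.workspace` evaluates to `default` otherwise:

```shell
terraform workspace new staging      # create (and switch to) a workspace
terraform workspace select staging   # switch to an existing workspace
terraform workspace show             # print the active workspace name
```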
#### Multiple Instances Per Environment

```hcl
variable "environment" {
  type    = string
  default = "production"
}

variable "instance_count" {
  type = map(number)
  default = {
    production  = 3
    staging     = 2
    development = 1
  }
}

module "app_servers" {
  source  = "registry.patterneddesigns.ca/patterneddesigns/ec2-instance/aws"
  version = "1.5.0"

  for_each = toset([for i in range(var.instance_count[var.environment]) : tostring(i)])

  instance_name = "app-${var.environment}-${each.key}"
  # local.config comes from the environment map defined earlier in this section
  instance_type = local.config.instance_type
  ami_id        = data.aws_ami.amazon_linux.id

  # Spread instances across the available private subnets round-robin
  subnet_id = module.vpc.private_subnets[tonumber(each.key) % length(module.vpc.private_subnets)]

  tags = {
    Environment = var.environment
    Index       = each.key
  }
}
```
#### Notes

- Use consistent naming conventions across environments
- Consider using separate AWS accounts for production isolation
- Store environment-specific values in tfvars files
- Use remote state with workspace-aware backends
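The tfvars recommendation can be sketched as a per-environment file, applied with `terraform apply -var-file=production.tfvars` (the file name and values are illustrative):

```hcl
# production.tfvars -- hypothetical environment-specific values
environment = "production"

instance_count = {
  production  = 3
  staging     = 2
  development = 1
}
```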