Demonstrations
These step-by-step demonstrations walk you through complete workflows using the cloudwatch-logs module. Each demonstration includes prerequisites, detailed instructions, and verification steps.
Getting Started
To follow any demonstration:
- Ensure prerequisites are met: Terraform >= 1.0 and the AWS CLI configured
- Authenticate with the registry: terraform login registry.patterneddesigns.ca
- Clone the demonstration repository: git clone <demo-repo-url>
- Follow the step-by-step instructions below
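Each demonstration is then applied with the standard Terraform workflow from the cloned directory:
terraform init    # downloads the cloudwatch-logs module from the registry
terraform plan    # preview the resources the demonstration creates
terraform apply   # create the resources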
Step-by-Step Guides
Use CloudWatch Logs Insights to analyze and query log data
Prerequisites
- AWS account with appropriate permissions
- Terraform >= 1.0
- Log group with log data
Step 1: Create the Log Group
module "app_logs" {
source = "registry.patterneddesigns.ca/essentials/cloudwatch-logs/aws"
version = "1.3.0"
log_group_name = "/app/demo"
retention_in_days = 30
}
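After terraform apply, you can confirm the log group exists with the expected retention setting:
aws logs describe-log-groups --log-group-name-prefix "/app/demo"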
Step 2: Create Saved Queries
resource "aws_cloudwatch_query_definition" "recent_errors" {
name = "Recent Errors"
log_group_names = [module.app_logs.log_group_name]
query_string = <<-EOT
fields @timestamp, @message, @logStream
| filter @message like /ERROR/
| sort @timestamp desc
| limit 100
EOT
}
resource "aws_cloudwatch_query_definition" "request_latency" {
name = "Request Latency Stats"
log_group_names = [module.app_logs.log_group_name]
query_string = <<-EOT
fields @timestamp, @message
| parse @message /latency=(?<latency>\d+)ms/
| stats avg(latency) as avg_latency,
max(latency) as max_latency,
min(latency) as min_latency,
pct(latency, 95) as p95_latency
by bin(5m)
EOT
}
resource "aws_cloudwatch_query_definition" "error_breakdown" {
name = "Error Type Breakdown"
log_group_names = [module.app_logs.log_group_name]
query_string = <<-EOT
fields @timestamp, @message
| filter @message like /ERROR/
| parse @message /ERROR: (?<error_type>\w+)/
| stats count(*) as count by error_type
| sort count desc
EOT
}
Step 3: Query via the AWS CLI
Run queries from the command line:
# Start a query over the last hour of data
aws logs start-query \
  --log-group-name "/app/demo" \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, @message | filter @message like /ERROR/ | limit 20'

# Get query results (use the query-id returned by the previous command)
aws logs get-query-results --query-id "your-query-id"
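Because start-query is asynchronous, results may not be ready immediately. A small polling loop is convenient; this is a sketch, and note that date -d is GNU syntax (on macOS/BSD, use date -v-1H +%s):
# Capture the query ID, then poll until the query status is Complete
QUERY_ID=$(aws logs start-query \
  --log-group-name "/app/demo" \
  --start-time $(date -d '1 hour ago' +%s) \
  --end-time $(date +%s) \
  --query-string 'fields @timestamp, @message | filter @message like /ERROR/ | limit 20' \
  --output text --query 'queryId')

until [ "$(aws logs get-query-results --query-id "$QUERY_ID" --output text --query 'status')" = "Complete" ]; do
  sleep 2
done

aws logs get-query-results --query-id "$QUERY_ID"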
Step 4: Create a Dashboard Widget
resource "aws_cloudwatch_dashboard" "main" {
dashboard_name = "application-logs"
dashboard_body = jsonencode({
widgets = [
{
type = "log"
x = 0
y = 0
width = 24
height = 6
properties = {
query = "SOURCE '${module.app_logs.log_group_name}' | fields @timestamp, @message | filter @message like /ERROR/ | sort @timestamp desc | limit 50"
region = data.aws_region.current.name
title = "Recent Errors"
}
}
]
})
}
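After apply, the dashboard body can be retrieved to confirm the widget definition:
aws cloudwatch get-dashboard --dashboard-name "application-logs"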
Stream logs from multiple accounts to a central logging account
Prerequisites
- AWS accounts with appropriate permissions
- Terraform >= 1.0
- Cross-account IAM roles configured
Step 1: Create Destination in Central Account
In your central logging account:
# Central logging account
module "central_logs" {
  source  = "registry.patterneddesigns.ca/essentials/cloudwatch-logs/aws"
  version = "1.3.0"

  log_group_name    = "/central/aggregated"
  retention_in_days = 365
  kms_key_arn       = module.logging_kms.key_arn
}

# The destination forwards incoming log events to the Kinesis stream created
# in Step 3; the role must allow CloudWatch Logs to write to that stream.
resource "aws_cloudwatch_log_destination" "central" {
  name       = "central-log-destination"
  role_arn   = aws_iam_role.logs_destination.arn
  target_arn = aws_kinesis_stream.logs.arn
}
resource "aws_cloudwatch_log_destination_policy" "central" {
destination_name = aws_cloudwatch_log_destination.central.name
access_policy = jsonencode({
Version = "2012-10-17"
Statement = [
{
Effect = "Allow"
Principal = {
AWS = var.source_account_ids
}
Action = "logs:PutSubscriptionFilter"
Resource = aws_cloudwatch_log_destination.central.arn
}
]
})
}
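After applying in the central account, you can confirm the destination exists and carries the expected policy:
aws logs describe-destinations --destination-name-prefix "central-log-destination"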
Step 2: Create Log Group in Source Account
In each source account:
# Source account
module "app_logs" {
  source  = "registry.patterneddesigns.ca/essentials/cloudwatch-logs/aws"
  version = "1.3.0"

  log_group_name    = "/app/my-service"
  retention_in_days = 30
}

resource "aws_cloudwatch_log_subscription_filter" "central" {
  name            = "stream-to-central"
  log_group_name  = module.app_logs.log_group_name
  filter_pattern  = ""
  destination_arn = "arn:aws:logs:${var.region}:${var.central_account_id}:destination:central-log-destination"
  distribution    = "ByLogStream"
}
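To confirm the subscription was accepted, list the filters on the source log group:
aws logs describe-subscription-filters --log-group-name "/app/my-service"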
Step 3: Process Logs with Kinesis
# Central account - Kinesis stream and Lambda processor
resource "aws_kinesis_stream" "logs" {
  name             = "central-logs"
  shard_count      = 2
  retention_period = 24
}

# Subscription records arrive base64-encoded and gzip-compressed; the handler
# is expected to decompress them and forward events to the central log group.
resource "aws_lambda_function" "log_processor" {
  function_name = "log-processor"
  runtime       = "python3.12"
  handler       = "main.handler"
  # ... additional configuration
}

resource "aws_lambda_event_source_mapping" "kinesis" {
  event_source_arn  = aws_kinesis_stream.logs.arn
  function_name     = aws_lambda_function.log_processor.arn
  starting_position = "LATEST"
}
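Once applied, you can check that the stream is active and the Lambda mapping is enabled:
aws kinesis describe-stream-summary --stream-name "central-logs"
aws lambda list-event-source-mappings --function-name "log-processor"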
Step 4: Verify Streaming
Check that logs are flowing to the central account. This assumes the Lambda processor writes the decompressed events into /central/aggregated:
# In source account - create a test stream and send a test event
aws logs create-log-stream \
  --log-group-name "/app/my-service" \
  --log-stream-name "test-stream"

aws logs put-log-events \
  --log-group-name "/app/my-service" \
  --log-stream-name "test-stream" \
  --log-events timestamp=$(date +%s000),message="Test log message"

# In central account - verify log arrival (quoted pattern matches the exact phrase)
aws logs filter-log-events \
  --log-group-name "/central/aggregated" \
  --filter-pattern '"Test log message"'