Container Logging
Architecture
Collect logs from containerized workloads:
- ECS Fargate tasks using awslogs driver
- ECS EC2 instances with CloudWatch agent
- EKS pods with Fluent Bit DaemonSet (see the Helm sketch after this list)
- App Runner services with automatic log delivery
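For the EKS path, one option is installing Fluent Bit as a DaemonSet through the aws-for-fluent-bit Helm chart. The sketch below is a minimal illustration, not part of this pattern's module: it assumes the hashicorp/helm 2.x provider `set` block syntax and the chart's `cloudWatchLogs.*` values, so confirm both against the chart and provider versions you pin.

# Hedged sketch: Fluent Bit as a DaemonSet on EKS via the aws-for-fluent-bit chart.
# Chart value names (cloudWatchLogs.*) are assumptions -- check the chart's values.yaml.
resource "helm_release" "aws_for_fluent_bit" {
  name             = "aws-for-fluent-bit"
  namespace        = "logging"
  create_namespace = true
  repository       = "https://aws.github.io/eks-charts"
  chart            = "aws-for-fluent-bit"

  set {
    name  = "cloudWatchLogs.enabled"
    value = "true"
  }

  set {
    name  = "cloudWatchLogs.region"
    value = data.aws_region.current.name
  }

  set {
    name  = "cloudWatchLogs.logGroupName"
    value = "/eks/${var.cluster_name}/application"
  }
}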
When to Use
This pattern is ideal when you need:
- Container log aggregation without managing infrastructure
- Integration with existing CloudWatch dashboards
- Correlation between container and AWS service logs
- Simple log driver configuration
Configuration
# Log group for ECS service
module "ecs_logs" {
  source  = "registry.patterneddesigns.ca/essentials/cloudwatch-logs/aws"
  version = "1.3.0"

  log_group_name    = "/ecs/${var.cluster_name}/${var.service_name}"
  retention_in_days = 30

  # Emit custom metrics from matching log events
  metric_filters = [
    {
      name             = "oom-kills"
      pattern          = "OutOfMemory"
      metric_name      = "OOMKills"
      metric_namespace = "ECS/${var.service_name}"
    },
    {
      # Space-delimited filter pattern: matches events whose second field is ERROR
      name             = "container-errors"
      pattern          = "[timestamp, level=ERROR, ...]"
      metric_name      = "ContainerErrors"
      metric_namespace = "ECS/${var.service_name}"
    }
  ]
}
# ECS Task Definition log configuration
resource "aws_ecs_task_definition" "app" {
  # ... other config

  container_definitions = jsonencode([
    {
      name = "app"
      # ... other config

      logConfiguration = {
        logDriver = "awslogs"
        options = {
          # Log streams are named <prefix>/<container-name>/<task-id>
          "awslogs-group"         = module.ecs_logs.log_group_name
          "awslogs-region"        = data.aws_region.current.name
          "awslogs-stream-prefix" = "ecs"
        }
      }
    }
  ])
}
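Because the metric filters above publish OOMKills and ContainerErrors as custom metrics, you can alarm on them directly. A minimal sketch follows, assuming an existing aws_sns_topic.alerts as the notification target; wire it to whatever alerting your environment already uses.

# Hedged sketch: alarm on the OOMKills metric emitted by the filter above.
resource "aws_cloudwatch_metric_alarm" "oom_kills" {
  alarm_name          = "${var.service_name}-oom-kills"
  alarm_description   = "Container killed by the OOM killer"
  namespace           = "ECS/${var.service_name}"
  metric_name         = "OOMKills"
  statistic           = "Sum"
  period              = 300
  evaluation_periods  = 1
  threshold           = 0
  comparison_operator = "GreaterThanThreshold"
  treat_missing_data  = "notBreaching"

  # Assumed SNS topic -- replace with your notification target
  alarm_actions = [aws_sns_topic.alerts.arn]
}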
Considerations
- Use awslogs-stream-prefix so stream names include the container name and task ID for easy identification
- Account for the awslogs driver's in-memory buffer: blocking mode can stall container stdout writes when CloudWatch is unreachable, while non-blocking mode with an explicit max-buffer-size drops logs once the buffer fills
- Consider FireLens for multi-destination log routing (see the sketch after this list)
- Set up metric filters for container-specific issues
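For the FireLens option above, a minimal sketch: a Fluent Bit sidecar (log-router) receives the app container's output via the awsfirelens driver and forwards it with the cloudwatch_logs output plugin. The sidecar image tag and output options shown are illustrative assumptions, not part of this pattern's module.

# Hedged sketch: FireLens routing with a Fluent Bit sidecar.
resource "aws_ecs_task_definition" "app_firelens" {
  # ... other config

  container_definitions = jsonencode([
    {
      # Sidecar that runs Fluent Bit and receives logs from the app container
      name      = "log-router"
      image     = "public.ecr.aws/aws-observability/aws-for-fluent-bit:stable"
      essential = true

      firelensConfiguration = {
        type = "fluentbit"
      }
    },
    {
      name = "app"
      # ... other config

      logConfiguration = {
        logDriver = "awsfirelens"
        options = {
          # Fluent Bit cloudwatch_logs output plugin parameters
          "Name"              = "cloudwatch_logs"
          "region"            = data.aws_region.current.name
          "log_group_name"    = module.ecs_logs.log_group_name
          "log_stream_prefix" = "firelens-"
        }
      }
    }
  ])
}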