AWS Certified Solutions Architect - Associate / Question #1040

Question #1040

A media company is developing a global real-time voting system for a popular TV show. The system must handle millions of votes per second with millisecond latency during peak voting windows and ensure high availability. Which architecture will meet these requirements with the LEAST operational overhead?

A

Deploy the voting application on Amazon EC2 instances in Auto Scaling groups across multiple Availability Zones. Use an Application Load Balancer (ALB) to distribute traffic. Store vote counts in Amazon RDS for PostgreSQL with read replicas for scalability.

B

Host the voting frontend in an Amazon S3 bucket with Amazon CloudFront for global caching. Use EC2 instances behind an ALB to process votes via REST APIs. Store vote data in Amazon Aurora with auto-scaling enabled.

C

Containerize the voting application and deploy it on Amazon EKS. Use Kubernetes Horizontal Pod Autoscaler to manage traffic spikes. Store vote results in Amazon ElastiCache for Redis and persist final counts in Amazon RDS for MySQL.

D

Host the static voting interface in Amazon S3 with Amazon CloudFront for low-latency delivery. Use Amazon API Gateway and AWS Lambda to process votes via serverless APIs. Store vote data in Amazon DynamoDB with on-demand capacity.

Explanation

Option D is correct because:
- Serverless Architecture: API Gateway and Lambda scale automatically with demand (subject to adjustable service quotas) and require no server management, minimizing operational overhead.
- Low Latency: CloudFront delivers the static frontend globally with low latency, while DynamoDB provides single-digit millisecond latency for vote storage.
- High Scalability: DynamoDB's on-demand mode scales seamlessly to handle unpredictable traffic spikes.
- High Availability: All services are fully managed and inherently highly available.
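To make the serverless flow concrete, here is a minimal sketch of a Lambda handler behind API Gateway that records a vote in DynamoDB. The table name "Votes", its key schema, and the per-candidate counter attribute are illustrative assumptions, not part of the question; the real client would be `boto3.client("dynamodb")`, injected here so the logic can be exercised without AWS access.

```python
import json
import os

def build_update(show_id: str, candidate: str) -> dict:
    """Build a DynamoDB UpdateItem request that atomically increments
    one candidate's counter (ADD on a number attribute is atomic, so
    concurrent Lambda invocations never lose votes)."""
    return {
        "TableName": os.environ.get("VOTES_TABLE", "Votes"),  # assumed table
        "Key": {"show_id": {"S": show_id}},                   # assumed partition key
        "UpdateExpression": "ADD #c :one",
        "ExpressionAttributeNames": {"#c": f"votes_{candidate}"},
        "ExpressionAttributeValues": {":one": {"N": "1"}},
    }

def handler(event, context, dynamodb=None):
    """API Gateway proxy handler: validate the vote, then apply it.
    `dynamodb` is injected for testing; in Lambda it would default to
    boto3.client("dynamodb")."""
    body = json.loads(event.get("body") or "{}")
    show_id, candidate = body.get("show_id"), body.get("candidate")
    if not show_id or not candidate:
        return {"statusCode": 400,
                "body": json.dumps({"error": "show_id and candidate required"})}
    request = build_update(show_id, candidate)
    if dynamodb is not None:
        dynamodb.update_item(**request)
    return {"statusCode": 202, "body": json.dumps({"accepted": True})}
```

Because DynamoDB's `ADD` update is an atomic counter increment, no read-modify-write is needed per vote, which is what keeps write latency in single-digit milliseconds even under heavy fan-in from many concurrent Lambda invocations.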

Other options are incorrect because:
- A & B: EC2 with Auto Scaling and RDS/Aurora still require capacity planning, patching, and instance management, so operational overhead is higher. A relational database is also likely to become a write bottleneck at millions of votes per second, even with read replicas (which scale reads, not writes).
- C: EKS introduces Kubernetes management complexity, and RDS adds latency for persisting data.

Key Points: Use serverless (Lambda, API Gateway) and NoSQL (DynamoDB) for high-scale, low-latency systems. Avoid managing servers (EC2/EKS) for minimal operational overhead.

Answer

The correct answer is: D