Question #962
A company operates a mobile app for conducting short-term surveys on AWS. Each survey runs for a variable duration, and once completed, the company selects a random participant as the winner. The company does not need to retain any survey data after the survey ends.
The current architecture uses custom code hosted on Amazon EC2 instances behind an Application Load Balancer, with survey responses stored in Amazon RDS for MySQL instances. The company wants to redesign the architecture to minimize costs.
Which solution meets these requirements MOST cost-effectively?
A. Migrate data storage to Amazon DynamoDB and set up a DAX cluster. Rewrite the code to run on Amazon ECS with Fargate. Delete the DynamoDB table after the survey ends.
B. Migrate data to Amazon Redshift. Convert the code to AWS Lambda functions. Delete the Redshift cluster post-survey.
C. Implement Amazon ElastiCache for Memcached in front of RDS. Use ECS Fargate for the code. Set ElastiCache TTL to expire data after the survey.
D. Move data to Amazon DynamoDB, use Lambda functions, and configure DynamoDB TTL to auto-expire entries post-survey.
Explanation
Answer D is correct because:
- DynamoDB is serverless, scales automatically, and incurs costs only for usage, avoiding EC2/RDS overhead.
- Lambda follows a pay-per-request model, ideal for variable workloads like short-term surveys.
- DynamoDB TTL auto-deletes data post-survey, eliminating manual cleanup costs.
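TTL in DynamoDB works by storing a Unix epoch timestamp in a designated item attribute; DynamoDB deletes expired items in the background at no extra cost. A minimal sketch of writing a response with an expiry attribute (the table name `SurveyResponses`, key schema, and attribute name `expires_at` are illustrative assumptions, not part of the question):

```python
import time

def ttl_epoch(survey_duration_seconds: int, now=None) -> int:
    """Return the Unix epoch (seconds) at which DynamoDB should expire an item."""
    base = time.time() if now is None else now
    return int(base + survey_duration_seconds)

def put_response(table, survey_id: str, participant_id: str, answer: str, duration_s: int):
    """Write one survey response with a TTL attribute.

    Once TTL is enabled on the table, DynamoDB deletes the item automatically
    after expires_at passes (deletion is background and not instantaneous).
    Assumes survey_id / participant_id as the key schema for illustration.
    """
    table.put_item(Item={
        "survey_id": survey_id,
        "participant_id": participant_id,
        "answer": answer,
        "expires_at": ttl_epoch(duration_s),  # TTL attribute (epoch seconds)
    })

# Enabling TTL is a one-time table setting, e.g. with boto3:
# boto3.client("dynamodb").update_time_to_live(
#     TableName="SurveyResponses",
#     TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
# )
```

Note that TTL deletion is best-effort: expired items are typically removed within a few days, which is acceptable here since the data simply does not need retention.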
Other options are less cost-effective:
- A: DAX adds unnecessary caching costs, and deleting the table manually after each survey is less efficient than TTL-based expiry.
- B: Redshift is a data warehouse and is overkill for simple survey data; deleting the cluster after every survey adds management overhead.
- C: RDS storage and instance costs remain; ElastiCache TTL only expires cached entries, not the underlying data in the database.
Key Points:
1. Use serverless services (Lambda, DynamoDB) for variable workloads.
2. Automate data cleanup via TTL to reduce operational effort.
3. Avoid over-provisioned services (Redshift, DAX) for simple use cases.
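The scenario also requires selecting a random winner before the data expires. Under the chosen design, a Lambda function could read the participant IDs for a survey (e.g. via a DynamoDB query on the assumed `survey_id` key) and pick one uniformly at random. The selection logic itself is a one-liner:

```python
import random

def pick_winner(participant_ids, rng=random):
    """Pick one winner uniformly at random from the collected participant IDs.

    participant_ids would come from querying the survey's items in DynamoDB
    before their TTL expires; raises ValueError if no one participated.
    """
    ids = list(participant_ids)
    if not ids:
        raise ValueError("no participants")
    return rng.choice(ids)
```

Passing an explicit `random.Random` instance keeps the draw reproducible for testing; in production the default module-level generator is fine for a non-security-sensitive drawing.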
Answer
The correct answer is: D