Question #825
A developer has an AWS Lambda function that is triggered by messages from an Amazon SQS standard queue. The developer observes that certain messages are being processed multiple times, leading to data inconsistencies. What should the developer do to MOST cost-effectively prevent duplicate message processing?
A. Replace the Amazon SQS standard queue with an Amazon SQS FIFO queue utilizing message deduplication.
B. Implement a dead-letter queue to capture reprocessed messages.
C. Configure the Lambda function's concurrency limit to 1 to process messages sequentially.
D. Migrate the messaging system from Amazon SQS to Amazon Kinesis Data Firehose.
Explanation
Answer A is correct because:
- SQS Standard Queues provide at-least-once delivery: a message can be delivered more than once, for example when a consumer does not delete it before its visibility timeout expires.
- SQS FIFO Queues use an explicit message deduplication ID or content-based deduplication (an SHA-256 hash of the message body) to drop duplicates that arrive within the 5-minute deduplication interval, giving exactly-once processing semantics.
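The deduplication behavior above can be sketched locally. This is a toy model, not the SQS implementation: the `FifoDedupSketch` class and its window handling are illustrative, but the two mechanics it encodes are real — content-based deduplication hashes the message body with SHA-256, and duplicates arriving within the 5-minute interval are silently dropped.

```python
import hashlib
import time

DEDUP_WINDOW_SECONDS = 5 * 60  # SQS FIFO deduplication interval is 5 minutes


class FifoDedupSketch:
    """Toy model of content-based deduplication on a FIFO queue."""

    def __init__(self):
        self._seen = {}      # dedup_id -> timestamp when first accepted
        self.accepted = []   # messages that made it into the queue

    def send(self, body, now=None):
        now = time.time() if now is None else now
        # Content-based deduplication: SHA-256 hash of the message body
        dedup_id = hashlib.sha256(body.encode()).hexdigest()
        first_seen = self._seen.get(dedup_id)
        if first_seen is not None and now - first_seen < DEDUP_WINDOW_SECONDS:
            return False     # duplicate within the window: dropped
        self._seen[dedup_id] = now
        self.accepted.append(body)
        return True


q = FifoDedupSketch()
q.send("order-42", now=0)    # accepted
q.send("order-42", now=10)   # duplicate within 5 minutes: dropped
q.send("order-42", now=301)  # outside the window: accepted again
```

In the real API, the same effect comes from enabling `ContentBasedDeduplication` on the queue or passing `MessageDeduplicationId` to `SendMessage`.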
Why other options are incorrect:
- B (Dead-letter queue): A DLQ captures messages that repeatedly fail processing so they can be inspected later; it does nothing to prevent duplicate deliveries.
- C (Concurrency limit=1): Sequential processing does not prevent duplicates; if the function fails mid-process, SQS redelivers the message after the visibility timeout expires. It also throttles throughput and reduces scalability.
- D (Kinesis Data Firehose): Firehose is a managed delivery service for loading streaming data into destinations such as Amazon S3 or Amazon Redshift, not a queue-based messaging service. Migrating adds complexity and cost without addressing the root cause.
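The failure mode behind option C can be made concrete with a toy simulation (the queue and handler here are illustrative, not Lambda/SQS code): even with a single sequential worker, a crash after the side effect but before the message is deleted leaves the message in the queue, so SQS redelivers it and the side effect runs twice.

```python
# Toy model: at-least-once redelivery when a single sequential worker
# crashes after applying its side effect but before deleting the message.
processed = []       # side effects that actually happened (e.g. DB writes)
queue = ["msg-1"]    # one message, one worker (concurrency = 1)


def handle(msg, crash_before_delete):
    processed.append(msg)  # side effect happens first
    if crash_before_delete:
        raise RuntimeError("worker crashed before DeleteMessage")
    queue.remove(msg)      # successful processing deletes the message


# First attempt: crash after the side effect -> message stays in the queue
try:
    handle(queue[0], crash_before_delete=True)
except RuntimeError:
    pass  # visibility timeout expires; SQS redelivers the same message

# Second attempt succeeds -> the side effect has now run twice
handle(queue[0], crash_before_delete=False)
print(processed)  # ['msg-1', 'msg-1'] — duplicate despite sequential processing
```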
Key Takeaway: FIFO queues are purpose-built for deduplication and ordered processing, making them the most cost-effective of the listed options: they fix the duplicate problem directly, without a re-architecture or custom consumer-side deduplication logic.
Answer
The correct answer is: A