AWS Certified Developer – Associate / Question #983 of 557

Question #983

A developer is using AWS Lambda to process messages from an Amazon SQS standard queue. The Lambda function transforms each message and stores the results in an Amazon DynamoDB table. The SQS queue is expected to handle up to 2,000 messages per second. During testing, the developer notices that some entries in DynamoDB are duplicated. Investigation reveals that the same message was processed multiple times by Lambda, with duplicates appearing within 30 seconds of each other. How should the developer resolve this issue?

A

Create an SQS FIFO queue. Enable message deduplication on the SQS FIFO queue.

B

Reduce the maximum Lambda concurrency that the SQS queue can invoke.

C

Use Lambda's temporary storage to keep track of processed message identifiers.

D

Configure a message group ID for every sent message. Enable message deduplication on the SQS standard queue.

Explanation

The duplication occurs because SQS standard queues provide at-least-once delivery: the same message can be delivered, and therefore processed, more than once. Option A resolves this by switching to an SQS FIFO queue, which provides exactly-once processing within its 5-minute deduplication interval, so duplicates arriving within 30 seconds of each other are discarded. FIFO queues detect duplicates using an explicit message deduplication ID or, when content-based deduplication is enabled, a SHA-256 hash of the message body. Note that a FIFO queue needs batching or high throughput mode to sustain the required 2,000 messages per second.
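As a sketch of the FIFO approach in option A (the queue URL, the `order_id` payload field, and the helper names are assumed for illustration; a boto3 SQS client is passed in by the caller):

```python
import hashlib
import json


def content_dedup_id(body: str) -> str:
    # Mirrors FIFO content-based deduplication, which uses a SHA-256
    # hash of the message body to detect duplicates.
    return hashlib.sha256(body.encode("utf-8")).hexdigest()


def send_event(sqs_client, queue_url: str, payload: dict) -> None:
    # sqs_client is a boto3 SQS client, e.g. boto3.client("sqs");
    # queue_url must point at a FIFO queue (name ends in ".fifo").
    body = json.dumps(payload, sort_keys=True)
    sqs_client.send_message(
        QueueUrl=queue_url,
        MessageBody=body,
        MessageGroupId=str(payload["order_id"]),  # required on FIFO queues
        MessageDeduplicationId=content_dedup_id(body),
    )
```

Two sends of the same body within the 5-minute deduplication interval carry the same deduplication ID, so SQS accepts the message once and silently drops the duplicate.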

Other options are incorrect:
- B: Reducing Lambda concurrency does not address at-least-once delivery; duplicates can occur even at low concurrency.
- C: Lambda's temporary storage (/tmp) is ephemeral and not shared across execution environments, making it unreliable for tracking processed message IDs.
- D: Standard queues support neither message deduplication nor message group IDs; both are FIFO-only features.
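The point behind option C being wrong is that deduplication state must be durable and shared. One common way to get that without changing the queue is to make the DynamoDB write itself idempotent with a conditional put; this is not the exam's answer, only a hedged sketch assuming the table is keyed on a hypothetical `message_id` attribute:

```python
def idempotent_put_kwargs(message_id: str, result: dict) -> dict:
    # message_id and result are hypothetical names for illustration.
    # attribute_not_exists makes the write idempotent: a second delivery
    # of the same message fails the condition instead of creating a
    # duplicate item.
    return {
        "Item": {"message_id": message_id, **result},
        "ConditionExpression": "attribute_not_exists(message_id)",
    }

# Inside the Lambda handler (table is a boto3 DynamoDB Table resource):
#   try:
#       table.put_item(**idempotent_put_kwargs(msg_id, transformed))
#   except table.meta.client.exceptions.ConditionalCheckFailedException:
#       pass  # duplicate delivery; result already stored
```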

Key Takeaway: Use SQS FIFO queues with deduplication when message duplication is unacceptable. Standard queues are unsuitable for scenarios requiring exactly-once processing.

Answer

The correct answer is: A