Question #764
A developer is designing a serverless application that interacts with an Amazon RDS MySQL database. The application uses numerous AWS Lambda functions that scale out frequently, and each new execution environment establishes its own database connection, straining database resources. The developer needs to minimize the number of database connections while ensuring the Lambda functions can scale seamlessly. Which solution meets these requirements?
A. Set up provisioned concurrency for each Lambda function with ProvisionedConcurrentExecutions set to 20.
B. Enable query caching in RDS MySQL and update Lambda connection strings to use the cache endpoint.
C. Utilize Amazon RDS Proxy to manage database connections through pooling, adjusting Lambda connection strings to the proxy.
D. Apply reserved concurrency for each Lambda function, setting ReservedConcurrentExecutions to 20.
Explanation
The correct answer is C. Amazon RDS Proxy maintains a pool of established database connections that Lambda invocations share and reuse, instead of each invocation opening its own. This caps the number of active connections to the RDS instance and relieves the resource strain. Because the proxy handles connection management, Lambda remains free to scale out; the only application change is pointing the connection string at the proxy endpoint.
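In practice, "adjusting Lambda connection strings to the proxy" often means updating the database host the function reads from its environment. A sketch of that change with the AWS CLI follows; the function name, environment variable name, and proxy endpoint are hypothetical placeholders, not values from the question:

```shell
# Point the function's DB host at the RDS Proxy endpoint instead of the
# RDS instance endpoint (names below are illustrative placeholders).
aws lambda update-function-configuration \
  --function-name orders-fn \
  --environment 'Variables={DB_HOST=my-proxy.proxy-abc123xyz.us-east-1.rds.amazonaws.com}'
```

The function's database driver then connects to the proxy exactly as it would to the database itself; no driver or query changes are required.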
Why other options are incorrect:
- A: Provisioned concurrency reduces cold starts but does not address connection pooling. Each concurrent execution would still create a new database connection.
- B: Query caching reduces query load but does not solve the connection exhaustion issue, as Lambda instances would still open separate connections.
- D: Reserved concurrency caps how far a function can scale, which conflicts with the requirement for seamless scaling; it limits connections only by limiting throughput.
Key Points:
- RDS Proxy decouples connection management from Lambda scaling.
- Connection pooling reduces database load by reusing connections.
- Lambda can scale without being throttled by database connection limits.
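The pooling idea behind these points can be illustrated with a toy pool. This is a minimal Python sketch of connection multiplexing, not RDS Proxy's actual implementation; the class, the cap of 5, and the simulated 100 invocations are all illustrative assumptions:

```python
import queue
import threading


class ConnectionPool:
    """Toy pool showing how a proxy multiplexes many clients over
    few database connections (illustrative sketch only)."""

    def __init__(self, connect, max_size):
        self._connect = connect        # factory that opens a real connection
        self._pool = queue.Queue()     # idle connections ready for reuse
        self._lock = threading.Lock()
        self._created = 0
        self._max_size = max_size

    def acquire(self):
        try:
            return self._pool.get_nowait()   # reuse an idle connection
        except queue.Empty:
            with self._lock:
                if self._created < self._max_size:
                    self._created += 1
                    return self._connect()   # open a new one, under the cap
            return self._pool.get()          # cap reached: wait for a release

    def release(self, conn):
        self._pool.put(conn)


# Simulate Lambda invocations sharing a pool capped at 5 connections.
opened = []
pool = ConnectionPool(lambda: opened.append(object()) or opened[-1], max_size=5)

held = [pool.acquire() for _ in range(3)]   # three concurrent borrowers
for c in held:
    pool.release(c)

for _ in range(100):                        # 100 sequential "invocations"
    conn = pool.acquire()
    pool.release(conn)

print(len(opened))  # 3: connections are reused, not re-created per invocation
```

Without the pool, the 100 simulated invocations would each have opened a connection; with it, the three established connections are recycled, which is the effect RDS Proxy provides for Lambda at scale.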
Answer
The correct answer is: C