- Increase the configured CPU value
- Increase the configured timeout value
- Increase the configured memory value
- Increase the configured concurrency value
- blocks
- layers
- aliases
- handlers
- in sequence
- both of these answers
- neither of these answers
- in parallel
- aws lambda invoke --function ReturnBucketName outputfile.txt
- aws lambda execute --function-name ReturnBucketName outputfile.txt
- aws lambda invoke --function-name ReturnBucketName outputfile.txt
- aws lambda execute --function ReturnBucketName outputfile.txt
- AWS Trace
- CloudStack
- CloudTrail
- AWS X-Ray
Q6. You need to build a continuous integration/deployment pipeline for a set of Lambdas. What should you do?
- Create configuration files and deploy them using AWS CodePipeline.
- Create CloudFormation templates and deploy them using AWS CodeBuild.
- Create configuration files and deploy them using AWS CodeBuild.
- Create CloudFormation templates and deploy them using AWS CodePipeline.
- API Gateway
- S3
- SAS
- CloudTrail
- Use S3 metrics and CloudWatch alarms
- Create custom metrics within your Lambda code.
- Create custom metrics within your CloudWatch code.
- Use Lambda metrics and CloudWatch alarms.
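As context for the custom-metric option above: one way to publish custom metrics from Lambda code without extra SDK calls is the CloudWatch Embedded Metric Format (EMF), where the function prints a structured JSON log line that CloudWatch Logs extracts as a metric. A minimal sketch; the namespace, dimension, and metric names here are illustrative:

```python
import json
import time

def emit_metric(name, value, namespace="MyApp", unit="Count"):
    """Print a CloudWatch Embedded Metric Format (EMF) log line.

    When a Lambda writes this JSON to stdout, CloudWatch Logs can
    extract it as a custom metric -- no PutMetricData call needed.
    """
    record = {
        "_aws": {
            "Timestamp": int(time.time() * 1000),
            "CloudWatchMetrics": [{
                "Namespace": namespace,
                "Dimensions": [["FunctionName"]],
                "Metrics": [{"Name": name, "Unit": unit}],
            }],
        },
        "FunctionName": "demo-function",  # hypothetical dimension value
        name: value,                      # the metric value itself
    }
    print(json.dumps(record))
    return record

record = emit_metric("ProcessedItems", 42)
```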
- an SSL certificate
- a bitmask
- an AWS KMS key
- an HTTP protocol
- binaries
- all of these answers
- executables
- shell scripts
- MVC
- virtual
- stateless
- protocol
- by uploading a .zip file
- all of these answers
- by editing inline
- from an S3 bucket
Q13. You are performance-testing your Lambda to verify that you set the memory size adequately. Where do you verify the execution overhead?
- CloudWatch logs
- DynamoDB logs
- S3 logs
- Lambda logs
- CodeStack
- ElasticStack
- Mobile Hub
- CodeDeploy
- proportionally
- equally
- periodically
- daily
Q16. You can restrict the scope of a user's permissions by specifying which two items in an IAM policy?
- resources and users
- resources and conditions
- events and users
- events and conditions
- logging streams
- rotating streams
- logging events
- advancing log groups
- create a Lambda
- be an event source
- assign an IAM role
- delete a Lambda
- Create a Lambda function with a custom runtime and reference the function in your Lambda
- Create a Lambda layer with a custom runtime and reference the layer in your Lambda
- You cannot use Lambda in this situation
- Create a Lambda function with a custom runtime
- the execution policy
- the Lambda configuration
- the Lambda nodes
- the IAM user
- department:Sales,department:Sales
- department:Sales,department:sales
- aws:demo;aws:demo
- aws:demo;aws:DEMO
- neither of these answers
- UDP/IP
- TCP/IP
- both of these answers
- automatically
- none of these answers
- manually
- ad hoc
Q24. You are testing your stream-based application and the associated Lambda. AWS best practice advises you to test by varying what?
- stream and record sizes
- stream and shard sizes
- batch and record sizes
- batch and shard sizes
- Place each subnet in a VPC. Associate all subnets to your Lambda.
- Place all subnets in a VPC. Associate all subnets to your Lambda.
- Configure your Lambda to be available to multiple VPCs.
- Configure all application VPCs to be peered.
- number of function calls
- amount of code run
- compute time
- amount of infrastructure used
- Author a Lambda from scratch.
- Use a blueprint.
- Use a .zip deployment package.
- Use the serverless app repository.
- /tmp
- /default
- /temp
- /ds
- Delete the function.
- Set the function concurrent execution limit to 0 while you update the code.
- Reset the function.
- Set the function concurrent execution limit to 100 while you update the code.
- Overprovision memory to run your functions faster and reduce your costs. Do not overprovision your function timeout settings.
- Overprovision memory and your function timeout settings to run your functions faster and reduce your costs.
- Do not overprovision memory. Overprovision your function timeout settings to run your functions faster and reduce costs.
- Do not overprovision memory. Do not overprovision your function timeout settings to run your functions faster and reduce costs.
- removing log groups
- none of these answers
- creating log groups
- updating log groups
- DynamoDB tables
- key-value pairs
- S3 buckets
- none of these answers
Q33. You need to use a Lambda to provide backend logic to your website. Which service do you use to make your Lambda available to your website?
- S3
- API Gateway
- X-Ray
- DynamoDB
Q34. You are creating a Lambda to trigger on change to files in an S3 bucket. Where should you put the bucket name?
- in the Lambda function code
- in a Lambda environment variable
- in the Lambda tags
- in another S3 bucket
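For the environment-variable option above, a minimal handler sketch; the variable name `BUCKET_NAME` and its value are assumptions for illustration (in a real deployment you would set it in the Lambda console or your IaC template):

```python
import os

# Simulate configuration for a local run; in Lambda this is set
# on the function itself, so the code stays unchanged across
# dev/test/prod deployments.
os.environ.setdefault("BUCKET_NAME", "example-bucket")

def lambda_handler(event, context):
    # Read the bucket name from configuration instead of
    # hard-coding it in the function body.
    bucket = os.environ["BUCKET_NAME"]
    return {"bucket": bucket}

out = lambda_handler({}, None)
print(out)
```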
- Deploy the Lambda
- Export the function
- none of these answers
- Configure a test event
- Fleece
- NPM
- none of these answers
- Pod
- CloudTrail
- CloudWatch
- CloudFormation
- LogWatch
- a table definition
- queue isolation
- STS Write
- an SNS topic
Q39. You need to set an S3 event trigger on your Lambda to respond when data is added to your bucket from another S3 bucket. Which event type do you configure?
- POST
- "All object create events"
- PUT
- COPY
- Lambda configuration from logging code
- Lambda handler from logging code
- Lambda handler from core logic
- Lambda configuration from core logic
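Separating the Lambda handler from core logic is an AWS-documented best practice: the handler stays a thin adapter, and the business logic becomes plain, unit-testable code. A minimal sketch with hypothetical function and field names:

```python
def parse_order(payload):
    """Core business logic: pure Python, easily unit-tested
    without any Lambda runtime or event plumbing."""
    return {"order_id": payload["id"], "total": sum(payload["items"])}

def lambda_handler(event, context):
    """Thin entry point: unwrap the event, delegate to the
    core logic, and wrap the response."""
    result = parse_order(event)
    return {"statusCode": 200, "body": result}

response = lambda_handler({"id": "A1", "items": [2, 3]}, None)
print(response)
```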
- YAML definition
- CloudFormation stack configuration
- SAML deployment stack
- Zip file of all related files
- only at creation
- only before deployment
- never
- anytime via configuration
- SAM templates are a superset of CloudFormation templates. SAM templates include additional resource types.
- SAM templates have some overlap with CloudFormation templates. Both SAM and CloudFormation templates include resource types that are not in the other type of template.
- CloudFormation templates are a superset of SAM templates. CloudFormation templates include additional resource types.
- SAM templates are a different name for CloudFormation templates. Both template types include the same resource types.
- EdgeCloud
- CloudEdge
- CloudFront
- CloudStack
- custom
- all of these answers
- Java
- Ruby
Q46. You need to set up a mechanism to put controls in place to notify you when you have a spike in Lambda concurrency. What should you do?
- Deploy a CloudTrail alarm that notifies you when function metrics exceed your threshold. Create an AWS budget to monitor costs.
- Deploy a CloudWatch alarm that notifies you when function metrics exceed your threshold. Create an AWS budget to monitor costs.
- Deploy a CloudWatch alarm that notifies you when function metrics exceed your threshold. Create an AWS CostMonitor to monitor costs.
- Deploy a CloudTrail alarm that notifies you when function metrics exceed your threshold. Create an AWS CostMonitor to monitor costs.
- Add extra code to check if the transient cache, or the /tmp directory, has the data that you stored.
- Add extra code to check if the permanent cache, or the /cache directory, has the data that you stored.
- Do nothing. AWS minimizes cold start time by default.
- Create a warm-up Lambda that calls your Lambda every minute.
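The /tmp transient-cache option above can be sketched as follows. The file name and payload are placeholders, and the cold start is simulated by deleting the cache first; in a real function, /tmp simply survives between warm invocations of the same execution environment:

```python
import json
import os

CACHE_PATH = "/tmp/lambda_demo_cache.json"  # /tmp persists across warm invocations

def load_reference_data():
    """Return cached data if a previous invocation already fetched it."""
    if os.path.exists(CACHE_PATH):
        with open(CACHE_PATH) as f:
            return json.load(f)
    data = {"rates": [1.0, 1.1]}  # stand-in for a slow download or DB query
    with open(CACHE_PATH, "w") as f:
        json.dump(data, f)
    return data

if os.path.exists(CACHE_PATH):
    os.remove(CACHE_PATH)          # simulate a cold start for this demo
first = load_reference_data()      # cold path: fetches and writes the cache
second = load_reference_data()     # warm path: served from /tmp
print(first, second)
```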
- at rest
- at runtime
- at deployment
- none of these answers
Q49. When you use a resource-based policy to give a service, resource, or account access to your function, how can you apply the scope of that permission?
- at the function level
- at the alias or function level
- at the version, alias, or function level
- at the version or function level
Q50. Lambda can read events from which other AWS services? (ref: https://docs.aws.amazon.com/lambda/latest/dg/lambda-services.html)
- Kinesis, S3, and SQS
- Kinesis, S3, and SNS
- Kinesis, DynamoDB, and SNS
- Kinesis, DynamoDB, and SQS
Explanation
Lambda integrates with all of the services mentioned in the question: Kinesis, S3, SNS, SQS, and DynamoDB. But as the reference shows, the invocation methods fall into two categories: event-driven invocation and Lambda polling. When you implement an event-driven architecture, you grant the event-generating service permission to invoke your function in the function's resource-based policy, then configure that service to generate events that invoke your function. When you implement a Lambda polling architecture, you grant Lambda permission to access the other service in the function's execution role; Lambda reads data from the other service, creates an event, and invokes your function. By this categorization, Kinesis, DynamoDB, and SQS all share the same invocation method: Lambda polling.
- all of these answers
- a DynamoDB trigger
- an API Gateway
- an S3 bucket event
Explanation
With DynamoDB Streams, you can trigger a Lambda function to perform additional work each time a DynamoDB table is updated. Lambda reads records from the stream and invokes your function synchronously with an event that contains stream records.
These events are considered synchronous. Simply put, when a client calls an API Gateway endpoint, it triggers your Lambda function synchronously: your function has to respond to the client directly at the end of its invocation.
You can use Lambda to process event notifications from Amazon Simple Storage Service. Amazon S3 can send an event to a Lambda function when an object is created or deleted.
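The S3-to-Lambda flow described above delivers a `Records` list in the event. A minimal handler sketch; the sample event is trimmed to only the fields the code reads:

```python
def lambda_handler(event, context):
    # An S3 notification delivers a "Records" list; each record names
    # the bucket and object key that triggered the event.
    keys = []
    for record in event.get("Records", []):
        s3 = record["s3"]
        keys.append((s3["bucket"]["name"], s3["object"]["key"]))
    return keys

# Trimmed-down sample of the event shape S3 sends (most fields omitted).
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "my-bucket"},
                "object": {"key": "photos/cat.jpg"}}}
    ]
}
result = lambda_handler(sample_event, None)
print(result)
```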
- Image processing
- web application
- both of these answers
- neither of these answers
Q53. Events are AWS resources that trigger the Lambda function. What data type is the SAM file Events property?
- Integer
- Float
- Array
- String
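For reference on the SAM `Events` property discussed above, here is a minimal, hypothetical SAM template (resource and event names are illustrative); each key under `Events` defines one trigger for the function:

```yaml
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  HelloFunction:                      # hypothetical function name
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.lambda_handler
      Runtime: python3.12
      CodeUri: src/
      Events:                         # each entry is one event source
        ApiTrigger:
          Type: Api
          Properties:
            Path: /hello
            Method: get
```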
Q54. A company is using an API built with AWS Lambda, Amazon API Gateway, and Amazon DynamoDB in production. The developer has observed high latency during peak periods. Which approach would best resolve the issue?
- Increase the Lambda function timeout
- Route traffic to API Gateway using a Route 53 alias
- Disable payload compression for the API
- Enable API Gateway stage-level caching
- defines serverless applications
- associates permissions policies
- creates Lambda functions
- packages deployment artifacts
- the event source
- the downstream resource
- the log stream
- the Lambda function
Q57. A developer has created a Lambda function to scrub real-time data of extraneous information and then send the scrubbed data to Kinesis for further processing and storage. Some of the data showing up in Kinesis seems to be inaccurate. What's the best way for the developer to debug this?
- Look directly at the Lambda Logs in CloudWatch
- Send the Lambda failures to a Dead Letter Queue
- Use AWS X-Ray to step through the function
- Use Kinesis to write their own custom logging tool
- all of these answers
- From scratch
- From the app repository
- Using a blueprint
Q59. You need to quickly understand execution times for two different Lambda functions with different invocation types: asynchronous and synchronous. What do you do?
- Enable tracing, rerun the Lambdas, and view in the Lambda console
- View the logs in CloudTrail
- View the logs in CloudWatch
- Enable tracing, rerun the Lambdas, and view in the X-Ray console
- AWS SAM
- AWS CLI
- AWS CloudFormation
- AWS SAM CLI
- Caller
- Runtime
- Request
- Account
Q62. A company will be modernizing their application, which is currently running on Amazon Elastic Compute Cloud (EC2) instances. They have experience with scaling this infrastructure using Amazon EC2 Auto Scaling. They want to move to serverless infrastructure consisting of an Amazon API Gateway that triggers Lambda functions. They are consulting you about scaling this new infrastructure. What should the company consider in order to make sure the serverless infrastructure scales to their needs?
- Enable Auto Scaling Groups for AWS Lambda to ensure that enough Lambda functions are ready to handle the incoming requests
- Throttle Lambda functions by configuring reserved concurrency, sending the excess traffic to Dead Letter Queues (DLQ) that will be handled when the request volume reduces.
- Look at service limits for Amazon API Gateway and Lambda functions used in order to identify potential bottlenecks and balance performance requirements, costs, and business impact
- Do nothing. API Gateway and AWS Lambda are managed services that have built-in horizontal scaling, security, and high availability to handle an unlimited number of requests
Explanation
In serverless architectures, it is important to understand the service limits of every service used end to end, in order to gauge the level of requests that can be handled.