Amazon AWS DevOps Engineer Professional (DOP-C01) online exam
What students need to know about the aws-devops-engineer-professional-dop-c01 exam
- Total 557 Questions & Answers
Question 1
A company is using AWS CodeDeploy to automate software deployment. The deployment must meet these requirements:
A number of instances must be available to serve traffic during the deployment. Traffic must be balanced across those instances, and the instances must automatically heal in the event of failure.
A new fleet of instances must be launched for deploying a new revision automatically, with no manual provisioning.
Traffic must be rerouted to the new environment, to half of the new instances at a time. The deployment should succeed if traffic is rerouted to at least half of the instances; otherwise, it should fail. Before routing traffic to the new fleet of instances, the temporary files generated during the deployment process must be deleted.
At the end of a successful deployment, the original instances in the deployment group must be deleted immediately to reduce costs.
How can a DevOps Engineer meet these requirements?
-
A. Use an Application Load Balancer and an in-place deployment. Associate the Auto Scaling group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.OneAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the AllowTraffic hook within appspec.yml to delete the temporary files.
-
B. Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and the Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, create a custom deployment configuration with minimum healthy hosts defined as 50%, and assign the configuration to the deployment group. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeBlockTraffic hook within appspec.yml to delete the temporary files.
-
C. Use an Application Load Balancer and a blue/green deployment. Associate the Auto Scaling group and the Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.HalfAtATime as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BeforeAllowTraffic hook within appspec.yml to delete the temporary files.
-
D. Use an Application Load Balancer and an in-place deployment. Associate the Auto Scaling group and Application Load Balancer target group with the deployment group. Use the Automatically copy Auto Scaling group option, and use CodeDeployDefault.AllAtOnce as the deployment configuration. Instruct AWS CodeDeploy to terminate the original instances in the deployment group, and use the BlockTraffic hook within appspec.yml to delete the temporary files.
Answer:
C
Explanation:
Reference:
https://docs.aws.amazon.com/codedeploy/latest/APIReference/API_BlueGreenDeploymentConfiguration.html
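For reference, a minimal boto3 sketch of the deployment group in answer C: a blue/green deployment that copies the Auto Scaling group, deploys with CodeDeployDefault.HalfAtATime, and terminates the original instances immediately after success. The application, deployment group, role, Auto Scaling group, and target group names below are placeholders; the temporary-file cleanup itself would be a script referenced under the BeforeAllowTraffic hook in appspec.yml.

    import boto3  # AWS SDK for Python

    codedeploy = boto3.client("codedeploy")

    # All names and ARNs below are hypothetical, for illustration only.
    codedeploy.create_deployment_group(
        applicationName="my-app",
        deploymentGroupName="my-app-dg",
        serviceRoleArn="arn:aws:iam::123456789012:role/CodeDeployServiceRole",
        deploymentConfigName="CodeDeployDefault.HalfAtATime",
        autoScalingGroups=["my-app-asg"],
        loadBalancerInfo={"targetGroupInfoList": [{"name": "my-app-tg"}]},
        deploymentStyle={
            "deploymentType": "BLUE_GREEN",
            "deploymentOption": "WITH_TRAFFIC_CONTROL",
        },
        blueGreenDeploymentConfiguration={
            # Copy the Auto Scaling group to provision the replacement (green) fleet.
            "greenFleetProvisioningOption": {"action": "COPY_AUTO_SCALING_GROUP"},
            # Terminate the original (blue) instances immediately after success.
            "terminateBlueInstancesOnDeploymentSuccess": {
                "action": "TERMINATE",
                "terminationWaitTimeInMinutes": 0,
            },
            "deploymentReadyOption": {"actionOnTimeout": "CONTINUE_DEPLOYMENT"},
        },
    )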
Question 2
A company is building a solution for storing files containing Personally Identifiable Information (PII) on AWS.
Requirements state:
All data must be encrypted at rest and in transit.

All data must be replicated in at least two locations that are at least 500 miles apart.

Which solution meets these requirements?
-
A. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3 SSE-C on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
-
B. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
-
C. Create primary and secondary Amazon S3 buckets in two separate AWS Regions that are at least 500 miles apart. Use an IAM role to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce Amazon S3-Managed Keys (SSE-S3) on all objects uploaded to the bucket. Configure cross-region replication between the two buckets.
-
D. Create primary and secondary Amazon S3 buckets in two separate Availability Zones that are at least 500 miles apart. Use a bucket policy to enforce access to the buckets only through HTTPS. Use a bucket policy to enforce AWS KMS encryption on all objects uploaded to the bucket. Configure cross-region replication between the two buckets. Create a KMS Customer Master Key (CMK) in the primary region for encrypting objects.
Answer:
B
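As a rough illustration of answer B (the bucket name is a placeholder), a bucket policy that denies non-HTTPS requests and denies uploads that do not request SSE-S3 could be attached with boto3; cross-Region replication to the secondary bucket is configured separately.

    import json
    import boto3

    s3 = boto3.client("s3")
    bucket = "pii-primary-bucket"  # placeholder bucket name

    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {   # Deny any request that is not made over HTTPS.
                "Sid": "DenyInsecureTransport",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:*",
                "Resource": [f"arn:aws:s3:::{bucket}", f"arn:aws:s3:::{bucket}/*"],
                "Condition": {"Bool": {"aws:SecureTransport": "false"}},
            },
            {   # Deny uploads that do not request SSE-S3 (AES256) encryption.
                "Sid": "DenyUnencryptedUploads",
                "Effect": "Deny",
                "Principal": "*",
                "Action": "s3:PutObject",
                "Resource": f"arn:aws:s3:::{bucket}/*",
                "Condition": {
                    "StringNotEquals": {"s3:x-amz-server-side-encryption": "AES256"}
                },
            },
        ],
    }

    s3.put_bucket_policy(Bucket=bucket, Policy=json.dumps(policy))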
Question 3
You want to build a new search tool feature for your monitoring system that will allow your information security team to
quickly audit all API calls in your AWS accounts. What combination of AWS services can you use to develop and automate
the backend processes supporting this tool? (Choose three.)
-
A. Create an Amazon CloudSearch domain for API call logs. Configure the search domain so that it can be used to index API call logs for the search tool.
-
B. Use AWS CloudTrail to store API call logs in an Amazon S3 bucket. Configure an Amazon Simple Notification Service topic called "log-notification" that notifies subscribers when new logs are available. Subscribe an Amazon SQS queue to the topic.
-
C. Use Amazon CloudWatch to ship AWS CloudTrail logs to your monitoring system.
-
D. Create an AWS Elastic Beanstalk application in worker role mode that uses an Amazon Simple Email Service (SES) domain to facilitate batch processing new API call log files retrieved from an Amazon S3 bucket into a search index.
-
E. Use AWS CloudTrail to store API call logs in an Amazon S3 bucket. Configure Amazon Simple Email Service (SES) to notify subscribers when new logs are available. Subscribe an Amazon SQS queue to the email domain.
-
F. Create Amazon CloudWatch custom metrics for the API call logs. Configure a CloudWatch search domain so that it can be used to index API call logs for the search tool.
-
G. Create an AWS Elastic Beanstalk application in worker role mode that uses an Amazon SQS queue to facilitate batch processing new API call log files retrieved from an Amazon S3 bucket into a search index.
Answer:
A, B, G
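To sketch how the worker in option G might process a notification produced by option B, the code below polls the queue, fetches the delivered CloudTrail log file, and indexes its events in the CloudSearch domain. The queue URL, search endpoint, and document fields are placeholders; in a real Elastic Beanstalk worker environment, the SQS daemon POSTs each message to the application rather than the application polling the queue.

    import gzip
    import json
    import boto3

    # Placeholder resource names; the real queue URL and search endpoint differ.
    QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/log-notification"
    SEARCH_ENDPOINT = "https://doc-apicalls-xxxxxxxx.us-east-1.cloudsearch.amazonaws.com"

    sqs = boto3.client("sqs")
    s3 = boto3.client("s3")
    search = boto3.client("cloudsearchdomain", endpoint_url=SEARCH_ENDPOINT)

    def process_one_message():
        resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1,
                                   WaitTimeSeconds=10)
        for msg in resp.get("Messages", []):
            # CloudTrail's SNS notification (relayed to SQS) names the delivered log file(s).
            note = json.loads(json.loads(msg["Body"])["Message"])
            for key in note["s3ObjectKey"]:
                obj = s3.get_object(Bucket=note["s3Bucket"], Key=key)
                # CloudTrail log files are gzipped JSON with a top-level "Records" list.
                events = json.loads(gzip.decompress(obj["Body"].read()))["Records"]
                docs = [{"type": "add", "id": e["eventID"],
                         "fields": {"event_name": e["eventName"],
                                    "event_source": e["eventSource"],
                                    "event_time": e["eventTime"]}} for e in events]
                search.upload_documents(documents=json.dumps(docs),
                                        contentType="application/json")
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])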
Question 4
What is the scope of AWS IAM?
-
A. Global
-
B. Availability Zone
-
C. Region
-
D. Placement Group
Answer:
A
Explanation:
IAM resources are all global; there is no regional constraint.
Reference: https://aws.amazon.com/iam/faqs/
Question 5
A mobile application running on eight Amazon EC2 instances is relying on a third-party API endpoint. The third-party service
has a high failure rate because of limited capacity, which is expected to be resolved in a few weeks.
In the meantime, the mobile application developers have added a retry mechanism and are logging failed API requests. A
DevOps Engineer must automate the monitoring of application logs and count the specific error messages; if there are more
than 10 errors within a 1-minute window, the system must issue an alert.
How can the requirements be met with MINIMAL management overhead?
-
A. Install the Amazon CloudWatch Logs agent on all instances to push the application logs to CloudWatch Logs. Use metric filters to count the error messages every minute, and trigger a CloudWatch alarm if the count exceeds 10 errors.
-
B. Install the Amazon CloudWatch Logs agent on all instances to push the access logs to CloudWatch Logs. Create a CloudWatch Events rule to count the error messages every minute, and trigger a CloudWatch alarm if the count exceeds 10 errors.
-
C. Install the Amazon CloudWatch Logs agent on all instances to push the application logs to CloudWatch Logs. Use a metric filter to generate a custom CloudWatch metric that records the number of failures and triggers a CloudWatch alarm if the custom metric reaches 10 errors in a 1-minute period.
-
D. Deploy a custom script on all instances to check application logs regularly in a cron job. Count the number of error messages every minute, and push a data point to a custom CloudWatch metric. Trigger a CloudWatch alarm if the custom metric reaches 10 errors in a 1-minute period.
Answer:
C
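A hedged boto3 sketch of answer C: a metric filter turns matching log lines into a custom metric, and an alarm fires when more than 10 errors occur within a 1-minute window. The log group name, error string, namespace, and SNS topic ARN are placeholders.

    import boto3

    logs = boto3.client("logs")
    cloudwatch = boto3.client("cloudwatch")

    LOG_GROUP = "/myapp/application"  # placeholder log group pushed by the CloudWatch Logs agent

    # Emit one data point per log line that matches the (placeholder) error string.
    logs.put_metric_filter(
        logGroupName=LOG_GROUP,
        filterName="third-party-api-errors",
        filterPattern='"ThirdPartyAPIError"',
        metricTransformations=[{
            "metricName": "ThirdPartyApiErrorCount",
            "metricNamespace": "MyApp",
            "metricValue": "1",
            "defaultValue": 0,
        }],
    )

    # Alarm when more than 10 errors are recorded within a 1-minute window.
    cloudwatch.put_metric_alarm(
        AlarmName="third-party-api-error-rate",
        Namespace="MyApp",
        MetricName="ThirdPartyApiErrorCount",
        Statistic="Sum",
        Period=60,
        EvaluationPeriods=1,
        Threshold=10,
        ComparisonOperator="GreaterThanThreshold",
        TreatMissingData="notBreaching",
        AlarmActions=["arn:aws:sns:us-east-1:123456789012:ops-alerts"],  # placeholder topic
    )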
Question 6
A company is using AWS CodeCommit as its source code repository. After an internal audit, the compliance team mandates
that any code change that goes into the master branch must be committed by senior developers.
Which solution will meet these requirements?
-
A. Create two repositories in CodeCommit: one for working and another for the master. Create separate IAM groups for senior developers and developers. Assign the resource-level permissions on the repositories tied to the IAM groups. After the code changes are reviewed, sync the approved files to the master CodeCommit repository.
-
B. Create a repository in CodeCommit. Create separate IAM groups for senior developers and developers. Assign code commit permissions for both groups, with code merge permissions for the senior developers group. Create a trigger to notify senior developers with a URL link to approve or deny commit requests delivered through Amazon SNS. Once a senior developer approves the code, the code gets merged to the master branch.
-
C. Create a repository in CodeCommit with a working and master branch. Create separate IAM groups for senior developers and developers. Use an IAM policy to assign each IAM group their corresponding branches. Once the code is merged to the working branch, senior developers can pull the changes from the working branch to the master branch.
-
D. Create a repository in CodeCommit. Create separate IAM groups for senior developers and developers. Use AWS Lambda triggers on the master branch and get the user name of the developer from the event object of the Lambda function. Validate the user name with the IAM group to approve or deny the commit.
Answer:
A
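For context, the resource-level permissions that answer A describes could be attached to the senior developers' IAM group roughly as follows (the repository name, account ID, and group name are placeholders); the developers group would receive an equivalent inline policy scoped to the working repository only.

    import json
    import boto3

    iam = boto3.client("iam")

    # Allow the senior-developers group to push to the "master" repository.
    # All names and ARNs below are placeholders.
    policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["codecommit:GitPull", "codecommit:GitPush"],
            "Resource": "arn:aws:codecommit:us-east-1:123456789012:master-repo",
        }],
    }

    iam.put_group_policy(
        GroupName="senior-developers",
        PolicyName="master-repo-access",
        PolicyDocument=json.dumps(policy),
    )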
Question 7
A company's web application will be migrated to AWS. The application is designed so that there is no server-side code
required. As part of the migration, the company would like to improve the security of the application by adding HTTP
response headers, following the Open Web Application Security Project (OWASP) secure headers recommendations. How
can this solution be implemented to meet the security requirements using best practices?
-
A. Use an Amazon S3 bucket configured for website hosting, then set up server access logging on the S3 bucket to track user activity. Then configure the static website hosting and execute a scheduled AWS Lambda function to verify, and if missing, add security headers to the metadata.
-
B. Use an Amazon S3 bucket configured for website hosting, then set up server access logging on the S3 bucket to track user activity. Configure the static website hosting to return the required security headers.
-
C. Use an Amazon S3 bucket configured for website hosting. Create an Amazon CloudFront distribution that refers to this S3 bucket, with the origin response event set to trigger a Lambda@Edge Node.js function to add the security headers.
-
D. Use an Amazon S3 bucket configured for website hosting. Create an Amazon CloudFront distribution that refers to this S3 bucket. Set Cache Based on Selected Request Headers to Whitelist, and add the security headers into the whitelist.
Answer:
C
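Answer C names a Node.js function, but Lambda@Edge also supports Python. A minimal origin-response handler that injects representative OWASP secure headers (the header values here are illustrative, not a complete OWASP set) could look like this:

    # Sketch of a Lambda@Edge origin-response handler that injects security headers
    # before CloudFront caches and returns the S3 object.
    def handler(event, context):
        response = event["Records"][0]["cf"]["response"]
        headers = response["headers"]

        # Representative OWASP secure-header values; tune for the real application.
        security_headers = {
            "strict-transport-security": "max-age=63072000; includeSubDomains; preload",
            "content-security-policy": "default-src 'self'",
            "x-content-type-options": "nosniff",
            "x-frame-options": "DENY",
            "referrer-policy": "same-origin",
        }
        for name, value in security_headers.items():
            # CloudFront expects each header as a list of {key, value} dicts,
            # keyed by the lowercase header name.
            headers[name] = [{"key": name.title(), "value": value}]

        return response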
Question 8
A company has an application deployed using Amazon ECS with data stored in an Amazon DynamoDB table. The company
wants the application to fail over to another Region in a disaster recovery scenario. The application must also efficiently
recover from any accidental data loss events. The RPO for the application is 1 hour and the RTO is 2 hours.
Which highly available solution should a DevOps engineer recommend?
-
A. Change the configuration of the existing DynamoDB table. Enable this as a global table and specify the second Region that will be used. Enable DynamoDB point-in-time recovery.
-
B. Enable DynamoDB Streams for the table and create an AWS Lambda function to write the stream data to an S3 bucket in the second Region. Schedule a job for every 2 hours to use AWS Data Pipeline to restore the database to the failover Region.
-
C. Export the DynamoDB table every 2 hours using AWS Data Pipeline to an Amazon S3 bucket in the second Region. Use Data Pipeline in the second Region to restore the export from S3 into the second DynamoDB table.
-
D. Use AWS DMS to replicate the data every hour. Set the original DynamoDB table as the source and the new DynamoDB table as the target.
Answer:
B
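A rough sketch of the Lambda function described in answer B, copying DynamoDB Stream records into an S3 bucket in the second Region. The bucket name, Region, and key layout are placeholders, and the table's stream must be configured to include item images.

    import json
    import boto3

    # Write DynamoDB Stream records to an S3 bucket in the failover Region.
    s3 = boto3.client("s3", region_name="us-west-2")  # placeholder Region
    BUCKET = "myapp-ddb-stream-backup"  # placeholder bucket name

    def handler(event, context):
        for record in event["Records"]:
            if record["eventName"] in ("INSERT", "MODIFY"):
                item = record["dynamodb"]["NewImage"]
            else:  # REMOVE: only the keys are available
                item = record["dynamodb"]["Keys"]
            key = f"stream/{record['eventID']}.json"
            s3.put_object(Bucket=BUCKET, Key=key,
                          Body=json.dumps(item).encode("utf-8"))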
Question 9
You are using AWS Elastic Beanstalk to deploy your application and must make data stored on an Amazon Elastic Block
Store (EBS) volume snapshot available to the Amazon Elastic Compute Cloud (EC2) instances. How can you modify your
Elastic Beanstalk environment so that the data is added to the Amazon EC2 instances every time you deploy your
application?
-
A. Add commands to a configuration file in the .ebextensions folder of your deployable archive that mount an additional Amazon EBS volume on launch. Also add a "BlockDeviceMappings" option, and specify the snapshot to use for the block device in the Auto Scaling launch configuration.
-
B. Add commands to a configuration file in the .ebextensions folder of your deployable archive that uses the create-volume Amazon EC2 API or CLI to create a new ephemeral volume based on the specified snapshot and then mounts the volume on launch.
-
C. Add commands to the Amazon EC2 user data that will be executed by eb-init, which uses the create-volume Amazon EC2 API or CLI to create a new Amazon EBS volume based on the specified snapshot, and then mounts the volume on launch.
-
D. Add commands to the Chef recipe associated with your environment, use the create-volume Amazon EC2 API or CLI to create a new Amazon EBS volume based on the specified snapshot, and then mount the volume on launch.
Answer:
A
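For reference, the BlockDeviceMappings setting that answer A adds through .ebextensions lives in the aws:autoscaling:launchconfiguration namespace. The same option expressed through the API looks roughly like this (the environment name, device, and snapshot ID are placeholders); the commands that mount the device still belong in the .ebextensions configuration file.

    import boto3

    eb = boto3.client("elasticbeanstalk")

    # Attach a volume created from the snapshot to every instance the environment launches.
    eb.update_environment(
        EnvironmentName="my-app-env",  # placeholder environment name
        OptionSettings=[{
            "Namespace": "aws:autoscaling:launchconfiguration",
            "OptionName": "BlockDeviceMappings",
            "Value": "/dev/sdj=snap-0123456789abcdef0",  # placeholder snapshot ID
        }],
    )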
Question 10
Which of these is not a reason a Multi-AZ RDS instance will fail over?
-
A. An Availability Zone outage
-
B. A manual failover of the DB instance was initiated using Reboot with failover
-
C. To autoscale to a higher instance class
-
D. The primary DB instance fails
Answer:
C
Explanation:
The primary DB instance switches over automatically to the standby replica if any of the following conditions occur: an
Availability Zone outage, the primary DB instance fails, the DB instance's server type is changed, the operating system of the
DB instance is undergoing software patching, or a manual failover of the DB instance was initiated using Reboot with failover.
Reference: http://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.MultiAZ.html