Amazon AWS Certified DevOps Engineer – Professional (DOP-C01) Advice To Share

Does anyone have suggestions for the AWS Certified DevOps Engineer – Professional (DOP-C01) exam that they would like to share? I saw someone asking this question on reddit.com. Do many people have this problem? Don't worry, let me share my suggestions for the Amazon DOP-C01 exam: first, master the basics (the material Amazon makes officially available), then practice a lot of DOP-C01 questions. The DOP-C01 dumps pdf contains questions from real exams, so you can learn efficiently!

Effective DOP-C01 dumps pdf link: https://www.pass4itsure.com/aws-devops-engineer-professional.html

Check out this free AWS Certified DevOps Engineer-Professional (DOP-C01) practice exam resource:

QUESTION 1 #

Which resource cannot be defined in an Ansible Playbook?

A. Fact Gathering State
B. Host Groups
C. Inventory File
D. Variables

Correct Answer: C

Ansible's inventory can only be specified on the command line, in the Ansible configuration file, or in environment variables.

Reference: http://docs.ansible.com/ansible/intro_inventory.html
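
To make the explanation concrete, here is a minimal playbook sketch (the file name, group, and variable are invented, not from the exam): fact gathering, the target host group, and variables all live inside the playbook, while the inventory itself is supplied from outside, for example with the -i flag.

```yaml
# site.yml - illustrative only; run with: ansible-playbook -i hosts.ini site.yml
- hosts: webservers        # host group resolved from the external inventory
  gather_facts: true       # fact gathering state, declared in the playbook
  vars:
    app_port: 8080         # playbook-level variable
  tasks:
    - name: Show the configured port
      debug:
        msg: "App listens on {{ app_port }}"
```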

QUESTION 2 #

A retail company wants to use AWS Elastic Beanstalk to host its online sales website running on Java. Since this will be the production website, the CTO has the following requirements for the deployment strategy:

1. Zero downtime. While the deployment is ongoing, the current Amazon EC2 instances in service should remain in service. No deployment or any other action should be performed on the EC2 instances because they serve production traffic.

2. A new fleet of instances should be provisioned for deploying the new application version.

3. Once the new application version is deployed successfully in the new fleet of instances, the new instances should be placed in service and the old ones should be removed.

4. The rollback should be as easy as possible. If the new fleet of instances fails to deploy the new application version, they should be terminated and the current instances should continue serving traffic as normal.

5. The resources within the environment (EC2 Auto Scaling group, Elastic Load Balancing, Elastic Beanstalk DNS CNAME) should remain the same and no DNS change should be made.

Which deployment strategy will meet the requirements?

A. Use rolling deployments with a fixed amount of one instance at a time and set the healthy threshold to OK.

B. Use rolling deployments with an additional batch with a fixed amount of one instance at a time and set the healthy threshold to OK.

C. launch a new environment and deploy the new application version there, then perform a CNAME swap between environments.

D. Use immutable environment updates to meet all the necessary requirements.

Correct Answer: D
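
For reference, the deployment policy in answer D can be turned on with an .ebextensions configuration file. This is only an illustrative sketch (the file name is arbitrary), not something taken from the exam:

```yaml
# .ebextensions/deploy.config - switch the environment to immutable deployments
option_settings:
  aws:elasticbeanstalk:command:
    DeploymentPolicy: Immutable
    HealthCheckSuccessThreshold: Ok
```

With Immutable set, Elastic Beanstalk launches a fresh fleet in a temporary Auto Scaling group, puts it in service only after the new instances pass health checks, then removes the old instances; the environment's load balancer and CNAME never change, which matches requirements 1-5.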

QUESTION 3 #

A social networking service runs a web API that allows its partners to search public posts. Post data is
stored in Amazon DynamoDB and indexed by AWS Lambda functions, with an Amazon ES domain storing the indexes and providing search functionality to the application.

The service needs to maintain full capacity during deployments and ensure that failed deployments do not cause downtime or reduce capacity or prevent subsequent deployments.

How can these requirements be met? (Choose two.)

A. Run the web application in AWS Elastic Beanstalk with the deployment policy set to All at Once. Deploy the Lambda functions, DynamoDB tables, and Amazon ES domain with an AWS CloudFormation template.

B. Deploy the web application, Lambda functions, DynamoDB tables, and Amazon ES domain in an AWS CloudFormation template. Deploy changes with an AWS CodeDeploy in-place deployment.

C. Run the web application in AWS Elastic Beanstalk with the deployment policy set to Immutable. Deploy the Lambda functions, DynamoDB tables, and Amazon ES domain with an AWS CloudFormation template.

D. Deploy the web application, Lambda functions, DynamoDB tables, and Amazon ES domain in an AWS CloudFormation template. Deploy changes with an AWS CodeDeploy blue/green deployment.
E. Run the web application in AWS Elastic Beanstalk with the deployment policy set to Rolling. Deploy the Lambda functions, DynamoDB tables, and Amazon ES domain with an AWS CloudFormation template.

Correct Answer: CD

QUESTION 4 #

A company is deploying a container-based application using AWS CodeBuild. The Security team mandates that all containers are scanned for vulnerabilities prior to deployment using a password-protected endpoint.

All sensitive information must be stored securely.
Which solution should be used to meet these requirements?

A. Encrypt the password using AWS KMS. Store the encrypted password in the buildspec.yml file as an environment variable under the variables mapping. Reference the environment variable to initiate scanning.

B. Import the password into an AWS CloudHSM key. Reference the CloudHSM key in the buildpec.yml file as an environment variable under the variables mapping. Reference the environment variable to initiate scanning.

C. Store the password in the AWS Systems Manager Parameter Store as a secure string. Add the Parameter Store key to the buildspec.yml file as an environment variable under the parameter-store mapping. Reference the environment variable to initiate scanning.

D. Use the AWS Encryption SDK to encrypt the password and embed in the buildspec.yml file as a variable under the secrets mapping. Attach a policy to CodeBuild to enable access to the required decryption key.

Correct Answer: C
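
As a reminder of what the parameter-store mapping in answer C looks like, here is a hedged buildspec.yml sketch; the parameter name and the scan command are placeholders, not part of the question:

```yaml
# buildspec.yml - the password never appears in plaintext in the file
version: 0.2
env:
  parameter-store:
    SCAN_PASSWORD: /ci/scanner/password   # SecureString in Systems Manager Parameter Store
phases:
  build:
    commands:
      - echo "Scanning container image..."
      - ./scan-image.sh --password "$SCAN_PASSWORD"   # hypothetical scan script
```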

QUESTION 5 #

A user is creating a new EBS volume from an existing snapshot. The snapshot size shows 10 GB. Can the user create a volume of 30 GB from that snapshot?

A. Provided the original volume has set the change size attribute to true
B. Yes
C. Provided the snapshot has the modified size attribute set as true
D. No

Correct Answer: B

Explanation: A user can always create a new EBS volume that is larger than the original snapshot size; a volume smaller than the snapshot cannot be created. When the new volume is created, the filesystem inside the instance will still show the original size.

The user needs to extend the filesystem with resize2fs or other OS-specific commands.

QUESTION 6 #

A company is deploying a new mobile game on AWS for its customers around the world. The Development team uses AWS Code services and must meet the following requirements:

1. Clients need to send/receive real-time playing data from the backend frequently and with minimal latency.

2. Game data must meet the data residency requirement.

Which strategy can a DevOps Engineer implement to meet their needs?

A. Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build and deployment pipeline. Successful deployment in one region invokes an AWS Lambda function to copy the build artifacts to an Amazon S3 bucket in another region. After the artifact is copied, it triggers a deployment pipeline in the new region.

B. Deploy the backend application to multiple Availability Zones in a single region. Create an Amazon CloudFront distribution to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline. The pipeline deploys the backend application to all Availability Zones.

C. Deploy the backend application to multiple regions. Use AWS Direct Connect to serve the application backend to global customers. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After successful deployment in the region, the pipeline continues to deploy the artifact to another region.

D. Deploy the backend application to multiple regions. Any update to the code repository triggers a two-stage build-and-deployment pipeline in the region. After successful deployment in the region, the pipeline invokes the pipeline in another region and passes the build artifact location. The pipeline uses the artifact location and deploys applications in the new region.

Correct Answer: A

Reference:
https://docs.aws.amazon.com/codepipeline/latest/userguide/integrations-actiontype.html#integrationsinvoke

QUESTION 7 #

What needs to be done in order to remotely access a Docker daemon running on Linux?

A. add certificate authentication to the Docker API
B. change the encryption level to TLS
C. enable the TCP socket
D. bind the Docker API to a Unix socket

Correct Answer: C

The Docker daemon can listen for Docker Remote API requests via three different types of Socket: Unix, TCP, and fd. By default, a Unix domain socket (or IPC socket) is created at /var/run/docker.sock, requiring either root permission, or docker group membership.

If you need to access the Docker daemon remotely, you need to enable the TCP Socket.
Beware that the default setup provides unencrypted and unauthenticated direct access to the Docker daemon – and should be secured either using the built-in HTTPS encrypted socket or by putting a secure web proxy in front of it.

Reference: https://docs.docker.com/engine/reference/commandline/dockerd/#daemon-socket-option

QUESTION 8 #

A company runs an application on Amazon EC2 instances behind an Application Load Balancer. The instances run in an Amazon EC2 Auto Scaling group across multiple Availability Zones in us-east-1. The application stores data in an Amazon RDS MySQL Multi-AZ DB instance.

A DevOps engineer wants to modify the current solution and create a hot standby of the environment in another region to minimize downtime if a problem occurs in us-east-1.

Which combination of steps should the DevOps engineer take to meet these requirements? (Choose three.)

A. Add a health check to the Amazon Route 53 alias record to evaluate the health of the primary region. Use AWS Lambda, configured with an Amazon CloudWatch Events trigger, to promote the Amazon RDS read replica in the disaster recovery region.

B. Create a new Application Load Balancer and Amazon EC2 Auto Scaling group in the disaster recovery region.

C. Extend the current Amazon EC2 Auto Scaling group to the subnets in the disaster recovery region.

D. Enable multi-region failover for the RDS configuration for the database instance.

E. Deploy a read replica of the RDS instance in the disaster recovery region.

F. Create an AWS Lambda function to evaluate the health of the primary region. If it fails, modify the Amazon Route 53 record to point at the disaster recovery region and promote the RDS read replica.

Correct Answer: ABE

QUESTION 9 #

Which of the following is an invalid variable name in Ansible?

A. host1st_ref
B. host-first-ref
C. Host1stRef
D. host_first_ref

Correct Answer: B

Variable names can contain letters, numbers, and underscores and should always start with a letter. Examples of invalid variable names: 'host first ref', '1st_host_ref'.

Reference: http://docs.ansible.com/ansible/playbooks_variables.html#what-makes-a-valid-variable-name
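
A quick playbook sketch (values are arbitrary) showing why B is the odd one out; the other three names are accepted, while a dashed name would be rejected by Ansible:

```yaml
# vars-demo.yml - valid names use letters, numbers, and underscores and start with a letter
- hosts: localhost
  gather_facts: false
  vars:
    host1st_ref: a        # valid
    Host1stRef: b         # valid
    host_first_ref: c     # valid
    # host-first-ref: d   # invalid - dashes are not allowed in variable names
  tasks:
    - debug:
        var: host_first_ref
```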

QUESTION 10 #

A company is hosting a web application in an AWS Region. For disaster recovery purposes, a second region is being used as a standby. Disaster recovery requirements state that session data must be replicated between regions in near real-time and 1% of requests should route to the secondary region to continuously verify system functionality.

Additionally, if there is a disruption in service in the main region, traffic should be automatically routed to the secondary region, and the secondary region must be able to scale up to handle all traffic. How should a DevOps Engineer meet these requirements?

A. In both regions, deploy the application on AWS Elastic Beanstalk and use Amazon DynamoDB global tables for session data. Use an Amazon Route 53 weighted routing policy with health checks to distribute the traffic across the regions.

B. In both regions, launch the application in Auto Scaling groups and use DynamoDB for session data. Use a Route 53 failover routing policy with health checks to distribute the traffic across the regions.

C. In both regions, deploy the application in AWS Lambda, exposed by Amazon API Gateway, and use Amazon RDS PostgreSQL with cross-region replication for session data. Deploy the web application with client-side logic to call the API Gateway directly.

D. In both regions, launch the application in Auto Scaling groups and use DynamoDB global tables for session data. Enable an Amazon CloudFront weighted distribution across regions. Point the Amazon Route 53 DNS record at the CloudFront distribution.

Correct Answer: A
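
To illustrate the weighted-routing part of answer A, here is a rough CloudFormation sketch. The hosted zone, the two health checks, and the Elastic Beanstalk CNAMEs are assumed to exist elsewhere in the template and the names are invented; the point is the weight split (for example 99/1) backed by health checks, so the secondary Region continuously receives a small share of traffic:

```yaml
Resources:
  PrimaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref HostedZone              # assumed to be defined elsewhere
      Name: app.example.com
      Type: CNAME
      TTL: "60"
      SetIdentifier: primary
      Weight: 99
      HealthCheckId: !Ref PrimaryHealthCheck     # assumed health check resource
      ResourceRecords:
        - primary-env.us-east-1.elasticbeanstalk.com
  SecondaryRecord:
    Type: AWS::Route53::RecordSet
    Properties:
      HostedZoneId: !Ref HostedZone
      Name: app.example.com
      Type: CNAME
      TTL: "60"
      SetIdentifier: secondary
      Weight: 1
      HealthCheckId: !Ref SecondaryHealthCheck   # assumed health check resource
      ResourceRecords:
        - standby-env.us-west-2.elasticbeanstalk.com
```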

QUESTION 11 #

The development team is creating a social media game that ranks users on a scoreboard. The current implementation uses an Amazon RDS for MySQL database for storing user data; however, the game cannot display scores quickly enough during performance testing.

Which service would provide the fastest retrieval times?

A. Migrate user data to Amazon DynamoDB for managing content.
B. Use AWS Batch to compute and deliver user and score content.
C. Deploy Amazon CloudFront for user and score content delivery.
D. Set up Amazon ElastiCache to deliver user and score content.

Correct Answer: D

QUESTION 12 #

Ansible supports running playbooks on the host directly or over SSH. How can Ansible be told to run its playbooks directly on the host?

A. Setting 'connection: local' in the tasks that run locally.
B. Specifying '-type local' on the command line.
C. It does not need to be specified; it is the default.
D. Setting 'connection: local' in the Playbook.

Correct Answer: D

Ansible can be told to run locally on the command line with the '-c' option, or via the 'connection: local' declaration in the playbook. The default connection method is 'remote'.

Reference: http://docs.ansible.com/ansible/intro_inventory.html#non-ssh-connection-types
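
A tiny sketch of answer D (nothing here comes from the exam itself): the connection: local declaration makes the play run directly on the control host instead of going over SSH.

```yaml
# run-local.yml - equivalent to passing -c local on the command line
- hosts: localhost
  connection: local
  gather_facts: false
  tasks:
    - name: Prove the task runs on the control machine
      command: hostname
```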

QUESTION 13 #

A company has an application deployed using Amazon ECS with data stored in an Amazon DynamoDB table. The company wants the application to failover to another Region in a disaster recovery scenario. The application must also efficiently recover from any accidental data loss events. The RPO for the application is 1 hour and the RTO is 2 hours.

Which highly available solution should a DevOps engineer recommend?

A. Change the configuration of the existing DynamoDB table. Enable this as a global table and specify the second Region that will be used. Enable DynamoDB point-in-time recovery.

B. Enable DynamoDB Streams for the table and create an AWS Lambda function to write the stream data to an S3 bucket in the second Region. Schedule a job for every 2 hours to use AWS Data Pipeline to restore the database to the failover Region.

C. Export the DynamoDB table every 2 hours using AWS Data Pipeline to an Amazon S3 bucket in the second Region. Use Data Pipeline in the second Region to restore the export from S3 into the second DynamoDB table.

D. Use AWS DMS to replicate the data every hour. Set the original DynamoDB table as the source and the new DynamoDB table as the target.

Correct Answer: B

Amazon DOP-C01 dumps pdf [google drive] download:

free DOP-C01 dumps pdf https://drive.google.com/file/d/1HR4OQX6_I7LUfvvYaqFqVxZ_uXoycuPm/view?usp=sharing

Without a doubt, it's a pleasure to share these suggestions. Passing the DOP-C01 exam takes a lot of learning and practice exams to keep you going. The DOP-C01 dumps pdf material is very solid and prepares you for most of the scenarios in the exam.

Getting the latest DOP-C01 dumps pdf https://www.pass4itsure.com/aws-devops-engineer-professional.html (Q&As: 548) is also a reminder that it's important to keep the faith.

Other Amazon exam practice tests are here: https://www.examdemosimulation.com/category/amazon-exam-practice-test/

[2021.5] New Valid Amazon DOP-C01 Practice Questions Free Share From Pass4itsure

Amazon AWS DOP-C01 is difficult. But with the Pass4itsure DOP-C01 dumps https://www.pass4itsure.com/aws-devops-engineer-professional.html preparation material, candidates can pass it easily. With the DOP-C01 practice tests, you can practice on questions just like those in the actual exam. If you master the tricks you gain through practice, it will be easier to achieve your target score.

Amazon AWS DOP-C01 pdf free https://drive.google.com/file/d/1RovXbw8hcBZyaxeONfBPpYvhw7pNSir0/view?usp=sharing

Latest Amazon DOP-C01 dumps practice test video tutorial

Latest Amazon AWS DOP-C01 practice exam questions at here:

QUESTION 1
A DevOps engineer is building a centralized CI/CD pipeline using AWS CodeBuild, AWS CodeDeploy, and Amazon S3.
The engineer is required to have the least privilege access and individual encryption at rest for all artifacts in Amazon S3.
The engineer must be able to prune old artifacts without the ability to download or read them.
The engineer has already completed the following steps:
1.
Created a unique AWS KMS CMK and S3 bucket for each project's builds.
2.
Updated the S3 bucket policy to only allow uploads that use the associated KMS encryption.
Which final step should be taken to meet these requirements?
A. Update the attached IAM policies to allow access to the appropriate KMS key from the CodeDeploy role where the
application will be deployed.
B. Update the attached IAM policies to allow access to the appropriate KMS key from the EC2 instance roles where the
application will be deployed.
C. Update the CMK key policy to allow access to the appropriate KMS key from the CodeDeploy role where the
application will be deployed.
D. Update the CMK key policy to allow to the appropriate KMS key from the EC2 instance roles where the application
will be deployed.
Correct Answer: A

QUESTION 2
Your system uses a multi-master, multi-region DynamoDB configuration spanning two regions to achieve high
availability. For the first time since launching your system, one of the AWS Regions in which you operate went down
for 3 hours, and the failover worked correctly. However, after recovery, your users are experiencing strange bugs,
in which users on different sides of the globe see different data. What is a likely design issue that was not accounted for
when launching?
A. The system does not have Lambda Functor Repair Automatons, to perform table scans and check for corrupted
partition blocks inside the Table in the recovered Region.
B. The system did not implement DynamoDB Table Defragmentation for restoring partition performance in the Region
that experienced an outage, so data is served stale.
C. The system did not include repair logic and request replay buffering logic for post-failure, to resynchronize data to the
Region that was unavailable for a number of hours.
D. The system did not use DynamoDB Consistent Read requests, so the requests in different areas are not utilizing
consensus across Regions at runtime.
Correct Answer: C
Explanation: When using multi-region DynamoDB systems, it is of paramount importance to make sure that all requests
made to one Region are replicated to the other. Under normal operation, the system in question would correctly perform
write replays into the other Region. If a whole Region went down, the system would be unable to perform these writes
for the period of downtime. Without buffering write requests somehow, there would be no way for the system to replay
dropped cross-region writes, and the requests would be serviced differently depending on the Region from which they
were served after recovery.
Reference: http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html

QUESTION 3
A DevOps Engineer is using AWS CodeDeploy across a fleet of Amazon EC2 instances in an EC2 Auto Scaling group.
The associated CodeDeploy deployment group, which is integrated with EC2 Auto Scaling, is configured to perform in-place deployments with CodeDeployDefault.OneAtATime. During an ongoing new deployment, the Engineer discovers
that, although the overall deployment finished successfully, two out of five instances have the previous application
revision deployed. The other three instances have the newest application revision. What is likely causing this issue?
A. The two affected instances failed to fetch the new deployment.
B. A failed AfterInstall lifecycle event hook caused the CodeDeploy agent to roll back to the previous version on the
affected instances.
C. The CodeDeploy agent was not installed in two affected instances.
D. EC2 Auto Scaling launched two new instances while the new deployment had not yet finished, causing the previous
version to be deployed on the affected instances.
Correct Answer: D


QUESTION 4
Your company needs to automate 3 layers of a large cloud deployment. You want to be able to track this
deployment's evolution as it changes over time, and carefully control any alterations.
What is a good way to automate a stack to meet these requirements?
A. Use OpsWorks Stacks with three layers to model the layering in your stack.
B. Use CloudFormation Nested Stack Templates, with three child stacks to represent the three logical layers of your
cloud.
C. Use AWS Config to declare a configuration set that AWS should roll out to your cloud.
D. Use Elastic Beanstalk Linked Applications, passing the important DNS entries between layers using the metadata
interface.
Correct Answer: B
Only CloudFormation allows source controlled, declarative templates as the basis for stack automation.
Nested Stacks help achieve clean separation of layers while simultaneously providing a method to control
all layers at once when needed.
Reference:
https://blogs.aws.amazon.com/application-management/post/Tx1T9JYQOS8AB9I/Use-Nested-Stacks-toCreateReusable-Templates-and-Support-Role-Specialization
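
Here is a rough sketch of answer B as a parent template. The child template URLs, parameters, and outputs are invented placeholders; the idea is that each logical layer is a separate, source-controlled child stack while the parent tracks and controls all three at once:

```yaml
Resources:
  NetworkLayer:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/network.yaml   # placeholder
  DataLayer:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/data.yaml      # placeholder
      Parameters:
        VpcId: !GetAtt NetworkLayer.Outputs.VpcId                       # assumed child output
  AppLayer:
    Type: AWS::CloudFormation::Stack
    Properties:
      TemplateURL: https://s3.amazonaws.com/my-templates/app.yaml       # placeholder
      Parameters:
        SubnetIds: !GetAtt NetworkLayer.Outputs.PrivateSubnetIds        # assumed child output
```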

QUESTION 5
A Development team uses AWS CodeCommit for source code control. Developers apply their changes to various
feature branches and create pull requests to move those changes to the master branch when they are ready for
production. A direct push to the master branch should not be allowed. The team applied the AWS managed policy
AWSCodeCommitPowerUser to the Developers' IAM Role, but now members are able to push to the master branch
directly on every repository in the AWS account. What actions should be taken to restrict this?
A. Create an additional policy to include a deny rule for the codecommit:GitPush action, and include a restriction for the
specific repositories in the resource statement with a condition for the master reference.
B. Remove the IAM policy and add an AWSCodeCommitReadOnly policy. Add an allow rule for the codecommit:GitPush
action for the specific repositories in the resource statement with a condition for the master reference.
C. Modify the IAM policy and include a deny rule for the codecommit:GitPush action for the specific repositories in the
resource statement with a condition for the master reference.
D. Create an additional policy to include an allow rule for the codecommit:GitPush action and include a restriction for the
specific repositories in the resource statement with a condition for the feature branches reference.
Correct Answer: A
Reference:
https://aws.amazon.com/pt/blogs/devops/refining-access-to-branches-in-aws-codecommit/
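
Loosely following the AWS blog post referenced above, answer A could look roughly like this CloudFormation sketch; the account ID and repository name are placeholders, and only the GitPush action is shown:

```yaml
Resources:
  DenyPushToMaster:
    Type: AWS::IAM::ManagedPolicy
    Properties:
      Description: Attach alongside AWSCodeCommitPowerUser to block direct pushes to master
      PolicyDocument:
        Version: "2012-10-17"
        Statement:
          - Effect: Deny
            Action:
              - codecommit:GitPush
            Resource: arn:aws:codecommit:us-east-1:111122223333:MyDemoRepo   # placeholder
            Condition:
              StringEqualsIfExists:
                "codecommit:References":
                  - refs/heads/master
              "Null":
                "codecommit:References": "false"
```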

QUESTION 6
Your system automatically provisions EIPs to EC2 instances in a VPC on boot. The system provisions the whole VPC
and stack at once. You have two of them per VPC. On your new AWS account, your attempt to create a Development
environment failed, after successfully creating Staging and Production environments in the same region. What
happened?
A. You didn't choose the Development version of the AMI you are using.
B. You didn't set the Development flag to true when deploying EC2 instances.
C. You hit the soft limit of 5 EIPs per region and requested a 6th.
D. You hit the soft limit of 2 VPCs per region and requested a 3rd.
Correct Answer: C
There is a soft limit of 5 EIPs per Region for VPC on new accounts. The third environment could not allocate the 6th
EIP. Reference: http://docs.aws.amazon.com/general/latest/gr/aws_service_limits.html#limits_vpc

QUESTION 7
A DevOps team manages an API running on-premises that serves as a backend for an Amazon API Gateway endpoint.
Customers have been complaining about high response latencies, which the development team has verified using the
API Gateway latency metrics in Amazon CloudWatch. To identify the cause, the team needs to collect relevant data
without introducing additional latency.
Which actions should be taken to accomplish this? (Choose two.)
A. Install the CloudWatch agent server side and configure the agent to upload relevant logs to CloudWatch.
B. Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and upload those
segments to X-Ray during each request.
C. Enable AWS X-Ray tracing in API Gateway, modify the application to capture request segments, and use the X-Ray
daemon to upload segments to X-Ray.
D. Modify the on-premises application to send log information back to API Gateway with each request.
E. Modify the on-premises application to calculate and upload statistical data relevant to the API service requests to
CloudWatch metrics.
Correct Answer: CE


QUESTION 8
A DevOps engineer has been tasked with ensuring that all Amazon S3 buckets, except for those with the word “public”
in the name, allow access only to authorized users utilizing S3 bucket policies. The security team wants to be notified
when a bucket is created without the proper policy and for the policy to be automatically updated.
Which solutions will meet these requirements?
A. Create a custom AWS Config rule that will trigger an AWS Lambda function when an S3 bucket is created or
updated. Use the Lambda function to look for S3 buckets that should be private, but that do not have a bucket policy
that enforces privacy. When such a bucket is found, invoke a remediation action and use Amazon SNS to notify the
security team.
B. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers when an S3 bucket is created. Use
an AWS Lambda function to determine whether the bucket should be private. If the bucket should be private, update the
PublicAccessBlock configuration. Configure a second EventBridge (CloudWatch Events) rule to notify the security team
using Amazon SNS when PutBucketPolicy is called.
C. Create an Amazon S3 event notification that triggers when an S3 bucket is created that does not have the word
“public” in the name. Define an AWS Lambda function as a target for this notification and use the function to apply a
new default policy to the S3 bucket. Create an additional notification with the same filter and use Amazon SNS to send
an email to the security team.
D. Create an Amazon EventBridge (Amazon CloudWatch Events) rule that triggers when a new object is created in a
bucket that does not have the word “public” in the name. Target and use an AWS Lambda function to update the
PublicAccessBlock configuration. Create an additional notification with the same filter and use Amazon SNS to send an
email to the security team.
Correct Answer: D

QUESTION 9
An e-commerce company is running a web application in an AWS Elastic Beanstalk environment. In recent months, the
average load of the Amazon EC2 instances has been increased to handle more traffic. The company would like to
improve the scalability and resilience of the environment. The Development team has been asked to decouple long-running tasks from the environment if the tasks can be executed asynchronously. Examples of these tasks include
confirmation emails when users are registered to the platform, and processing images or videos. Also, some of the
periodic tasks that are currently running within the web server should be offloaded.
What is the MOST time-efficient and integrated way to achieve this?
A. Create an Amazon SQS queue and send the tasks that should be decoupled from the Elastic Beanstalk web server
environment to the SQS queue. Create a fleet of EC2 instances under an Auto Scaling group. Use an AMI that contains
the application to process the asynchronous tasks, configure the application to listen for messages within the SQS
queue, and create periodic tasks by placing those into the cron in the operating system. Create an environment variable
within the Elastic Beanstalk environment with a value pointing to the SQS queue endpoint.
B. Create a second Elastic Beanstalk worker tier environment and deploy the application to process the asynchronous
tasks there. Send the tasks that should be decoupled from the original Elastic Beanstalk web server environment to the
auto-generated Amazon SQS queue by the Elastic Beanstalk worker environment. Place a cron.yaml file within the root
of the application source bundle for the worker environment for periodic tasks. Use environment links to link the web
server environment with the worker environment.
C. Create a second Elastic Beanstalk web server tier environment and deploy the application to process the
asynchronous tasks. Send the tasks that should be decoupled from the original Elastic Beanstalk web server to the auto-generated Amazon SQS queue by the second Elastic Beanstalk web server tier environment. Place a cron.yaml file
within the root of the application source bundle for the second web server tier environment with the necessary periodic
tasks. Use environment links to link both web server environments.
D. Create an Amazon SQS queue and send the tasks that should be decoupled from the Elastic Beanstalk web server
environment to the SQS queue. Create a fleet of EC2 instances under an Auto Scaling group. Install and configure the
application to listen for messages within the SQS queue from UserData and create periodic tasks by placing those into
the cron in the operating system. Create an environment variable within the Elastic Beanstalk web server environment
with a value pointing to the SQS queue endpoint.
Correct Answer: B
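
As a reminder of the periodic-task piece of answer B, here is a small cron.yaml sketch (task name, URL, and schedule are invented). Elastic Beanstalk reads this file from the root of the worker environment's source bundle and POSTs to the given path on the schedule, so the periodic jobs no longer run on the web servers:

```yaml
# cron.yaml - placed at the root of the worker environment's source bundle
version: 1
cron:
  - name: nightly-cleanup            # hypothetical task name
    url: /periodic/cleanup           # the worker application handles this POST
    schedule: "0 3 * * *"
```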

QUESTION 10
Your application requires a fault-tolerant, low-latency, and repeatable method to load configuration files via
Auto Scaling when Amazon Elastic Compute Cloud (EC2) instances launch.
Which approach should you use to satisfy these requirements?
A. Securely copy the content from a running Amazon EC2 instance.
B. Use an Amazon EC2 UserData script to copy the configuration files from an Amazon Simple Storage Service (S3) bucket.
C. Use a script via cfn-init to pull content hosted in an Amazon ElastiCache cluster.
D. Use a script via cfn-init to pull content hosted on your on-premises server.
E. Use an Amazon EC2 UserData script to pull content hosted on your on-premises server.
Correct Answer: B
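
A rough CloudFormation sketch of answer B (bucket, key, and AMI ID are placeholders): the UserData script copies the configuration file from Amazon S3 at boot, so the same repeatable, low-latency step runs on every instance the Auto Scaling group launches. The instance profile is assumed to allow s3:GetObject on the bucket.

```yaml
Resources:
  LaunchConfig:
    Type: AWS::AutoScaling::LaunchConfiguration
    Properties:
      ImageId: ami-0123456789abcdef0               # placeholder AMI ID
      InstanceType: t3.micro
      IamInstanceProfile: !Ref AppInstanceProfile  # assumed profile with s3:GetObject
      UserData:
        Fn::Base64: |
          #!/bin/bash
          aws s3 cp s3://my-config-bucket/app/config.json /etc/app/config.json
```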


QUESTION 11
An n-tier application requires a table in an Amazon RDS MySQL DB instance to be dropped and repopulated at each
deployment. This process can take several minutes and the web tier cannot come online until the process is complete.
Currently, the web tier is configured in an Amazon EC2 Auto Scaling group, with instances being terminated and
replaced at each deployment. The MySQL table is populated by running a SQL query through an AWS CodeBuild job.
What should be done to ensure that the web tier does not come online before the database is completely configured?
A. Use Amazon Aurora as a drop-in replacement for RDS MySQL. Use snapshots to populate the table with the correct
data.
B. Modify the launch configuration of the Auto Scaling group to pause user data execution for 600 seconds, allowing the
table to be populated.
C. Use AWS Step Functions to monitor and maintain the state of data population. Mark the database in service before
continuing with the deployment.
D. Use an EC2 Auto Scaling lifecycle hook to pause the configuration of the web tier until the table is populated.
Correct Answer: D
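
To show how answer D works in practice, here is a lifecycle-hook sketch with invented names: instances launched by the web-tier Auto Scaling group wait in the Pending:Wait state and only enter service after something (for example, a step that runs once the CodeBuild job has repopulated the table) calls aws autoscaling complete-lifecycle-action with a CONTINUE result; if nothing does, the hook times out and the instance is abandoned.

```yaml
Resources:
  WebTierLaunchHook:
    Type: AWS::AutoScaling::LifecycleHook
    Properties:
      AutoScalingGroupName: !Ref WebTierAutoScalingGroup   # assumed ASG resource
      LifecycleTransition: autoscaling:EC2_INSTANCE_LAUNCHING
      HeartbeatTimeout: 900        # seconds to wait for the database to be ready
      DefaultResult: ABANDON       # do not put the instance in service on timeout
```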

QUESTION 12
You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport
mechanism for API call events?
A. AWS SQS
B. AWS Lambda
C. AWS Kinesis
D. AWS SNS
Correct Answer: C
Explanation: AWS Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order programmatic events, making it ideal for replicating API calls across systems. A typical Amazon Kinesis Streams
application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon
Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards,
used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS
services. For information about Streams features and pricing, see Amazon Kinesis Streams.
Reference:
http://docs.aws.amazon.com/kinesis/latest/dev/introduction.html


QUESTION 13
A DevOps Engineer is developing a deployment strategy that will allow for data-driven decisions before a feature is fully
approved for general availability. The current deployment process uses AWS CloudFormation and blue/green-style
deployments. The development team has decided that customers should be randomly assigned to groups, rather than
using a set percentage, and redirects should be avoided. What process should be followed to implement the new
deployment strategy?
A. Configure Amazon Route 53 weighted records for the blue and green stacks, with 50% of traffic configured to route to
each stack.
B. Configure Amazon CloudFront with an AWS Lambda@Edge function to set a cookie when CloudFront receives a
request. Assign the user to a version A or B, and configure the web server to redirect to version A or B.
C. Configure Amazon CloudFront with an AWS Lambda@Edge function to set a cookie when CloudFront receives a
request. Assign the user to a version A or B, then return the corresponding version to the viewer.
D. Configure Amazon Route 53 with an AWS Lambda function to set a cookie when Amazon CloudFront receives a
request. Assign the user to version A or B, then return the corresponding version to the viewer.
Correct Answer: C

Welcome to download the valid Pass4itsure DOP-C01 pdf

Free download: Google Drive
Amazon AWS DOP-C01 pdf https://drive.google.com/file/d/1RovXbw8hcBZyaxeONfBPpYvhw7pNSir0/view?usp=sharing

Pass4itsure latest Amazon exam dumps coupon code free share

Summary:

New Amazon DOP-C01 exam questions from Pass4itsure DOP-C01 dumps! Welcome to download the newest Pass4itsure DOP-C01 dumps https://www.pass4itsure.com/aws-devops-engineer-professional.html (537 Q&As), verified the latest DOP-C01 practice test questions with relevant answers.

Amazon AWS DOP-C01 dumps pdf free share https://drive.google.com/file/d/1RovXbw8hcBZyaxeONfBPpYvhw7pNSir0/view?usp=sharing

[2021.3] Valid Amazon AWS DOP-C01 Practice Questions Free Share From Pass4itsure

Amazon AWS DOP-C01 is difficult. But with the Pass4itsure DOP-C01 dumps https://www.pass4itsure.com/aws-devops-engineer-professional.html preparation material, candidates can pass it easily. With the DOP-C01 practice tests, you can practice on questions just like those in the actual exam. If you master the tricks you gain through practice, it will be easier to achieve your target score.

Amazon AWS DOP-C01 pdf free https://drive.google.com/file/d/16BQYHcZSuBYjN6O-LTQQEQB0RP7AItCB/view?usp=sharing

Latest Amazon DOP-C01 dumps Practice test video tutorial

Latest Amazon AWS DOP-C01 practice exam questions at here:

QUESTION 1
A company has multiple development groups working in a single shared AWS account. The Senior Manager of the
groups wants to be alerted via a third-party API call when the creation of resources approaches the service limits for the
account.
Which solution will accomplish this with the LEAST amount of development effort?
A. Create an Amazon CloudWatch Event rule that runs periodically and targets an AWS Lambda function. Within the
Lambda function, evaluate the current state of the AWS environment, and compare deployed resource values to
resource limits on the account. Notify the Senior Manager if the account is approaching a service limit.
B. Deploy an AWS Lambda function that refreshes AWS Trusted Advisor checks and configure an Amazon
CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event
pattern matching Trusted Advisor events and a target Lambda function. In the target Lambda function, notify the Senior
Manager.
C. Deploy an AWS Lambda function that refreshes AWS Personal Health Dashboard checks, and configure an Amazon
CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event
pattern matching Personal Health Dashboard events and a target Lambda function. In the target Lambda function, notify
the Senior Manager.
D. Add an AWS Config custom rule that runs periodically, checks the AWS service limit status, and streams notifications
to an Amazon SNS topic. Deploy an AWS Lambda function that notifies the Senior Manager, and subscribe the Lambda
function to the SNS topic.
Correct Answer: D

QUESTION 2
A DevOps engineer notices that all Amazon EC2 instances running behind an Application Load Balancer in an Auto
Scaling group are failing to respond to user requests. The EC2 instances are also failing target group HTTP health
checks.
Upon inspection, the engineer notices the application process was not running in any EC2 instances. There are a
significant number of out-of-memory messages in the system logs. The engineer needs to improve the resilience of the
application to cope with a potential application memory leak. Monitoring and notifications should be enabled to alert
when there is an issue.
Which combination of actions will meet these requirements? (Choose two.)
A. Change the Auto Scaling configuration to replace the instances when they fail the load balancer's health checks.
B. Change the target group health check HealthCheckIntervalSeconds parameter to reduce the interval between health
checks.
C. Change the target group health checks from HTTP to TCP to check if the port where the application is listening is
reachable.
D. Enable the available memory consumption metric within the Amazon CloudWatch dashboard for the entire Auto
Scaling group. Create an alarm when the memory utilization is high. Associate an Amazon SNS topic to the alarm to
receive notifications when the alarm goes off.
E. Use the Amazon CloudWatch agent to collect the memory utilization of the EC2 instances in the Auto Scaling group.
Create an alarm when the memory utilization is high and associate an Amazon SNS topic to receive a notification.
Correct Answer: DE

QUESTION 3
A company is using AWS CodePipeline to automate its release pipeline. AWS CodeDeploy is being used in the pipeline
to deploy an application to Amazon ECS using the blue/green deployment model. The company wants to implement
scripts to test the green version of the application before shifting traffic. These scripts will complete in 5 minutes or less.
If errors are discovered during these tests, the application must be rolled back.
Which strategy will meet these requirements?
A. Add a stage to the CodePipeline pipeline between the source and deploy stages. Use AWS CodeBuild to create an
execution environment and build commands in the build spec file to invoke test scripts. If errors are found, use the AWS
deploy stop-deployment command to stop the deployment.
B. Add a stage to the CodePipeline pipeline between the source and deploy stages. Use this stage to execute an AWS
Lambda function that will run the test scripts. If errors are found, use the AWS deploy stop-deployment command to stop
the deployment.
C. Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTestTraffic lifecycle event to invoke an AWS
Lambda function to run the test scripts. If errors are found, exit the Lambda function with an error to trigger a rollback.
D. Add a hooks section to the CodeDeploy AppSpec file. Use the AfterAllowTraffic lifecycle event to invoke the test
scripts. If errors are found, use the AWS deploy stop-deployment CLI command to stop the deployment.
Correct Answer: C
Reference: https://docs.aws.amazon.com/codedeploy/latest/userguide/reference-appspec-file-structurehooks.html
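
For context, the hooks section from answer C might look like the following appspec.yml sketch for an Amazon ECS blue/green deployment (the ARNs, container name, and Lambda function name are placeholders):

```yaml
# appspec.yml - CodeDeploy invokes the Lambda test function after test traffic
# is allowed to the green task set; a reported failure triggers a rollback.
version: 0.0
Resources:
  - TargetService:
      Type: AWS::ECS::Service
      Properties:
        TaskDefinition: arn:aws:ecs:us-east-1:111122223333:task-definition/web:12
        LoadBalancerInfo:
          ContainerName: web
          ContainerPort: 80
Hooks:
  - AfterAllowTestTraffic: arn:aws:lambda:us-east-1:111122223333:function:RunSmokeTests
```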

QUESTION 4
A company runs a production application workload in a single AWS account that uses Amazon Route 53,
AWS Elastic Beanstalk, and Amazon RDS. In the event of a security incident, the Security team wants the
application workload to failover to a new AWS account. The Security team also wants to block all access
to the original account immediately, with no access to any AWS resources in the original AWS account,
during forensic analysis.
What is the most cost-effective way to prepare to failover to the second account prior to a security
incident?
A. Migrate the Amazon Route 53 configuration to a dedicated AWS account. Mirror the Elastic Beanstalk configuration
in a different account. Enable RDS Database Read Replicas in a different account.
B. Migrate the Amazon Route 53 configuration to a dedicated AWS account. Save/copy the Elastic Beanstalk
configuration files in a different AWS account. Copy snapshots of the RDS Database to a different account.
C. Save/copy the Amazon Route 53 configurations for use in a different AWS account after an incident. Save/copy
Elastic Beanstalk configuration files to a different account. Enable the RDS database read replica in a different account.
D. Save/copy the Amazon Route 53 configurations for use in a different AWS account after an incident. Mirror the
configuration of Elastic Beanstalk in a different account. Copy snapshots of the RDS database to a different account.
Correct Answer: A


QUESTION 5
A company has an application that has predictable peak traffic times. The company wants the application instances to
scale up only during the peak times. The application stores state in Amazon DynamoDB. The application environment
uses a standard Node.js application stack and custom Chef recipes stored in a private Git repository.
Which solution is MOST cost-effective and requires the LEAST amount of management overhead when performing
rolling updates of the application environment?
A. Create a custom AMI with the Node.js environment and application stack using Chef recipes. Use the AMI in an Auto
Scaling group and set up scheduled scaling for the required times, then set up an Amazon EC2 IAM role that provides
permission to access DynamoDB.
B. Create a Docker file that uses the Chef recipes for the application environment based on an official Node.js Docker
image. Create an Amazon ECS cluster and a service for the application environment, then create a task based on this
Docker image. Use scheduled scaling to scale the containers at the appropriate times and attach a task-level IAM role
that provides permission to access DynamoDB.
C. Configure AWS OpsWorks stacks and use custom Chef cookbooks. Add the Git repository information where the
custom recipes are stored, and add a layer in OpsWorks for the Node.js application server. Then configure the custom
recipe to deploy the application in the deploy step. Configure time-based instances and attach an Amazon EC2 IAM role
that provides permission to access DynamoDB.
D. Configure AWS OpsWorks stacks and push the custom recipes to an Amazon S3 bucket and configure custom
recipes to point to the S3 bucket. Then add an application layer type for a standard Node.js application server and
configure the custom recipe to deploy the application in the deploy step from the S3 bucket. Configure time-based
instances and attach an Amazon EC2 IAM role that provides permission to access DynamoDB.
Correct Answer: D

QUESTION 6
A company is using several AWS CloudFormation templates for deploying infrastructure as code. In most
of the deployments, the company uses Amazon EC2 Auto Scaling groups. A DevOps Engineer needs to
update the AMIs for the Auto Scaling group in the template if newer AMIs are available.
How can these requirements be met?
A. Manage the AMI mappings in the CloudFormation template. Use Amazon CloudWatch Events for detecting new
AMIs and updating the mapping in the template. Reference the map in the launch configuration resource block.
B. Use conditions in the AWS CloudFormation template to check if new AMIs are available and return the AMI ID.
Reference the returned AMI ID in the launch configuration resource block.
C. Use an AWS Lambda-backed custom resource in the template to fetch the AMI IDs. Reference the returned AMI ID
in the launch configuration resource block.
D. Launch an Amazon EC2 m4 small instance and run a script on it to check for new AMIs. If new AMIs are available,
the script should update the launch configuration resource block with the new AMI ID.
Correct Answer: D

QUESTION 7
A company requires an RPO of 2 hours and an RTO of 10 minutes for its data and application at all times. An
application uses a MySQL database and Amazon EC2 web servers. The development team needs a strategy for
failover and disaster recovery.
Which combination of deployment strategies will meet these requirements? (Choose two.)
A. Create an Amazon Aurora cluster in one Availability Zone across multiple Regions as the data store. Use Aurora's
automatic recovery capabilities in the event of a disaster.
B. Create an Amazon Aurora global database in two Regions as the data store. In the event of a failure, promote the
secondary Region as the master for the application.
C. Create an Amazon Aurora multi-master cluster across multiple Regions as the data store. Use a Network Load
Balancer to balance the database traffic in different Regions.
D. Set up the application in two Regions and use Amazon Route 53 failover-based routing that points to the Application
Load Balancers in both Regions. Use health checks to determine the availability in a given Region. Use Auto Scaling
groups in each Region to adjust capacity based on demand.
E. Set up the application in two Regions and use a multi-Region Auto Scaling group behind Application Load Balancers
to manage the capacity based on demand. In the event of a disaster, adjust the Auto Scaling group's desired instance
count to increase baseline capacity in the failover Region.
Correct Answer: BE

QUESTION 8
A DevOps Engineer is tasked with moving a mission-critical business application running in Go to AWS. The
Development team running this application is understaffed and requires a solution that allows the team to focus on
application development. They also want to enable blue/green deployments and perform A/B testing.
Which solution will meet these requirements?
A. Deploy the application on an Amazon EC2 instance and create an AMI of this instance. Use this AMI to create an
automatic scaling launch configuration that is used in an Auto Scaling group. Use an Elastic Load Balancer to distribute
traffic. When changes are made to the application, a new AMI is created and replaces the launch configuration.
B. Use Amazon Lightsail to deploy the application. Store the application in a zipped format in an Amazon S3 bucket.
Use this zipped version to deploy new versions of the application to Lightsail. Use Lightsail deployment options to
manage the deployment.
C. Use AWS CodePipeline with AWS CodeDeploy to deploy the application to a fleet of Amazon EC2 instances. Use an
Elastic Load Balancer to distribute the traffic to the EC2 instances. When making changes to the application, upload a
new version to CodePipeline and let it deploy the new version.
D. Use AWS Elastic Beanstalk to host the application. Store a zipped version of the application in Amazon S3, and use
that location to deploy new versions of the application using Elastic Beanstalk to manage the deployment options.
Correct Answer: C

QUESTION 9
An n-tier application requires a table in an Amazon RDS MySQL DB instance to be dropped and repopulated at each
deployment. This process can take several minutes and the web tier cannot come online until the process is complete.
Currently, the web tier is configured in an Amazon EC2 Auto Scaling group, with instances being terminated and
replaced at each deployment. The MySQL table is populated by running a SQL query through an AWS CodeBuild job.
What should be done to ensure that the web tier does not come online before the database is completely configured?
A. Use Amazon Aurora as a drop-in replacement for RDS MySQL. Use snapshots to populate the table with the correct
data.
B. Modify the launch configuration of the Auto Scaling group to pause user data execution for 600 seconds, allowing the
table to be populated.
C. Use AWS Step Functions to monitor and maintain the state of the data population. Mark the database in service before
continuing with the deployment.
D. Use an EC2 Auto Scaling lifecycle hook to pause the configuration of the web tier until the table is populated.
Correct Answer: D

QUESTION 10
You run accounting software in the AWS cloud. This software needs to be online continuously during the
day every day of the week and has a very static requirement for computing resources. You also have other,
unrelated batch jobs that need to run once per day at any time of your choosing.
How should you minimize cost?
A. Purchase a Heavy Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch
jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
B. Purchase a Medium Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the
batch jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
C. Purchase a Light Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch
jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
D. Purchase a Full Utilization Reserved Instance to run the accounting software. Turn it off after hours. Run the batch
jobs with the same instance class, so the Reserved Instance credits are also applied to the batch jobs.
Correct Answer: A
Because the instance will always be online during the day, in a predictable manner, and there is a sequence of batch
jobs to perform at any time, we should run the batch jobs when the accounting software is off. We can achieve Heavy
Utilization by alternating these times, so we should purchase the reservation as such, as this represents the lowest cost.
There is no such thing as a "Full" utilization level for Reserved Instance purchases on EC2. Reference:
https://d0.awsstatic.com/whitepapers/Cost_Optimization_with_AWS.pdf

QUESTION 11
An application is being deployed with two Amazon EC2 Auto Scaling groups, each configured with an Application Load
Balancer. The application is deployed to one of the Auto Scaling groups and an Amazon Route 53 alias record is
pointed to the Application Load Balancer of the last deployed Auto Scaling group. Deployments alternate between the
two Auto Scaling groups. Home security devices are making requests into the application. The Development team notes
that new requests are coming into the old stack days after the deployment. The issue is caused by devices that are not
observing the Time to Live (TTL) setting on the Amazon Route 53 alias record. What steps should the DevOps Engineer
take to address the issue with requests coming to the old stacks, while creating minimal additional resources?
A. Create a fleet of Amazon EC2 instances running HAProxy behind an Application Load Balancer. The HAProxy
instances will proxy the requests to one of the existing Auto Scaling groups. After a deployment, the HAProxy instances
are updated to send requests to the newly deployed Auto Scaling group.
B. Reduce the application to one Application Load Balancer. Create two target groups named Blue and Green. Create a
rule on the Application Load Balancer pointed to a single target group. Add logic to the deployment to update the
Application Load Balancer rule to the target group of the newly deployed Auto Scaling group.
C. Move the application to an AWS Elastic Beanstalk application with two environments. Perform new deployments on
the non-live environment. After a deployment, perform an Elastic Beanstalk CNAME swap to make the newly deployed
environment the live environment.
D. Create an Amazon CloudFront distribution. Set the two existing Application Load Balancers as origins on the
distribution. After a deployment, update the CloudFront distribution behavior to send requests to the newly deployed
Auto Scaling group.
Correct Answer: B

QUESTION 12
An AWS CodePipeline pipeline has implemented a code release process. The pipeline is integrated with AWS
CodeDeploy to deploy versions of an application to multiple Amazon EC2 instances for each CodePipeline stage.
During a recent deployment, the pipeline failed due to a CodeDeploy issue. The DevOps team wants to improve
monitoring and notifications during deployment to decrease resolution times. What should the DevOps Engineer do to
create notifications when issues are discovered?
A. Implement AWS CloudWatch Logs for CodePipeline and CodeDeploy, create an AWS Config rule to evaluate code
deployment issues, and create an Amazon SNS topic to notify stakeholders of deployment issues.
B. Implement AWS CloudWatch Events for CodePipeline and CodeDeploy, create an AWS Lambda function to evaluate
code deployment issues and create an Amazon SNS topic to notify stakeholders of deployment issues.
C. Implement AWS CloudTrail to record CodePipeline and CodeDeploy API call information, create an AWS Lambda
function to evaluate code deployment issues, and create an Amazon SNS topic to notify stakeholders of deployment
issues.
D. Implement AWS CloudWatch Events for CodePipeline and CodeDeploy, create an Amazon Inspector assessment
target to evaluate code deployment issues and create an Amazon SNS topic to notify stakeholders of deployment
issues.
Correct Answer: A

QUESTION 13
An application has microservices spread across different AWS accounts and is integrated with an on-premises legacy
system for some of its functionality. Because of the segmented architecture and missing logs, every time the application
experiences issues, it is taking too long to gather the logs to identify the issues. A DevOps Engineer must fix the log
aggregation process and provide a way to centrally analyze the logs. Which is the MOST efficient and cost-effective
solution?
A. Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to
export on-premises logs, and store the logs in an S3 bucket in a central account. Build an Amazon EMR cluster to
reduce the logs and derive the root cause.
B. Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Use the Amazon S3 API to
import on-premises logs. Store all logs in S3 buckets in individual accounts. Use Amazon Macie to write a query to
search for the required specific event-related data point.
C. Collect system logs and application logs using the Amazon CloudWatch Logs agent. Install the CloudWatch Logs
agent on the on-premises servers. Transfer all logs from AWS to the on-premises data center. Use an Amazon
Elasticsearch Logstash Kibana stack to analyze logs on premises.
D. Collect system logs and application logs by using the Amazon CloudWatch Logs agent. Install a CloudWatch Logs
agent for on-premises resources. Store all logs in an S3 bucket in a central account. Set up an Amazon S3 trigger and
an AWS Lambda function to analyze incoming logs and automatically identify anomalies. Use Amazon Athena to run ad
hoc queries on the logs in the central account.
Correct Answer: C

Welcome to download the valid Pass4itsure DOP-C01 pdf

Free download: Google Drive
Amazon AWS DOP-C01 pdf https://drive.google.com/file/d/16BQYHcZSuBYjN6O-LTQQEQB0RP7AItCB/view?usp=sharing

Pass4itsure latest Amazon exam dumps coupon code free share

Summary:

New Amazon DOP-C01 exam questions from Pass4itsure DOP-C01 dumps! Welcome to download the newest Pass4itsure DOP-C01 dumps https://www.pass4itsure.com/aws-devops-engineer-professional.html (449 Q&As), verified the latest DOP-C01 practice test questions with relevant answers.

Amazon AWS DOP-C01 dumps pdf free share https://drive.google.com/file/d/16BQYHcZSuBYjN6O-LTQQEQB0RP7AItCB/view?usp=sharing

[2021.2] Valid Amazon AWS DOP-C01 Practice Questions Free Share From Pass4itsure

Amazon AWS DOP-C01 is difficult. But with the Pass4itsure DOP-C01 dumps https://www.pass4itsure.com/aws-devops-engineer-professional.html preparation material, candidates can pass it easily. With the DOP-C01 practice tests, you can practice on questions just like those in the actual exam. If you master the tricks you gain through practice, it will be easier to achieve your target score.

Amazon AWS DOP-C01 pdf free https://drive.google.com/file/d/1QiYZ9hneGiEH0l0kRuuz5CKOhSL7VN8F/view?usp=sharing

The latest Amazon AWS DOP-C01 practice exam questions are here:

QUESTION 1
What method should I use to author automation if I want to wait for a CloudFormation stack to finish completing in a
script?
A. Event subscription using SQS.
B. Event subscription using SNS.
C. Poll using ListStacks / list-stacks.
D. Poll using GetStackStatus / get-stack-status.
Correct Answer: C
Event-driven systems are good for IFTTT-style logic, but only polling will make a script wait for completion. ListStacks / list-stacks is a real method; GetStackStatus / get-stack-status is not. Reference:
http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-stacks.html
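
For reference, the polling approach looks something like this in a script. A minimal sketch using boto3 (the answer names ListStacks; boto3 also exposes the closely related DescribeStacks call used here for a single stack, and the stack name is a placeholder):

```python
# Minimal sketch: poll CloudFormation until a stack finishes creating.
# Assumes boto3 is installed and AWS credentials are configured;
# "my-stack" is a hypothetical stack name.
import time
import boto3

cfn = boto3.client("cloudformation")

def wait_for_stack(stack_name, poll_seconds=15):
    while True:
        status = cfn.describe_stacks(StackName=stack_name)["Stacks"][0]["StackStatus"]
        if not status.endswith("IN_PROGRESS"):
            return status  # e.g. CREATE_COMPLETE or CREATE_FAILED
        time.sleep(poll_seconds)

print(wait_for_stack("my-stack"))
```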


QUESTION 2
A company has multiple development groups working in a single shared AWS account. The Senior Manager of the
groups wants to be alerted via a third-party API call when the creation of resources approaches the service limits for the
account.
Which solution will accomplish this with the LEAST amount of development effort?
A. Create an Amazon CloudWatch Event rule that runs periodically and targets an AWS Lambda function. Within the
Lambda function, evaluate the current state of the AWS environment and compare deployed resource values to
resource limits on the account. Notify the Senior Manager if the account is approaching a service limit.
B. Deploy an AWS Lambda function that refreshes AWS Trusted Advisor checks, and configure an Amazon
CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event
pattern matching Trusted Advisor events and a target Lambda function. In the target Lambda function, notify the Senior
Manager.
C. Deploy an AWS Lambda function that refreshes AWS Personal Health Dashboard checks, and configure an Amazon
CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event
pattern matching Personal Health Dashboard events and a target Lambda function. In the target Lambda function, notify
the Senior Manager.
D. Add an AWS Config custom rule that runs periodically, checks the AWS service limit status, and streams notifications
to an Amazon SNS topic. Deploy an AWS Lambda function that notifies the Senior Manager, and subscribe the Lambda
function to the SNS topic.
Correct Answer: D

QUESTION 3
A company runs a three-tier web application in its production environment, which is built on a single AWS
CloudFormation template made up of Amazon EC2 instances behind an ELB Application Load Balancer. The instances
run in an EC2
Auto Scaling group across multiple Availability Zones. Data is stored in an Amazon RDS Multi-AZ DB instance with read
replicas. Amazon Route 53 manages the application's public DNS record. A DevOps Engineer must create a workflow
to
mitigate a failed software deployment by rolling back changes in the production environment when a software cutover
occurs for new application software.
What steps should the Engineer perform to meet these requirements with the LEAST amount of downtime?
A. Use CloudFormation to deploy an additional staging environment and configure the Route 53 DNS with weighted
records. During cutover, change the Route 53 A record weights to achieve an even traffic distribution between the two
environments. Validate the traffic in the new environment and immediately terminate the old environment if tests are
successful.
B. Use a single AWS Elastic Beanstalk environment to deploy the staging and production environments. Update the
environment by uploading the ZIP file with the new application code. Swap the Elastic Beanstalk environment CNAME.
Validate the traffic in the new environment and immediately terminate the old environment if tests are successful.
C. Use a single AWS Elastic Beanstalk environment and an AWS OpsWorks environment to deploy the staging and
production environments. Update the environment by uploading the ZIP file with the new application code into the
Elastic Beanstalk environment deployed with the OpsWorks stack. Validate the traffic in the new environment and
immediately terminate the old environment if tests are successful.
D. Use AWS CloudFormation to deploy an additional staging environment, and configure the Route 53 DNS with
weighted records. During cutover, increase the weight distribution to have more traffic directed to the new staging
environment as workloads are successfully validated. Keep the old production environment in place until the new
staging environment handles all traffic.
Correct Answer: D
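
The gradual weight shift described in option D can be scripted. Below is a rough sketch using boto3; the hosted zone ID, record name, and ELB DNS names are placeholders, not values from the question:

```python
# Rough sketch: shift traffic gradually by updating Route 53 weighted records.
# Hosted zone ID, record name, and ELB DNS names are hypothetical.
import boto3

route53 = boto3.client("route53")

def set_weights(old_weight, new_weight):
    route53.change_resource_record_sets(
        HostedZoneId="Z0000000EXAMPLE",
        ChangeBatch={
            "Comment": "Shift traffic toward the staging environment",
            "Changes": [
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "CNAME",
                        "SetIdentifier": "production",
                        "Weight": old_weight,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "prod-elb.us-east-1.elb.amazonaws.com"}],
                    },
                },
                {
                    "Action": "UPSERT",
                    "ResourceRecordSet": {
                        "Name": "app.example.com",
                        "Type": "CNAME",
                        "SetIdentifier": "staging",
                        "Weight": new_weight,
                        "TTL": 60,
                        "ResourceRecords": [{"Value": "staging-elb.us-east-1.elb.amazonaws.com"}],
                    },
                },
            ],
        },
    )

# Example: move 25% of traffic to the new environment.
set_weights(old_weight=75, new_weight=25)
```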

QUESTION 4
What is required to achieve gigabit network throughput on EC2? You already selected cluster-compute, 10-gigabit instances
with enhanced networking, and your workload is already network-bound, but you are not seeing 10-gigabit speeds.
A. Enable biplex networking on your servers, so packets are non-blocking in both directions and there's no switching
overhead.
B. Ensure the instances are in different VPCs so you don't saturate the Internet Gateway on any one VPC.
C. Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput.
D. Use a placement group for your instances so the instances are physically near each other in the same Availability
Zone.
Correct Answer: D
You are not guaranteed 10-gigabit performance, except within a placement group. A placement group is a logical
grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a
low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network
latency, high network throughput, or both. Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placementgroups.html
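
For illustration, here is a minimal boto3 sketch of creating a cluster placement group and launching instances into it; the AMI ID and instance type are placeholders:

```python
# Minimal sketch: create a cluster placement group and launch instances into it.
# AMI ID, instance type, and counts are hypothetical placeholders.
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",        # placeholder AMI
    InstanceType="c5n.18xlarge",             # an enhanced-networking instance type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},  # keep instances physically close together
)
```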

QUESTION 5
A DevOps Engineer is leading the implementation for automating patching of Windows-based workstations in a hybrid
cloud environment by using AWS Systems Manager (SSM). What steps should the Engineer follow to set up Systems
Manager to automate patching in this environment? (Select TWO.)
A. Create multiple IAM service roles for Systems Manager so that the ssm.amazonaws.com service can execute the
AssumeRole operation on every instance. Register the role on a per-resource level to enable the creation of a service
token. Perform managed-instance activation with the newly created service role attached to each managed instance.
B. Create an IAM service role for Systems Manager so that the ssm.amazonaws.com service can execute the
AssumeRole operation. Register the role to enable the creation of a service token. Perform managed-instance activation
with the newly created service role.
C. Using previously obtained activation codes and activation IDs, download and install the SSM Agent on the hybrid
servers, and register the servers or virtual machines on the Systems Manager service. Hybrid instances will show with
an “mi-” prefix in the SSM console.
D. Using previously obtained activation codes and activation IDs, download and install the SSM Agent on the hybrid
servers, and register the servers or virtual machines on the Systems Manager service. Hybrid instances will show with
an “i-” prefix in the SSM console as if they were provisioned as a regular Amazon EC2 instance.
E. Run AWS Config to create a list of instances that are unpatched and not compliant. Create an instance scheduler job,
and through an AWS Lambda function, perform the instance patching to bring them up to compliance.
Correct Answer: BE
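
For reference, the managed-instance activation mentioned in option B can be created with a single API call. A minimal boto3 sketch follows; the role name and registration limit are hypothetical:

```python
# Rough sketch: create a managed-instance activation for hybrid servers.
# The IAM role name and registration limit are hypothetical; the returned
# activation code and ID are used when registering the on-premises SSM Agent.
import boto3

ssm = boto3.client("ssm")

activation = ssm.create_activation(
    Description="Hybrid Windows workstation patching",
    DefaultInstanceName="hybrid-workstation",
    IamRole="SSMServiceRoleForManagedInstances",  # hypothetical service role name
    RegistrationLimit=100,
)

print("Activation ID:  ", activation["ActivationId"])
print("Activation code:", activation["ActivationCode"])
```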


QUESTION 6
You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport
mechanism for API call events?
A. AWS SQS
B. AWS Lambda
C. AWS Kinesis
D. AWS SNS
Correct Answer: C
Amazon Kinesis is an event streaming service. Streams can act as buffers and transports across systems for in-order
programmatic events, making it ideal for replicating API calls across systems. A typical Amazon Kinesis Streams
application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon
Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards,
used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS
services. For information about Streams features and pricing, see Amazon Kinesis Streams. Reference:
http://docs.aws.amazon.com/kinesis/latest/dev/introduction.html
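
To make the buffer-and-transport idea concrete, here is a minimal boto3 sketch of a producer writing an API-call event to a stream and a consumer reading it back. The stream name and event shape are placeholders, and a real consumer would normally use the Kinesis Client Library and iterate over all shards:

```python
# Minimal sketch: publish API-call events to a Kinesis stream on one system
# and read them back on another. Stream name and event shape are hypothetical.
import json
import boto3

kinesis = boto3.client("kinesis")
STREAM = "api-call-events"  # hypothetical stream name

# Producer side: buffer an API call as an ordered record.
kinesis.put_record(
    StreamName=STREAM,
    Data=json.dumps({"action": "CreateOrder", "payload": {"id": 42}}).encode(),
    PartitionKey="CreateOrder",  # records with the same key stay in order
)

# Consumer side: read records from the first shard (illustration only).
shard_id = kinesis.describe_stream(StreamName=STREAM)["StreamDescription"]["Shards"][0]["ShardId"]
iterator = kinesis.get_shard_iterator(
    StreamName=STREAM, ShardId=shard_id, ShardIteratorType="TRIM_HORIZON"
)["ShardIterator"]
for record in kinesis.get_records(ShardIterator=iterator)["Records"]:
    print(json.loads(record["Data"]))
```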

QUESTION 7
If you're trying to configure an AWS Elastic Beanstalk worker tier for easy debugging when there are problems finishing
queue jobs, what should you configure?
A. Configure Rolling Deployments.
B. Configure Enhanced Health Reporting
C. Configure Blue-Green Deployments.
D. Configure a Dead Letter Queue
Correct Answer: D
Elastic Beanstalk worker environments support Amazon Simple Queue Service (SQS) dead letter queues. A dead letter
queue is a queue where other (source) queues can send messages that for some reason could not be successfully
processed. A primary benefit of using a dead letter queue is the ability to sideline and isolate the unsuccessfully
processed messages. You can then analyze any messages sent to the dead letter queue to try to determine why they
were not successfully processed. Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html#worker-deadletter
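
For illustration, wiring a source queue to a dead letter queue looks like this with boto3. Queue names and maxReceiveCount are placeholders; an Elastic Beanstalk worker environment that uses a custom worker queue can attach a dead letter queue to it in the same way:

```python
# Minimal sketch: attach a dead letter queue to a source SQS queue so that
# messages that repeatedly fail processing are sidelined for inspection.
# Queue names and maxReceiveCount are hypothetical.
import json
import boto3

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(QueueName="worker-jobs-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

source_url = sqs.create_queue(QueueName="worker-jobs")["QueueUrl"]
sqs.set_queue_attributes(
    QueueUrl=source_url,
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```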

QUESTION 8
A highly regulated company has a policy that DevOps Engineers should not log in to their Amazon EC2 instances
except in emergencies. If a DevOps Engineer does log in, the Security team must be notified within 15 minutes of the
occurrence.
Which solution will meet these requirements?
A. Install the Amazon Inspector agent on each EC2 instance. Subscribe to Amazon CloudWatch Events notifications.
Trigger an AWS Lambda function to check if a message is about user logins. If it is, send a notification to the Security
team using Amazon SNS.
B. Install the Amazon CloudWatch agent on each EC2 instance. Configure the agent to push all logs to Amazon
CloudWatch Logs and set up a CloudWatch metric filter that searches for user logins. If a login is found, send a
notification to the Security team using Amazon SNS.
C. Set up AWS CloudTrail with Amazon CloudWatch Logs. Subscribe CloudWatch Logs to Amazon Kinesis. Attach
AWS Lambda to Kinesis to parse and determine if a log contains a user login. If it does, send a notification to the
Security team using Amazon SNS.
D. Set up a script on each Amazon EC2 instance to push all logs to Amazon S3. Set up an S3 event to trigger an AWS
Lambda function, which triggers an Amazon Athena query to run. The Athena query checks for logins and sends the
output to the Security team using Amazon SNS.
Correct Answer: A

QUESTION 9
When thinking of AWS Elastic Beanstalk, which statement is true?
A. Worker tiers pull jobs from SNS.
B. Worker tiers pull jobs from HTTP.
C. Worker tiers pull jobs from JSON.
D. Worker tiers pull jobs from SQS.
Correct Answer: D
Elastic Beanstalk installs a daemon on each Amazon EC2 instance in the Auto Scaling group to process Amazon SQS
messages in the worker environment. The daemon pulls data off the Amazon SQS queue, inserts it into the message
body of an HTTP POST request, and sends it to a user-configurable URL path on the local host. The content type for
the message body within an HTTP POST request is application/json by default.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
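
A rough sketch of that pull-and-POST loop is shown below. It only illustrates the behavior (hypothetical queue URL and worker path); it is not the actual sqsd daemon:

```python
# Illustrative sketch of what the worker-tier daemon does: pull a message from
# SQS and POST its body to a local HTTP path. Queue URL and path are hypothetical.
import boto3
import urllib.request

sqs = boto3.client("sqs")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/worker-jobs"  # placeholder

messages = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1).get("Messages", [])
for message in messages:
    request = urllib.request.Request(
        "http://localhost/worker",                     # user-configurable URL path
        data=message["Body"].encode(),
        headers={"Content-Type": "application/json"},  # default content type
        method="POST",
    )
    with urllib.request.urlopen(request) as response:
        if response.status == 200:
            # Delete the message only after the application handled it successfully.
            sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=message["ReceiptHandle"])
```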

QUESTION 10
You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple
minutes of lag or slowness during a large failure event, but the system should recover with normal operation after those
few minutes.
What is a good approach?
A. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region.
Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53
Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.
B. Set up a DynamoDB Multi-Region table. Create an Auto Scaling Group behind an ELB in each of the two regions
DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as
the resource records.
C. Set up a DynamoDB Multi-Region table. Create a cross-region ELB pointing to a cross-region Auto Scaling Group,
and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.
D. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region.
Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record
with DNS Failover to the cross-region ELB.
Correct Answer: A
There is no such thing as a cross-region ELB or a cross-region Auto Scaling group, and (at the time this question was
written) there was no managed multi-Region DynamoDB table. The only option that makes sense is cross-region replication
with an ELB and Auto Scaling group in each region, using Route 53 latency-based records with DNS failover. (DynamoDB
Global Tables now provide managed multi-Region replication, so this rationale reflects the service landscape when the
question was written.)
Reference:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html
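
That replication mechanism is driven by DynamoDB Streams. A minimal boto3 sketch of enabling a stream on the source table (the table name is a placeholder):

```python
# Minimal sketch: enable a DynamoDB stream on the source table. Cross-region
# replication (and today's Global Tables) consume this stream of item changes.
# The table name is hypothetical.
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.update_table(
    TableName="posts",
    StreamSpecification={
        "StreamEnabled": True,
        "StreamViewType": "NEW_AND_OLD_IMAGES",  # replicate full item changes
    },
)
```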

QUESTION 11
A DevOps Engineer discovered a sudden spike in a website's page load times and found that a recent deployment
occurred. A brief diff of the related commit shows that the URL for an external API call was altered and the connecting
port
changed from 80 to 443. The external API has been verified and works outside the application. The application logs
show that the connection is now timing out, resulting in multiple retries and eventual failure of the call.
Which debug steps should the Engineer take to determine the root cause of the issue?
A. Check the VPC Flow Logs looking for denies originating from Amazon EC2 instances that are part of the web Auto
Scaling group. Check the ingress security group rules and routing rules for the VPC.
B. Check the existing egress security group rules and network ACLs for the VPC. Also check the application logs being
written to Amazon CloudWatch Logs for debug information.
C. Check the egress security group rules and network ACLs for the VPC. Also check the VPC flow logs looking for
accepts originating from the web Auto Scaling group.
D. Check the application logs being written to Amazon CloudWatch Logs for debug information. Check the ingress
security group rules and routing rules for the VPC.
Correct Answer: C

QUESTION 12
A business has an application that consists of five independent AWS Lambda functions.
The DevOps Engineer has built a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild that builds, tests,
packages, and deploys each Lambda function in sequence. The pipeline uses an Amazon CloudWatch Events rule to
ensure
the pipeline execution starts as quickly as possible after a change is made to the application source code.
After working with the pipeline for a few months, the DevOps Engineer has noticed that the pipeline takes too long to
complete.
What should the DevOps Engineer implement to BEST improve the speed of the pipeline?
A. Modify the CodeBuild projects within the pipeline to use a compute type with more available network throughput.
B. Create a custom CodeBuild execution environment that includes a symmetric multiprocessing configuration to run the
builds in parallel.
C. Modify the CodePipeline configuration to execute actions for each Lambda function in parallel by specifying the same
runOrder.
D. Modify each CodeBuild project to run within a VPC and use dedicated instances to increase throughput.
Correct Answer: D

QUESTION 13
A company has a website in an AWS Elastic Beanstalk load balancing and automatic scaling environment. This environment has an Amazon RDS MySQL instance configured as its database resource. After a sudden increase in
traffic, the website started dropping traffic. An administrator discovered that the application on some instances is not
responding as the result of out-of-memory errors. Classic Load Balancer marked those instances as out of service, and
the health status of Elastic Beanstalk enhanced health reporting is degraded. However, Elastic Beanstalk did not
replace those instances. Because of the diminished capacity behind the Classic Load Balancer, the application
response times are slower for the customers. Which action will permanently fix this issue?
A. Clone the Elastic Beanstalk environment. When the new environment is up, swap CNAME and terminate the earlier
environment.
B. Temporarily change the maximum number of instances in the Auto Scaling group to allow the group to support more
traffic.
C. Change the setting for the Auto Scaling group health check from Amazon EC2 to Elastic Load Balancing, and
increase the capacity of the group.
D. Write a cron script for restarting the web server process when memory is full, and deploy it with AWS Systems
Manager.
Correct Answer: C
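
The change described in option C amounts to a single Auto Scaling API call. A minimal boto3 sketch follows (the group name, grace period, and sizes are placeholders); note that in an Elastic Beanstalk environment this setting is normally applied through the environment configuration, for example via .ebextensions, so Beanstalk does not overwrite it:

```python
# Minimal sketch: switch an Auto Scaling group to ELB health checks and raise
# capacity. Group name, grace period, and sizes are hypothetical.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-beanstalk-asg",
    HealthCheckType="ELB",          # replace instances the load balancer marks unhealthy
    HealthCheckGracePeriod=300,
    MinSize=4,
    MaxSize=8,
)
```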

Welcome to download the valid Pass4itsure DOP-C01 pdf

Free download: Google Drive
Amazon AWS DOP-C01 pdf https://drive.google.com/file/d/1QiYZ9hneGiEH0l0kRuuz5CKOhSL7VN8F/view?usp=sharing

Summary:

New Amazon DOP-C01 exam questions from Pass4itsure DOP-C01 dumps! Welcome to download the newest Pass4itsure DOP-C01 dumps https://www.pass4itsure.com/aws-devops-engineer-professional.html (362 Q&As), which contain the latest verified DOP-C01 practice test questions with relevant answers.

Amazon AWS DOP-C01 dumps pdf free share https://drive.google.com/file/d/1QiYZ9hneGiEH0l0kRuuz5CKOhSL7VN8F/view?usp=sharing