
[2021.2] Valid Amazon AWS DOP-C01 Practice Questions Free Share From Pass4itsure

Amazon AWS DOP-C01 is difficult, but with the Pass4itsure DOP-C01 dumps https://www.pass4itsure.com/aws-devops-engineer-professional.html preparation material, candidates can pass it easily. The DOP-C01 practice tests let you practice on the same kind of questions as the actual exam. If you master the techniques you gain through practice, it will be easier to achieve your target score.

Amazon AWS DOP-C01 pdf free https://drive.google.com/file/d/1QiYZ9hneGiEH0l0kRuuz5CKOhSL7VN8F/view?usp=sharing

Latest Amazon AWS DOP-C01 practice exam questions at here:

QUESTION 1
What method should I use to author automation if I want a script to wait for a CloudFormation stack to finish?
A. Event subscription using SQS.
B. Event subscription using SNS.
C. Poll using ListStacks / list-stacks.
D. Poll using GetStackStatus / get-stack-status.
Correct Answer: C
Event-driven systems are good for IFTTT logic, but only polling will make a script wait for completion. ListStacks / list-stacks is a real method; GetStackStatus / get-stack-status is not. Reference:
http://docs.aws.amazon.com/cli/latest/reference/cloudformation/list-stacks.html
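
To make option C concrete, here is a minimal polling sketch using boto3 (the SDK choice is my assumption; the question is language-agnostic), with a hypothetical stack name:

```python
# Minimal sketch: poll list_stacks until the stack leaves an *_IN_PROGRESS state.
# "my-stack" is a hypothetical stack name.
import time
import boto3

cfn = boto3.client("cloudformation")

def wait_for_stack(stack_name, poll_seconds=15):
    while True:
        for page in cfn.get_paginator("list_stacks").paginate():
            for summary in page["StackSummaries"]:
                if summary["StackName"] != stack_name:
                    continue
                # list_stacks also returns recently deleted stacks; skip those.
                if summary["StackStatus"] == "DELETE_COMPLETE":
                    continue
                if not summary["StackStatus"].endswith("_IN_PROGRESS"):
                    return summary["StackStatus"]
        time.sleep(poll_seconds)

print(wait_for_stack("my-stack"))
```

boto3 also ships waiters (for example, get_waiter("stack_create_complete")) that implement this polling loop for you.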


QUESTION 2
A company has multiple development groups working in a single shared AWS account. The Senior Manager of the
groups wants to be alerted via a third-party API call when the creation of resources approaches the service limits for the
account.
Which solution will accomplish this with the LEAST amount of development effort?
A. Create an Amazon CloudWatch Event rule that runs periodically and targets an AWS Lambda function. Within the
Lambda function, evaluate the current state of the AWS environment and compare deployed resource values to
resource limits on the account. Notify the Senior Manager if the account is approaching a service limit.
B. Deploy an AWS Lambda function that refreshes AWS Trusted Advisor checks, and configure an Amazon
CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event
pattern matching Trusted Advisor events and a target Lambda function. In the target Lambda function, notify the Senior
Manager.
C. Deploy an AWS Lambda function that refreshes AWS Personal Health Dashboard checks, and configure an Amazon
CloudWatch Events rule to run the Lambda function periodically. Create another CloudWatch Events rule with an event
pattern matching Personal Health Dashboard events and a target Lambda function. In the target Lambda function, notify
the Senior Manager.
D. Add an AWS Config custom rule that runs periodically, checks the AWS service limit status, and streams notifications
to an Amazon SNS topic. Deploy an AWS Lambda function that notifies the Senior Manager, and subscribe the Lambda
function to the SNS topic.
Correct Answer: B
AWS Trusted Advisor already includes service limits checks, so refreshing those checks on a schedule and matching the resulting Trusted Advisor events with a CloudWatch Events rule requires the least custom development; an AWS Config custom rule (option D) would need the limit-checking logic written from scratch.
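
For reference, a minimal sketch of the refresh Lambda described in option B, assuming boto3 and a Business or Enterprise support plan (the AWS Support API is only served from us-east-1):

```python
# Minimal sketch of option B's refresh Lambda: re-run all Trusted Advisor
# service-limit checks so fresh results flow to CloudWatch Events.
import boto3

support = boto3.client("support", region_name="us-east-1")

def handler(event, context):
    checks = support.describe_trusted_advisor_checks(language="en")["checks"]
    for check in checks:
        if check["category"] == "service_limits":
            support.refresh_trusted_advisor_check(checkId=check["id"])
```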

QUESTION 3
A company runs a three-tier web application in its production environment, which is built on a single AWS CloudFormation template made up of Amazon EC2 instances behind an ELB Application Load Balancer. The instances run in an EC2 Auto Scaling group across multiple Availability Zones. Data is stored in an Amazon RDS Multi-AZ DB instance with read replicas. Amazon Route 53 manages the application's public DNS record. A DevOps Engineer must create a workflow to mitigate a failed software deployment by rolling back changes in the production environment when a software cutover occurs for new application software.
What steps should the Engineer perform to meet these requirements with the LEAST amount of downtime?
A. Use CloudFormation to deploy an additional staging environment and configure the Route 53 DNS with weighted
records. During cutover, change the Route 53 A record weights to achieve an even traffic distribution between the two
environments. Validate the traffic in the new environment and immediately terminate the old environment if tests are
successful.
B. Use a single AWS Elastic Beanstalk environment to deploy the staging and production environments. Update the
environment by uploading the ZIP file with the new application code. Swap the Elastic Beanstalk environment CNAME.
Validate the traffic in the new environment and immediately terminate the old environment if tests are successful.
C. Use a single AWS Elastic Beanstalk environment and an AWS OpsWorks environment to deploy the staging and
production environments. Update the environment by uploading the ZIP file with the new application code into the
Elastic Beanstalk environment deployed with the OpsWorks stack. Validate the traffic in the new environment and
immediately terminate the old environment if tests are successful.
D. Use AWS CloudFormation to deploy an additional staging environment, and configure the Route 53 DNS with
weighted records. During cutover, increase the weight distribution to have more traffic directed to the new staging
environment as workloads are successfully validated. Keep the old production environment in place until the new
staging environment handles all traffic.
Correct Answer: D
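
As an illustration of option D's gradual cutover, here is a hedged boto3 sketch that shifts Route 53 weights; the hosted zone ID, record name, and load balancer DNS names are all hypothetical:

```python
# Minimal sketch: move traffic between environments by adjusting record weights.
import boto3

route53 = boto3.client("route53")

def set_weight(zone_id, name, set_identifier, target_dns, weight):
    route53.change_resource_record_sets(
        HostedZoneId=zone_id,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": name,
                "Type": "CNAME",
                "SetIdentifier": set_identifier,
                "Weight": weight,
                "TTL": 60,
                "ResourceRecords": [{"Value": target_dns}],
            },
        }]},
    )

# Direct 25% of traffic to staging while validation continues.
set_weight("Z123EXAMPLE", "app.example.com", "staging", "staging-alb.example.com", 25)
set_weight("Z123EXAMPLE", "app.example.com", "production", "prod-alb.example.com", 75)
```

Weights are relative, so 25/75 sends roughly a quarter of queries to staging; raising the staging weight step by step completes the cutover with the old environment still in place.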

QUESTION 4
What is required to achieve gigabit network throughput on EC2? You already selected cluster compute, 10 gigabit instances with enhanced networking, and your workload is already network-bound, but you are not seeing 10 gigabit speeds.
A. Enable biplex networking on your servers, so packets are non-blocking in both directions and there's no switching overhead.
B. Ensure the instances are in different VPCs so you don't saturate the Internet Gateway on any one VPC.
C. Select PIOPS for your drives and mount several, so you can provision sufficient disk throughput.
D. Use a placement group for your instances so the instances are physically near each other in the same Availability
Zone.
Correct Answer: D
You are not guaranteed 10 gigabit performance, except within a placement group. A placement group is a logical grouping of instances within a single Availability Zone. Using placement groups enables applications to participate in a low-latency, 10 Gbps network. Placement groups are recommended for applications that benefit from low network latency, high network throughput, or both. Reference:
http://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placementgroups.html
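
A hedged boto3 sketch of option D, with a placeholder AMI and an instance type chosen only for illustration:

```python
# Minimal sketch: create a cluster placement group and launch instances into it.
import boto3

ec2 = boto3.client("ec2")

ec2.create_placement_group(GroupName="hpc-cluster", Strategy="cluster")

ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical AMI
    InstanceType="c5n.18xlarge",      # hypothetical enhanced-networking type
    MinCount=2,
    MaxCount=2,
    Placement={"GroupName": "hpc-cluster"},
)
```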

QUESTION 5
A DevOps Engineer is leading the implementation for automating patching of Windows-based workstations in a hybrid
cloud environment by using AWS Systems Manager (SSM). What steps should the Engineer follow to set up Systems
Manager to automate patching in this environment? (Select TWO.)
A. Create multiple IAM service roles for Systems Manager so that the ssm.amazonaws.com service can execute the AssumeRole operation on every instance. Register the role on a per-resource level to enable the creation of a service token. Perform managed-instance activation with the newly created service role attached to each managed instance.
B. Create an IAM service role for Systems Manager so that the ssm.amazonaws.com service can execute the AssumeRole operation. Register the role to enable the creation of a service token. Perform managed-instance activation with the newly created service role.
C. Using previously obtained activation codes and activation IDs, download and install the SSM Agent on the hybrid
servers, and register the servers or virtual machines on the Systems Manager service. Hybrid instances will show with
an “mi-” prefix in the SSM console.
D. Using previously obtained activation codes and activation IDs, download and install the SSM Agent on the hybrid
servers, and register the servers or virtual machines on the Systems Manager service. Hybrid instances will show with
an “i-” prefix in the SSM console as if they were provisioned as a regular Amazon EC2 instance.
E. Run AWS Config to create a list of instances that are unpatched and not compliant. Create an instance scheduler job,
and through an AWS Lambda function, perform the instance patching to bring them up to compliance.
Correct Answer: BC
A single IAM service role that ssm.amazonaws.com can assume (B) is what managed-instance activation requires, and hybrid machines registered this way appear with an "mi-" prefix in the SSM console (C). AWS Config with an instance scheduler (E) is not part of the Systems Manager patching setup.
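
A hedged boto3 sketch of the activation step from option B; the role name is hypothetical, and the role must trust ssm.amazonaws.com:

```python
# Minimal sketch: create a hybrid activation, then install the SSM Agent on each
# on-premises machine with the returned ID and code. Registered machines show
# up with an "mi-" prefix in the console (option C).
import boto3

ssm = boto3.client("ssm")

activation = ssm.create_activation(
    Description="Hybrid workstation patching",
    IamRole="SSMServiceRole",  # hypothetical service role from option B
    RegistrationLimit=100,
)

print(activation["ActivationId"], activation["ActivationCode"])
```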


QUESTION 6
You need to replicate API calls across two systems in real time. What tool should you use as a buffer and transport
mechanism for API call events?
A. AWS SQS
B. AWS Lambda
C. AWS Kinesis
D. AWS SNS
Correct Answer: C
Amazon Kinesis is an event stream service. Streams can act as buffers and transport across systems for in-order
programmatic events, making it ideal for replicating API calls across systems. A typical Amazon Kinesis Streams
application reads data from an Amazon Kinesis stream as data records. These applications can use the Amazon
Kinesis Client Library, and they can run on Amazon EC2 instances. The processed records can be sent to dashboards,
used to generate alerts, dynamically change pricing and advertising strategies, or send data to a variety of other AWS
services. For information about Streams features and pricing, see Amazon Kinesis Streams. Reference:
http://docs.aws.amazon.com/kinesis/latest/dev/introduction.html
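
A hedged boto3 producer sketch; the stream name and event shape are hypothetical:

```python
# Minimal sketch: publish API-call events into a Kinesis stream that a second
# system consumes to replay the calls.
import json
import boto3

kinesis = boto3.client("kinesis")

def publish_api_call(call):
    kinesis.put_record(
        StreamName="api-call-events",          # hypothetical stream
        Data=json.dumps(call).encode("utf-8"),
        PartitionKey=call["request_id"],       # preserves per-key ordering
    )

publish_api_call({"request_id": "r-1", "method": "POST", "path": "/orders"})
```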

QUESTION 7
If you're trying to configure an AWS Elastic Beanstalk worker tier for easy debugging when there are problems finishing queue jobs, what should you configure?
A. Configure Rolling Deployments.
B. Configure Enhanced Health Reporting.
C. Configure Blue-Green Deployments.
D. Configure a Dead Letter Queue.
Correct Answer: D
Elastic Beanstalk worker environments support Amazon Simple Queue Service (SQS) dead letter queues. A dead letter
queue is a queue where other (source) queues can send messages that for some reason could not be successfully
processed. A primary benefit of using a dead letter queue is the ability to sideline and isolate the unsuccessfully
processed messages. You can then analyze any messages sent to the dead letter queue to try to determine why they
were not successfully processed. Reference: http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html#worker-deadletter
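
Elastic Beanstalk wires the dead letter queue up through environment options, but the underlying SQS configuration looks like this hedged boto3 sketch (queue names are hypothetical):

```python
# Minimal sketch: attach a dead letter queue to a worker queue via RedrivePolicy.
import json
import boto3

sqs = boto3.client("sqs")

dlq_url = sqs.create_queue(QueueName="worker-dlq")["QueueUrl"]
dlq_arn = sqs.get_queue_attributes(
    QueueUrl=dlq_url, AttributeNames=["QueueArn"]
)["Attributes"]["QueueArn"]

sqs.create_queue(
    QueueName="worker-queue",
    Attributes={
        "RedrivePolicy": json.dumps(
            {"deadLetterTargetArn": dlq_arn, "maxReceiveCount": "5"}
        )
    },
)
```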

QUESTION 8
A highly regulated company has a policy that DevOps Engineers should not log in to their Amazon EC2 instances
except in emergencies. If a DevOps Engineer does log in, the Security team must be notified within 15 minutes of the
occurrence.
Which solution will meet these requirements?
A. Install the Amazon Inspector agent on each EC2 instance. Subscribe to Amazon CloudWatch Events notifications.
Trigger an AWS Lambda function to check if a message is about user logins. If it is, send a notification to the Security
team using Amazon SNS.
B. Install the Amazon CloudWatch agent on each EC2 instance. Configure the agent to push all logs to Amazon
CloudWatch Logs and set up a CloudWatch metric filter that searches for user logins. If a login is found, send a
notification to the Security team using Amazon SNS.
C. Set up AWS CloudTrail with Amazon CloudWatch Logs. Subscribe CloudWatch Logs to Amazon Kinesis. Attach
AWS Lambda to Kinesis to parse and determine if a log contains a user login. If it does, send a notification to the
Security team using Amazon SNS.
D. Set up a script on each Amazon EC2 instance to push all logs to Amazon S3. Set up an S3 event to trigger an AWS
Lambda function, which triggers an Amazon Athena query to run. The Athena query checks for logins and sends the
output to the Security team using Amazon SNS.
Correct Answer: B
Pushing instance logs to CloudWatch Logs with the CloudWatch agent and alerting on a metric filter for logins is the only option that detects logins in near real time; Amazon Inspector (option A) runs scheduled assessments rather than monitoring logins.
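
A hedged boto3 sketch of the metric filter from option B; the log group name and filter pattern are assumptions about how the instances push their auth logs:

```python
# Minimal sketch: turn login lines in CloudWatch Logs into a metric that an
# alarm (with an SNS action) can watch.
import boto3

logs = boto3.client("logs")

logs.put_metric_filter(
    logGroupName="/ec2/var/log/secure",    # hypothetical log group
    filterName="user-logins",
    filterPattern='"Accepted publickey"',  # matches successful SSH logins
    metricTransformations=[{
        "metricName": "UserLoginCount",
        "metricNamespace": "Security",
        "metricValue": "1",
    }],
)
```

A CloudWatch alarm on Security/UserLoginCount with an SNS topic as its action then notifies the Security team well inside the 15-minute window.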

QUESTION 9
When thinking of AWS Elastic Beanstalk, which statement is true?
A. Worker tiers pull jobs from SNS.
B. Worker tiers pull jobs from HTTP.
C. Worker tiers pull jobs from JSON.
D. Worker tiers pull jobs from SQS.
Correct Answer: D
Elastic Beanstalk installs a daemon on each Amazon EC2 instance in the Auto Scaling group to process Amazon SQS
messages in the worker environment. The daemon pulls data off the Amazon SQS queue, inserts it into the message
body of an HTTP POST request, and sends it to a user-configurable URL path on the local host. The content type for
the message body within an HTTP POST request is application/json by default.
Reference:
http://docs.aws.amazon.com/elasticbeanstalk/latest/dg/using-features-managing-env-tiers.html
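
A hedged, standard-library-only sketch of what the worker application sees; the daemon posts to localhost port 80 by default (binding it requires elevated privileges), and the path and job shape here are hypothetical:

```python
# Minimal sketch: receive the daemon's HTTP POST containing an SQS message body.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class WorkerHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers["Content-Length"]))
        job = json.loads(body)   # application/json by default
        print("processing job:", job)
        self.send_response(200)  # 200 tells the daemon the message succeeded
        self.end_headers()

if __name__ == "__main__":
    HTTPServer(("127.0.0.1", 80), WorkerHandler).serve_forever()
```

If the handler returns a non-200 status, the daemon leaves the message on the queue for retry, which is what eventually lands it in the dead letter queue from QUESTION 7.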

QUESTION 10
You need your API backed by DynamoDB to stay online during a total regional AWS failure. You can tolerate a couple of minutes of lag or slowness during a large failure event, but the system should recover to normal operation after those few minutes.
What is a good approach?
A. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region.
Create an Auto Scaling Group behind an ELB in each of the two regions DynamoDB is running in. Add a Route53
Latency DNS Record with DNS Failover, using the ELBs in the two regions as the resource records.
B. Set up a DynamoDB Multi-Region table. Create an Auto Scaling Group behind an ELB in each of the two regions
DynamoDB is running in. Add a Route53 Latency DNS Record with DNS Failover, using the ELBs in the two regions as
the resource records.
C. Set up a DynamoDB Multi-Region table. Create a cross-region ELB pointing to a cross-region Auto Scaling Group,
and direct a Route53 Latency DNS Record with DNS Failover to the cross-region ELB.
D. Set up DynamoDB cross-region replication in a master-standby configuration, with a single standby in another region.
Create a cross-region ELB pointing to a cross-region Auto Scaling Group, and direct a Route53 Latency DNS Record
with DNS Failover to the cross-region ELB.
Correct Answer: A
There is no such thing as a cross-region ELB, a cross-region Auto Scaling Group, or a DynamoDB Multi-Region Table. The only option that makes sense is the cross-region replication version with two ELBs and ASGs with Route 53 Failover and Latency DNS.
Reference:
http://docs.aws.amazon.com/amazondynamodb/latest/developerguide/Streams.CrossRegionRepl.html
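
A hedged boto3 sketch of the Route 53 half of option A; the zone ID, ELB alias values, and health check IDs are hypothetical:

```python
# Minimal sketch: latency-based alias records with health checks, so an
# unhealthy region drops out of DNS answers (failover).
import boto3

route53 = boto3.client("route53")

def latency_record(region, elb_dns, elb_zone_id, health_check_id):
    return {
        "Action": "UPSERT",
        "ResourceRecordSet": {
            "Name": "api.example.com",
            "Type": "A",
            "SetIdentifier": region,
            "Region": region,
            "HealthCheckId": health_check_id,
            "AliasTarget": {
                "HostedZoneId": elb_zone_id,   # the ELB's hosted zone, not yours
                "DNSName": elb_dns,
                "EvaluateTargetHealth": True,
            },
        },
    }

route53.change_resource_record_sets(
    HostedZoneId="Z123EXAMPLE",
    ChangeBatch={"Changes": [
        latency_record("us-east-1", "primary-elb.example.com", "ZELB1EXAMPLE", "hc-east"),
        latency_record("us-west-2", "standby-elb.example.com", "ZELB2EXAMPLE", "hc-west"),
    ]},
)
```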

QUESTION 11
A DevOps Engineer discovered a sudden spike in a website's page load times and found that a recent deployment occurred. A brief diff of the related commit shows that the URL for an external API call was altered and the connecting port changed from 80 to 443. The external API has been verified and works outside the application. The application logs show that the connection is now timing out, resulting in multiple retries and eventual failure of the call.
Which debug steps should the Engineer take to determine the root cause of the issue?
A. Check the VPC Flow Logs looking for denies originating from Amazon EC2 instances that are part of the web Auto
Scaling group. Check the ingress security group rules and routing rules for the VPC.
B. Check the existing egress security group rules and network ACLs for the VPC. Also check the application logs being
written to Amazon CloudWatch Logs for debug information.
C. Check the egress security group rules and network ACLs for the VPC. Also check the VPC flow logs looking for
accepts originating from the web Auto Scaling group.
D. Check the application logs being written to Amazon CloudWatch Logs for debug information. Check the ingress
security group rules and routing rules for the VPC.
Correct Answer: C
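
As a quick way to perform the egress review from option C, here is a hedged boto3 sketch; the security group ID is hypothetical:

```python
# Minimal sketch: list egress rules that allow port 443.
import boto3

ec2 = boto3.client("ec2")

sg = ec2.describe_security_groups(
    GroupIds=["sg-0123456789abcdef0"]
)["SecurityGroups"][0]

for rule in sg["IpPermissionsEgress"]:
    # Rules with IpProtocol "-1" allow all ports and carry no FromPort/ToPort.
    if "FromPort" not in rule or rule["FromPort"] <= 443 <= rule["ToPort"]:
        print("egress allows 443:", rule)
```

If nothing prints, the group is blocking the application's new HTTPS calls, which matches the observed timeouts.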

QUESTION 12
A business has an application that consists of five independent AWS Lambda functions.
The DevOps Engineer has built a CI/CD pipeline using AWS CodePipeline and AWS CodeBuild that builds, tests,
packages, and deploys each Lambda function in sequence. The pipeline uses an Amazon CloudWatch Events rule to
ensure
the pipeline execution starts as quickly as possible after a change is made to the application source code.
After working with the pipeline for a few months the DevOps Engineer has noticed the pipeline takes too long to
complete.
What should the DevOps Engineer implement to BEST improve the speed of the pipeline?
A. Modify the CodeBuild projects within the pipeline to use a compute type with more available network throughput.
B. Create a custom CodeBuild execution environment that includes a symmetric multiprocessing configuration to run the
builds in parallel.
C. Modify the CodePipeline configuration to execute actions for each Lambda function in parallel by specifying the same
runOrder.
D. Modify each CodeBuild project to run within a VPC and use dedicated instances to increase throughput.
Correct Answer: C
Giving the five Lambda deployment actions the same runOrder within a stage makes CodePipeline execute them in parallel instead of sequentially, which directly shortens the pipeline; running CodeBuild in a VPC on dedicated instances (option D) does not speed anything up.
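
To illustrate option C, here is a hedged fragment of a pipeline declaration (the kind of structure passed to codepipeline.update_pipeline); project and function names are hypothetical:

```python
# Minimal sketch: five deploy actions sharing runOrder 1 execute in parallel.
deploy_stage = {
    "name": "Deploy",
    "actions": [
        {
            "name": f"deploy-{fn}",
            "runOrder": 1,  # identical runOrder => parallel execution
            "actionTypeId": {
                "category": "Build",
                "owner": "AWS",
                "provider": "CodeBuild",
                "version": "1",
            },
            "configuration": {"ProjectName": f"deploy-{fn}"},
            "inputArtifacts": [{"name": "SourceOutput"}],
        }
        for fn in ["auth", "orders", "billing", "search", "reports"]
    ],
}

print([a["name"] for a in deploy_stage["actions"]])
```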

QUESTION 13
A company has a website in an AWS Elastic Beanstalk load balancing and automatic scaling environment. This environment has an Amazon RDS MySQL instance configured as its database resource. After a sudden increase in traffic, the website started dropping traffic. An administrator discovered that the application on some instances is not responding as the result of out-of-memory errors. The Classic Load Balancer marked those instances as out of service, and the health status of Elastic Beanstalk enhanced health reporting is degraded. However, Elastic Beanstalk did not replace those instances. Because of the diminished capacity behind the Classic Load Balancer, the application response times are slower for the customers.
Which action will permanently fix this issue?
A. Clone the Elastic Beanstalk environment. When the new environment is up, swap CNAME and terminate the earlier
environment.
B. Temporarily change the maximum number of instances in the Auto Scaling group to allow the group to support more
traffic.
C. Change the setting for the Auto Scaling group health check from Amazon EC2 to Elastic Load Balancing, and
increase the capacity of the group.
D. Write a cron script for restarting the web server process when memory is full, and deploy it with AWS Systems Manager.
Correct Answer: C
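
Elastic Beanstalk normally applies this through its environment configuration, but the change from option C boils down to this hedged boto3 sketch against the underlying Auto Scaling group (the group name is hypothetical):

```python
# Minimal sketch: replace instances the load balancer marks unhealthy by
# switching the group's health check from EC2 to ELB.
import boto3

autoscaling = boto3.client("autoscaling")

autoscaling.update_auto_scaling_group(
    AutoScalingGroupName="my-beanstalk-asg",
    HealthCheckType="ELB",
    HealthCheckGracePeriod=300,  # seconds before new instances are checked
)
```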

Welcome to download the valid Pass4itsure DOP-C01 pdf

Free download: Google Drive
Amazon AWS DOP-C01 pdf https://drive.google.com/file/d/1QiYZ9hneGiEH0l0kRuuz5CKOhSL7VN8F/view?usp=sharing

Summary:

New Amazon DOP-C01 exam questions from Pass4itsure DOP-C01 dumps! Welcome to download the newest Pass4itsure DOP-C01 dumps https://www.pass4itsure.com/aws-devops-engineer-professional.html (362 Q&As), with the latest verified DOP-C01 practice test questions and relevant answers.

Amazon AWS DOP-C01 dumps pdf free share https://drive.google.com/file/d/1QiYZ9hneGiEH0l0kRuuz5CKOhSL7VN8F/view?usp=sharing