Real and Effective Amazon DBS-C01 Dumps Questions by Pass4itSure

The AWS Certified Database – Specialty certification is very popular. To earn it, you must pass the DBS-C01 exam, and real Amazon AWS DBS-C01 dumps can help you do that.

Pass4itSure has launched the latest version of AWS DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html (Updated: Feb 01, 2022)

If there are other Amazon certification exams you want to pass, Pass4itSure covers those too: https://www.pass4itsure.com/amazon.html

In addition, this article shares AWS DBS-C01 exam practice questions Q1-Q12 from the Pass4itSure dumps.

Start testing your abilities now >>>

Latest AWS DBS-C01 exam questions and answers – Pass4itSure DBS-C01 dumps

AWS Certified Database – Specialty (DBS-C01) exam questions online test

Q 1

A company uses a single-node Amazon RDS for MySQL DB instance for its production database. The DB instance runs in an AWS Region in the United States.

A week before a big sales event, a new maintenance update is available for the DB instance. The maintenance update is marked as required. The company wants to minimize downtime for the DB instance and asks a database specialist to make the DB instance highly available until the sales event ends.

Which solution will meet these requirements?

A. Defer the maintenance update until the sales event is over.
B. Create a read replica with the latest update. Initiate a failover before the sales event.
C. Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.
D. Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

Correct Answer: D

Reference: https://aws.amazon.com/rds/features/multi-az/
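
For illustration, here is a minimal boto3 sketch of the conversion in answer D; the instance identifier is hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Convert the single-node instance to a Multi-AZ deployment first.
rds.modify_db_instance(
    DBInstanceIdentifier="prod-mysql",  # hypothetical identifier
    MultiAZ=True,
    ApplyImmediately=True,
)

# Once the standby exists, the required maintenance can be applied;
# RDS patches the standby first and then fails over, minimizing downtime.
```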

Q 2

An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load. The RDS instance exhibits multi-second read and write latency and uses all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.

What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?

A. Increase the size of the DB instance storage
B. Change the underlying EBS storage type to General Purpose SSD (gp2)
C. Disable EBS optimization on the DB instance
D. Change the DB instance to an instance class with a higher maximum bandwidth

Correct Answer: D

Q 3

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered there is a period of time every day around 3:00 PM where the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.

Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?

A. Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.
B. Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
C. Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
D. Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Correct Answer: D
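
As a sketch of answer D with boto3, Performance Insights can be enabled per instance; the identifier below is hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Enable Performance Insights on the existing Aurora PostgreSQL instance.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-pg-instance",  # hypothetical identifier
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,  # days; 7 is the free tier
)
```

After the next slow period, the Performance Insights dashboard can be filtered by top SQL to find the problematic query.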

Q 4

A company is going through a security audit. The audit team has identified cleartext master user passwords in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances. The audit team has flagged this as a security risk to the database team.

What should a database specialist do to mitigate this risk?

A. Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.
B. Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.
C. Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.
D. Remove the passwords from the CloudFormation template and store them in a separate file. Replace the passwords by running CloudFormation using the sed command.

Correct Answer: B
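
A sketch of the Secrets Manager pattern from answer B, shown as a CloudFormation template fragment expressed as a Python dict; all resource names and property values are hypothetical:

```python
# CloudFormation template fragment (JSON form) as a Python dict.
template = {
    "Resources": {
        "DBSecret": {
            "Type": "AWS::SecretsManager::Secret",
            "Properties": {
                "GenerateSecretString": {
                    "SecretStringTemplate": '{"username": "admin"}',
                    "GenerateStringKey": "password",
                    "PasswordLength": 16,
                    "ExcludeCharacters": "\"@/\\",
                }
            },
        },
        "MyDB": {
            "Type": "AWS::RDS::DBInstance",
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "20",
                "MasterUsername": "admin",
                # Dynamic reference: resolved at deploy time,
                # so no cleartext password lives in the template.
                "MasterUserPassword": {
                    "Fn::Sub": "{{resolve:secretsmanager:${DBSecret}:SecretString:password}}"
                },
            },
        },
    }
}
```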

Q 5

A bank plans to use an Amazon RDS for MySQL DB instance. The database should support read-intensive traffic with very few repeated queries. Which solution meets these requirements?

A. Create an Amazon ElastiCache cluster. Use a write-through strategy to populate the cache.
B. Create an Amazon ElastiCache cluster. Use a lazy loading strategy to populate the cache.
C. Change the DB instance to Multi-AZ with a standby instance in another AWS Region.
D. Create a read replica of the DB instance. Use the read replica to distribute the read traffic.

Correct Answer: D
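
A minimal boto3 sketch of answer D; both identifiers are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica and point read-intensive traffic at its endpoint.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="bank-mysql-replica",      # hypothetical
    SourceDBInstanceIdentifier="bank-mysql",        # hypothetical
)
```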

Q 6

A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.

How can the Database Specialist meet these requirements?

A. Use AWS IAM database authentication and restrict access to the tables using an IAM policy.
B. Configure the rules in a network ACL to restrict outbound traffic from the Aurora DB cluster.
C. Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.
D. Define access privileges to the tables containing sensitive data in the pg_hba.conf file.

Correct Answer: C

Reference: https://aws.amazon.com/blogs/database/managing-postgresql-users-and-roles/
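
A minimal sketch of answer C using psycopg2 from Python; the connection details, table name, and role name are all hypothetical:

```python
import psycopg2

# Apply table-level privileges with standard PostgreSQL GRANT/REVOKE.
conn = psycopg2.connect(
    host="aurora-pg.cluster-example.us-east-1.rds.amazonaws.com",  # hypothetical
    dbname="appdb", user="admin", password="***",
)
with conn, conn.cursor() as cur:
    cur.execute("REVOKE ALL ON sensitive_table FROM PUBLIC;")
    cur.execute("GRANT SELECT ON sensitive_table TO reporting_role;")
conn.close()
```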

Q 7

An eCommerce company is migrating its core application database to Amazon Aurora MySQL. The company is currently performing online transaction processing (OLTP) stress testing with concurrent database sessions. During the first round of tests, a database specialist noticed slow performance for some specific write operations.

Reviewing Amazon CloudWatch metrics for the Aurora DB cluster showed 90% CPU utilization.
Which steps should the database specialist take to MOST effectively identify the root cause of high CPU utilization and slow performance? (Choose two.)

A. Enable Enhanced Monitoring at less than 30 seconds of granularity to review the operating system metrics before the next round of tests.
B. Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.
C. Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.
D. Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.
E. Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.

Correct Answer: AC

Q 8

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group.

A Database Specialist has identified that setting the log_connections parameter to 1 will enable connections logging.

Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

A. Update the log_connections parameter in the default parameter group
B. Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance
C. Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days
D. Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days
E. Connect to the RDS PostgreSQL host and update the log_connections parameter in the PostgreSQL.conf file

Correct Answer: BC

Reference: https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/
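
As a sketch of answers B and C together with boto3; group, instance, and log-group names are hypothetical:

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# 1. Custom parameter group with log_connections enabled.
rds.create_db_parameter_group(
    DBParameterGroupName="pg-logging",          # hypothetical
    DBParameterGroupFamily="postgres13",
    Description="Enable connection logging",
)
rds.modify_db_parameter_group(
    DBParameterGroupName="pg-logging",
    Parameters=[{
        "ParameterName": "log_connections",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)
rds.modify_db_instance(
    DBInstanceIdentifier="pg-instance",         # hypothetical
    DBParameterGroupName="pg-logging",
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
)

# 2. Retain the exported CloudWatch Logs log group for 180 days.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/pg-instance/postgresql",
    retentionInDays=180,
)
```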

Q 9

A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries are executed. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.

Which actions can a database specialist perform to resolve this issue? (Choose two.)

A. Restart the application tool used to execute queries.
B. Change to a database instance class with higher throughput.
C. Convert from Single-AZ to Multi-AZ.
D. Increase the I/O parameter in Amazon RDS Enhanced Monitoring.
E. Convert from General Purpose to Provisioned IOPS (PIOPS).

Correct Answer: BE
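
Answers B and E can be combined in one modification call; this boto3 sketch uses hypothetical identifiers and sizing:

```python
import boto3

rds = boto3.client("rds")

# Move to a class with more I/O headroom and convert storage to
# Provisioned IOPS (values are hypothetical).
rds.modify_db_instance(
    DBInstanceIdentifier="dev-mysql",   # hypothetical
    DBInstanceClass="db.m5.xlarge",
    StorageType="io1",
    Iops=3000,
    ApplyImmediately=True,
)
```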

Q 10

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.

What can the Database Specialist do to reduce the overall cost?

A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.
C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.
D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

Correct Answer: C
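
A minimal sketch of answer C with boto3; the table and attribute names are hypothetical, and the application would write the attribute as insert time plus 2 days in epoch seconds:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL so DynamoDB deletes expired items at no extra cost.
dynamodb.update_time_to_live(
    TableName="transactions",  # hypothetical
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expires_at"},
)
```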

Q 11

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with milliseconds precision, a value, and a fault attribute if the sensor is malfunctioning.

Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.

Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

A. Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

B. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.

C. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

D. Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.

Correct Answer: C
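
A sketch of the schema in answer C with boto3; table, attribute, and index names are hypothetical. Because only malfunctioning sensors carry the fault attribute, the GSI is sparse and a query on it returns only faulty sensors for a plant:

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="SensorReadings",  # hypothetical
    AttributeDefinitions=[
        {"AttributeName": "plantSensorId", "AttributeType": "S"},   # "plantId#sensorId"
        {"AttributeName": "measurementTime", "AttributeType": "N"},
        {"AttributeName": "plantId", "AttributeType": "S"},
        {"AttributeName": "fault", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "plantSensorId", "KeyType": "HASH"},
        {"AttributeName": "measurementTime", "KeyType": "RANGE"},
    ],
    GlobalSecondaryIndexes=[{
        "IndexName": "FaultByPlant",
        "KeySchema": [
            {"AttributeName": "plantId", "KeyType": "HASH"},
            {"AttributeName": "fault", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "KEYS_ONLY"},
    }],
    BillingMode="PAY_PER_REQUEST",
)
```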

Q 12

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.

Which combination of actions should a database specialist take to meet these requirements? (Choose two.)

A. Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.

B. Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.

C. Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.

D. Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.

E. Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.

Correct Answer: BD
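
As a sketch of the snapshot-and-restore path in answer D: specifying a KMS key when restoring an unencrypted cluster snapshot produces an encrypted cluster. Identifiers and the key ARN below are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Restore the existing snapshot into a new, encrypted Aurora cluster.
rds.restore_db_cluster_from_snapshot(
    DBClusterIdentifier="aurora-encrypted",          # hypothetical
    SnapshotIdentifier="aurora-unencrypted-snap",    # hypothetical
    Engine="aurora-mysql",
    KmsKeyId="arn:aws:kms:us-east-1:111122223333:key/abcd-1234",  # hypothetical
)
# Afterwards, update the application endpoints to the new cluster.
```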

Get Pass4itSure 2022 Amazon DBS-C01 dumps pdf from Google Drive:

free Amazon DBS-C01 dumps pdf 2022 https://drive.google.com/file/d/1x9QqoUAMlj21qVKMRcZOCBJqnHBRGLte/view?usp=sharing

I’ll share the questions here, with this emphasis: the AWS DBS-C01 dumps are important for passing the exam, but of course they also require your hard work.

Get the full Pass4itSure AWS DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html (both PDF and VCE modes) to get started.

[2021.8] Pdf, Practice Exam Free, Amazon DBS-C01 Practice Questions Free Share

Are you preparing for the Amazon DBS-C01 exam? Then this is the right place: we provide you with free Amazon DBS-C01 practice questions, free DBS-C01 exam sample questions, and a DBS-C01 PDF download. Pass the Amazon DBS-C01 exam with practice tests and exam dumps from Pass4itSure! Pass4itSure DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html (Q&As: 157).

Amazon DBS-C01 pdf free download

DBS-C01 pdf free https://drive.google.com/file/d/12xHfa1QHo5goUnYglyrQXBMs_X3TnW4Y/view?usp=sharing

Latest Amazon DBS-C01 practice exam questions

QUESTION 1

A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will spike rapidly.

How should a Database Specialist ensure DynamoDB can handle the increased traffic?

A. Ensure the table is always provisioned to meet peak needs
B. Allow burst capacity to handle the additional load
C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic
D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event

Correct Answer: D
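
A minimal boto3 sketch of answer D; the table name and capacity figures are hypothetical:

```python
import boto3

dynamodb = boto3.client("dynamodb")

# Raise provisioned throughput ahead of the known peak...
dynamodb.update_table(
    TableName="orders",  # hypothetical
    ProvisionedThroughput={"ReadCapacityUnits": 10000, "WriteCapacityUnits": 10000},
)

# ...and lower it again after the 3-day event to control cost.
dynamodb.update_table(
    TableName="orders",
    ProvisionedThroughput={"ReadCapacityUnits": 1000, "WriteCapacityUnits": 1000},
)
```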

QUESTION 2

A company released a mobile game that quickly grew to 10 million daily active users in North America. The game's backend is hosted on AWS and makes extensive use of an Amazon DynamoDB table that is configured with a TTL attribute.

When an item is added or updated, its TTL is set to the current epoch time plus 600 seconds. The game logic relies on old data being purged so that it can calculate rewards points accurately. Occasionally, items are read from the table that are several hours past their TTL expiry.

How should a database specialist fix this issue?

A. Use a client library that supports the TTL functionality for DynamoDB.
B. Include a query filter expression to ignore items with an expired TTL.
C. Set the ConsistentRead parameter to true when querying the table.
D. Create a local secondary index on the TTL attribute.

Correct Answer: B
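
A sketch of answer B with boto3; the table name, key, and TTL attribute name are hypothetical. TTL deletion can lag expiry, so the filter excludes items that have expired but have not yet been removed:

```python
import time

import boto3
from boto3.dynamodb.conditions import Attr, Key

table = boto3.resource("dynamodb").Table("game-state")  # hypothetical

resp = table.query(
    KeyConditionExpression=Key("playerId").eq("p123"),   # hypothetical key
    # Keep only items whose TTL is still in the future.
    FilterExpression=Attr("ttl").gt(int(time.time())),
)
items = resp["Items"]
```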

QUESTION 3

A company wants to migrate its on-premises MySQL databases to Amazon RDS for MySQL. To comply with the company's security policy, all databases must be encrypted at rest. RDS DB instance snapshots must also be shared across various accounts to provision testing and staging environments.

Which solution meets these requirements?

A. Create an RDS for MySQL DB instance with an AWS Key Management Service (AWS KMS) customer managed CMK. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.
B. Create an RDS for MySQL DB instance with an AWS managed CMK. Create a new key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.
C. Create an RDS for MySQL DB instance with an AWS owned CMK. Create a new key policy to include the administrator user name of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.
D. Create an RDS for MySQL DB instance with an AWS CloudHSM key. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.

Correct Answer: A

Reference: https://docs.aws.amazon.com/kms/latest/developerguide/grants.html
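
A sketch of the kind of key-policy statement answer A describes, expressed as a Python dict; the account ID is hypothetical:

```python
# Key-policy statement for the customer managed CMK allowing another
# account to use the key (e.g., to copy/restore the shared snapshot).
statement = {
    "Sid": "AllowOtherAccountUseOfKey",
    "Effect": "Allow",
    "Principal": {"AWS": "arn:aws:iam::444455556666:root"},  # hypothetical account
    "Action": [
        "kms:CreateGrant",
        "kms:DescribeKey",
        "kms:Decrypt",
    ],
    "Resource": "*",
}
```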

QUESTION 4

A company has an ecommerce web application with an Amazon RDS for MySQL DB instance. The marketing team has noticed some unexpected updates to the product and pricing information on the website, which is impacting sales targets. The marketing team wants a database specialist to audit future database activity to help identify how and when the changes are being made.

What should the database specialist do to meet these requirements? (Choose two.)

A. Create an RDS event subscription to the audit event type.
B. Enable auditing of CONNECT and QUERY_DML events.
C. SSH to the DB instance and review the database logs.
D. Publish the database logs to Amazon CloudWatch Logs.
E. Enable Enhanced Monitoring on the DB instance.

Correct Answer: BD

QUESTION 5

A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three hours later, a new alert is generated due to a lack of free space on the same DB instance. The database specialist decides to modify the instance immediately to increase its storage capacity by 20 GB.

What will happen when the modification is submitted?

A. The request will fail because this storage capacity is too large.
B. The request will succeed only if the primary instance is in active status.
C. The request will succeed only if CPU utilization is less than 10%.
D. The request will fail as the most recent modification was too soon.

Correct Answer: D

QUESTION 6

A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a primary concern for the company, and a solution that does not require significant rework is needed.

Which solution meets these requirements?

A. Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.
B. Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger DB cluster.
C. Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.
D. Change the DB clusters to the burstable instance family.

Correct Answer: A
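
A sketch of creating an Aurora Serverless (v1) cluster with boto3, under the assumption that the engine and version support serverless mode; all identifiers and scaling values are hypothetical:

```python
import boto3

rds = boto3.client("rds")

# Serverless cluster that scales with sporadic demand and pauses when idle.
rds.create_db_cluster(
    DBClusterIdentifier="dev-reporting",   # hypothetical
    Engine="aurora-mysql",
    EngineMode="serverless",
    MasterUsername="admin",
    MasterUserPassword="change-me",        # better: a Secrets Manager reference
    ScalingConfiguration={
        "MinCapacity": 1,
        "MaxCapacity": 16,
        "AutoPause": True,
        "SecondsUntilAutoPause": 300,      # pause after 5 idle minutes
    },
)
```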

QUESTION 7

A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and the data have been migrated successfully. The on-premises database server was also being used to run database maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to complete. These maintenance jobs need to be set up for Aurora PostgreSQL.

How can the Database Specialist schedule these jobs so the setup requires minimal maintenance and provides high availability?

A. Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.
B. Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.
C. Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.
D. Create the maintenance job using the Amazon CloudWatch job scheduling plugin.

Correct Answer: C
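
A minimal boto3 sketch of answer C; the rule name, schedule, and function ARN are hypothetical:

```python
import boto3

events = boto3.client("events")

# Nightly CloudWatch Events (EventBridge) rule invoking the maintenance Lambda.
events.put_rule(
    Name="nightly-db-maintenance",            # hypothetical
    ScheduleExpression="cron(0 2 * * ? *)",   # 02:00 UTC daily
)
events.put_targets(
    Rule="nightly-db-maintenance",
    Targets=[{
        "Id": "maintenance-fn",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:db-maintenance",  # hypothetical
    }],
)
# Note: the Lambda function must also grant events.amazonaws.com permission
# to invoke it (lambda add-permission), omitted here for brevity.
```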

QUESTION 8

A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal operational overhead and development effort.

Which solution meets these requirements in the MOST efficient way?

A. Use Amazon RDS for MySQL as the database and use Amazon ElastiCache
B. Use Amazon DynamoDB as the database and use DynamoDB Accelerator
C. Use Amazon Aurora MySQL as the database and use Aurora's buffer cache
D. Use Amazon DynamoDB as the database and use Amazon API Gateway

Correct Answer: B

Reference: https://aws.amazon.com/solutions/case-studies/lyft/

QUESTION 9

A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company requires fast responses for end-user queries when looking at data from the current year, and users must have access to the full 15-year dataset, when needed. This solution also needs to handle a fluctuating number of incoming queries. Storage costs for the 100 TB of data must be kept low.

Which solution meets these requirements?

A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the data on local Amazon Redshift storage. Provision enough instances to support high demand.
B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough instances to support high demand.
C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon Redshift Concurrency Scaling.
D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon Redshift elastic resize.

Correct Answer: C

QUESTION 10

An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by the dashboard should be available within 100 milliseconds of an update. The Database Specialist needs to review the current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability and performance of the DB cluster.

Which solution meets these requirements?

A. Turn on the serverless option in the DB cluster so it can automatically scale based on demand.
B. Provision a clone of the existing DB cluster for the new Application team.
C. Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing replication using AWS DMS change data capture (CDC).
D. Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.

Correct Answer: D
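
A sketch of answer D using Application Auto Scaling via boto3; the cluster name and targets are hypothetical:

```python
import boto3

aas = boto3.client("application-autoscaling")

# Scale Aurora Replicas between 1 and 4 based on average reader CPU.
aas.register_scalable_target(
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora",            # hypothetical
    ScalableDimension="rds:cluster:ReadReplicaCount",
    MinCapacity=1,
    MaxCapacity=4,
)
aas.put_scaling_policy(
    PolicyName="reader-cpu-target",
    ServiceNamespace="rds",
    ResourceId="cluster:reporting-aurora",
    ScalableDimension="rds:cluster:ReadReplicaCount",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 60.0,  # hypothetical CPU target
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "RDSReaderAverageCPUUtilization"
        },
    },
)
```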

QUESTION 11

A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to determine what caused the CPU spike.

Which combination of steps should be taken to provide more visibility into the processes and queries running during an increase in CPU load? (Choose two.)

A. Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.
B. Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.
C. Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.
D. Use Amazon QuickSight to view the SQL statement being run.
E. Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements, hosts, or users.

Correct Answer: BE

QUESTION 12

A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in minimal changes to the test data. The Development team wants each environment refreshed nightly so each test database contains fresh production data every day.

Which migration approach will be the fastest and most cost-effective to implement?

A. Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.
B. Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST using Aurora Serverless.
C. Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be deleted and re-created nightly.
D. Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the clones to be deleted and re-created nightly.

Correct Answer: A
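
Aurora cloning is exposed through the point-in-time restore API with a copy-on-write restore type, so clones are fast to create and only store changed pages. A boto3 sketch for one nightly clone, with hypothetical identifiers:

```python
import boto3

rds = boto3.client("rds")

# Drop yesterday's clone (its DB instances must be deleted first; omitted
# for brevity) and re-create it from the latest production data.
rds.delete_db_cluster(DBClusterIdentifier="test-clone-01", SkipFinalSnapshot=True)
rds.restore_db_cluster_to_point_in_time(
    SourceDBClusterIdentifier="prod-aurora",   # hypothetical
    DBClusterIdentifier="test-clone-01",       # hypothetical
    RestoreType="copy-on-write",               # this is what makes it a clone
    UseLatestRestorableTime=True,
)
```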

QUESTION 13

A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster.

Which configurations will result in the LEAST application downtime during a failover? (Choose three.)

A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB cluster is unreachable.
C. Edit and enable Aurora DB cluster cache management in parameter groups.
D. Set TCP keepalive parameters to a high value.
E. Set JDBC connection string timeout variables to a low value.
F. Set Java DNS caching timeouts to a high value.

Correct Answer: ACE

Pass4itsure Amazon exam dumps coupon code 2021

DBS-C01 pdf free share https://drive.google.com/file/d/12xHfa1QHo5goUnYglyrQXBMs_X3TnW4Y/view?usp=sharing

AWS Certified Specialty

Valid Amazon ANS-C00 Practice Questions Free Share
[2021.5] ANS-C00 Questions https://www.examdemosimulation.com/valid-amazon-aws-ans-c00-practice-questions-free-share-from-pass4itsure-2/

Valid Amazon DBS-C01 Practice Questions Free Share
[2021.5] DBS-C01 Questions https://www.examdemosimulation.com/valid-amazon-aws-dbs-c01-practice-questions-free-share-from-pass4itsure/

P.S.

Pass4itSure provides updated Amazon DBS-C01 dumps as the practice test and pdf https://www.pass4itsure.com/aws-certified-database-specialty.html (Updated: Jul 30, 2021). Pass4itSure DBS-C01 dumps help you prepare for the Amazon DBS-C01 exam quickly!

[2021.5] Valid Amazon AWS DBS-C01 Practice Questions Free Share From Pass4itsure

The Amazon AWS DBS-C01 exam is difficult, but with the Pass4itsure DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html as preparation material, it can be passed easily. With the DBS-C01 practice tests, you can practice on the same kind of questions as the actual exam. If you master the tricks you gain through practice, it will be easier to achieve your target score.

Amazon AWS DBS-C01 pdf free https://drive.google.com/file/d/16YqKaTSxNTW4PhDIrMlTPcFuL76zLbgg/view?usp=sharing

Latest Amazon DBS-C01 dumps Practice test video tutorial

Latest Amazon AWS DBS-C01 practice exam questions at here:

QUESTION 1

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered there is a period of time every day around 3:00 PM where the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.

Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?

A. Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.
B. Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
C. Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
D. Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Correct Answer: D


QUESTION 2

A company uses the Amazon DynamoDB table contractDB in us-east-1 for its contract system with the following schema:

1. orderID (primary key)
2. timestamp (sort key)
3. contract (map)
4. createdBy (string)
5. customerEmail (string)

After a problem in production, the operations team has asked a database specialist to provide an IAM policy to read items from the database to debug the application. In addition, the developer is not allowed to access the value of the customerEmail field to stay compliant.

Which IAM policy should the database specialist use to achieve these requirements?

DBS-C01 exam questions-q2
DBS-C01 exam questions-q2-2
DBS-C01 exam questions-q2-3
DBS-C01 exam questions-q2-4

A. Option A
B. Option B
C. Option C
D. Option D

Correct Answer: A
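
The option screenshots are not reproduced here, but the pattern the correct option follows is DynamoDB fine-grained access control. A sketch of such a policy as a Python dict, with a hypothetical account ID; the attribute allow-list simply omits customerEmail:

```python
policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:Query"],
        "Resource": "arn:aws:dynamodb:us-east-1:111122223333:table/contractDB",
        "Condition": {
            # Only these attributes may be read; customerEmail is excluded.
            "ForAllValues:StringEquals": {
                "dynamodb:Attributes": ["orderID", "timestamp", "contract", "createdBy"]
            },
            # Force requests to ask for specific attributes, not ALL_ATTRIBUTES.
            "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
        },
    }],
}
```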

QUESTION 3

A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and an S3 VPC endpoint, and 80% of the company's network bandwidth is available.

How should the company perform this data load?

A. Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
B. Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
C. Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
D. Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.

Correct Answer: C
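
The Neptune bulk loader is invoked over HTTP against the cluster's loader endpoint. A minimal sketch with the requests library; the endpoint, bucket, and IAM role ARN are hypothetical:

```python
import requests

# Start a Neptune bulk load from S3 via the loader endpoint.
resp = requests.post(
    "https://my-neptune.cluster-example.us-east-1.neptune.amazonaws.com:8182/loader",
    json={
        "source": "s3://fraud-data-bucket/export/",                      # hypothetical
        "format": "csv",
        "iamRoleArn": "arn:aws:iam::111122223333:role/NeptuneLoadFromS3",  # hypothetical
        "region": "us-east-1",
        "failOnError": "FALSE",
    },
)
print(resp.json())  # returns a loadId that can be polled for status
```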


QUESTION 4

A company is building a new web platform where user requests trigger an AWS Lambda function that performs an insert into an Amazon Aurora MySQL DB cluster. Initial tests with less than 10 users on the new platform yielded successful execution and fast response times. However, upon more extensive tests with the actual target of 3,000 concurrent users, Lambda functions are unable to connect to the DB cluster and receive too many connections errors.

Which of the following will resolve this issue?

A. Edit the my.cnf file for the DB cluster to increase max_connections
B. Increase the instance size of the DB cluster
C. Change the DB cluster to Multi-AZ
D. Increase the number of Aurora Replicas

Correct Answer: B

QUESTION 5

A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database Specialist. Other members of the Development team can connect, but this user is consistently receiving an error indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number of times, but the error persists.

Which step should be taken to troubleshoot this issue?

A. Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine's IP address
B. Ensure that the RDS DB instance's subnet group includes a public subnet to allow the Developer to connect
C. Ensure that the RDS DB instance has not reached its maximum connections limit
D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for encrypted connections

Correct Answer: D


QUESTION 6

A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected Framework review, a Database Specialist was given new security requirements.

1. Only certain on-premises corporate network IPs should connect to the DB instance.
2. Connectivity is allowed from the corporate network only.

Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose three.)

A. Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
B. Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.
C. Move the DB instance to a private subnet using AWS DMS.
D. Enable VPC peering between the application host running on the corporate network and the VPC associated with the DB instance.
E. Disable the publicly accessible setting.
F. Connect to the DB instance using private IPs and a VPN.

Correct Answer: BEF

QUESTION 7

A company is using Amazon Aurora PostgreSQL for the backend of its application. The system users are complaining that the responses are slow. A database specialist has determined that the queries to Aurora take longer during peak times. With the Amazon RDS Performance Insights dashboard, the load in the chart for average active sessions is often above the line that denotes maximum CPU usage, and the wait state shows that most wait events are IO:XactSync.

What should the company do to resolve these performance issues?

A. Add an Aurora Replica to scale the read traffic.
B. Scale up the DB instance class.
C. Modify applications to commit transactions in batches.
D. Modify applications to avoid conflicts by taking locks.

Correct Answer: C


QUESTION 8

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with milliseconds precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.

Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

A. Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
B. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.
C. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
D. Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.

Correct Answer: C

QUESTION 9

A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB cluster. The company's Database Specialist discovered that the Oracle database is storing 100 GB of large binary objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data with the largest replication instances.

How should the Database Specialist optimize the database migration using AWS DMS?

A. Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together
B. Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without LOBs
C. Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2 without LOBs
D. Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs together

Correct Answer: C


QUESTION 10

An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of accidental instance data loss in the future.

Which settings will meet this requirement? (Choose three.)

A. Set DeletionProtection to True
B. Set MultiAZ to True
C. Set TerminationProtection to True
D. Set DeleteAutomatedBackups to False
E. Set DeletionPolicy to Delete
F. Set DeletionPolicy to Retain

Correct Answer: ADF

Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attributedeletionpolicy.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-accidental-updates/
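
A sketch of how the three safeguards could appear together, as a CloudFormation resource fragment expressed as a Python dict; the resource name and property values are hypothetical:

```python
resource = {
    "MyDB": {
        "Type": "AWS::RDS::DBInstance",
        "DeletionPolicy": "Retain",  # keep the instance if the stack is deleted
        "Properties": {
            "Engine": "mysql",
            "DBInstanceClass": "db.t3.medium",
            "AllocatedStorage": "20",
            "DeletionProtection": True,       # block direct deletion of the instance
            "DeleteAutomatedBackups": False,  # keep automated backups after deletion
        },
    }
}
```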

QUESTION 11

A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system patches are applied during the Amazon RDS-specified maintenance window.

What is the MOST cost-effective action that should be taken to avoid downtime?

A. Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB
B. Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down
C. Enable a read replica and direct read traffic to it when Amazon RDS is down
D. Enable an Amazon RDS for MySQL Multi-AZ configuration

Correct Answer: D


QUESTION 12

A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL database as the backend. A custom stored procedure is used to send email notifications to another system when data is inserted into a table. The company has noticed that the performance of the CRM system has decreased due to database reporting applications used by various teams. The company requires an AWS solution that would reduce maintenance, improve performance, and accommodate the email notification feature.

Which AWS solution meets these requirements?

A. Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications. Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the other system.
B. Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email address to the topic.
C. Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications. Configure Amazon SES integration to send email notifications to the other system.
D. Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to the topic.

Correct Answer: D


QUESTION 13

A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. This application is very popular and the company expects a tenfold increase in the user base in the next few months. The application experiences more traffic during the morning and evening hours.

This application has two parts:

1. An in-house booking component that accepts online bookings that directly correspond to simultaneous requests from users.
2. A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.

A database specialist needs to design a cost-effective database solution to handle this workload.

Which solution meets these requirements?

A. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to the RDS for MySQL DB instance used by the CRM.
B. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.
C. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes and push the booking data to an Amazon Redshift database used by the CRM.
D. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.

Correct Answer: B

Welcome to download the valid Pass4itsure DBS-C01 pdf

Free download from Google Drive:
Amazon AWS DBS-C01 pdf https://drive.google.com/file/d/16YqKaTSxNTW4PhDIrMlTPcFuL76zLbgg/view?usp=sharing

Pass4itsure latest Amazon exam dumps coupon code free share

Summary:

New Amazon DBS-C01 exam questions from the Pass4itsure DBS-C01 dumps! Welcome to download the newest Pass4itsure DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html (145 Q&As), with the latest verified DBS-C01 practice test questions and relevant answers.

Amazon AWS DBS-C01 dumps pdf free share https://drive.google.com/file/d/16YqKaTSxNTW4PhDIrMlTPcFuL76zLbgg/view?usp=sharing