[Split-New] Real And Effective Amazon DBS-C01 Dumps Questions By Pass4itSure

The AWS Certified Database – Specialty certification is a very popular Amazon certification. You earn it by passing the DBS-C01 exam, and real Amazon AWS DBS-C01 dumps can help you do that.

Pass4itSure has launched the latest version of AWS DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html (Updated: Feb 01, 2022)

If there are other Amazon certification exams you want to pass, you are welcome to visit https://www.pass4itsure.com/amazon.html

In addition, the site shares some AWS DBS-C01 exam practice questions (Q1-Q12) from the Pass4itSure dumps.

Start testing your abilities now >>>

Latest AWS DBS-C01 exam questions and answers – Pass4itSure DBS-C01 dumps

AWS Certified Database – Specialty (DBS-C01) exam questions online test

Q 1

A company uses a single-node Amazon RDS for MySQL DB instance for its production database. The DB instance runs in an AWS Region in the United States.

A week before a big sales event, a new maintenance update is available for the DB instance. The maintenance update is marked as required. The company wants to minimize downtime for the DB instance and asks a database specialist to make the DB instance highly available until the sales event ends.

Which solution will meet these requirements?

A. Defer the maintenance update until the sales event is over.
B. Create a read replica with the latest update. Initiate a failover before the sales event.
C. Create a read replica with the latest update. Transfer all read-only traffic to the read replica during the sales event.
D. Convert the DB instance into a Multi-AZ deployment. Apply the maintenance update.

Correct Answer: D

Reference: https://aws.amazon.com/rds/features/multi-az/
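
For context, converting an existing Single-AZ instance to Multi-AZ (option D) is a single ModifyDBInstance call. A minimal boto3 sketch, assuming an instance identifier of production-mysql:

```python
import boto3

rds = boto3.client("rds", region_name="us-east-1")

# Convert the Single-AZ instance to a Multi-AZ deployment before applying the
# required maintenance; the standby absorbs most of the downtime during the update.
rds.modify_db_instance(
    DBInstanceIdentifier="production-mysql",  # assumed identifier
    MultiAZ=True,
    ApplyImmediately=True,
)
```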

Q 2

An Amazon RDS EBS-optimized instance with Provisioned IOPS (PIOPS) storage is using less than half of its allocated IOPS over the course of several hours under constant load.

The RDS instance exhibits multi-second read and write latency and is consuming all of its maximum bandwidth for read throughput, yet the instance uses less than half of its CPU and RAM resources.

What should a Database Specialist do in this situation to increase performance and return latency to sub-second levels?

A. Increase the size of the DB instance storage
B. Change the underlying EBS storage type to General Purpose SSD (gp2)
C. Disable EBS optimization on the DB instance
D. Change the DB instance to an instance class with a higher maximum bandwidth

Correct Answer: B

Q 3

A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the company discovered there is a period of time every day around 3:00 PM where the response time of the application is noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.

Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?

A. Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space consumption. Watch these dashboards during the next slow period.
B. Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run reports based on the output error logs.
C. Modify the logging database parameter to log all the queries related to locking in the database and then check the logs after the next slow period for this information.
D. Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that are related to spikes in the graph during the next slow period.

Correct Answer: D
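
For reference, Performance Insights (option D) is switched on per DB instance. A minimal boto3 sketch, assuming an instance identifier of aurora-pg-instance-1:

```python
import boto3

rds = boto3.client("rds")

# Enable Performance Insights so top SQL statements and wait events can be
# reviewed during the next 3:00 PM slow period.
rds.modify_db_instance(
    DBInstanceIdentifier="aurora-pg-instance-1",  # assumed identifier
    EnablePerformanceInsights=True,
    PerformanceInsightsRetentionPeriod=7,         # days; 7 is the free retention tier
)
```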

Q 4

A company is going through a security audit. The audit team has identified cleartext master user passwords in the AWS CloudFormation templates for Amazon RDS for MySQL DB instances. The audit team has flagged this as a security risk to the database team.

What should a database specialist do to mitigate this risk?

A. Change all the databases to use AWS IAM for authentication and remove all the cleartext passwords in CloudFormation templates.
B. Use an AWS Secrets Manager resource to generate a random password and reference the secret in the CloudFormation template.
C. Remove the passwords from the CloudFormation templates so Amazon RDS prompts for the password when the database is being created.
D. Remove the passwords from the CloudFormation template and store them in a separate file. Replace the passwords by running CloudFormation using the sed command.

Correct Answer: C

Q 5

A bank plans to use an Amazon RDS for MySQL DB instance. The database should support read-intensive traffic with very few repeated queries. Which solution meets these requirements?

A. Create an Amazon ElastiCache cluster. Use a write-through strategy to populate the cache.
B. Create an Amazon ElastiCache cluster. Use a lazy loading strategy to populate the cache.
C. Change the DB instance to Multi-AZ with a standby instance in another AWS Region.
D. Create a read replica of the DB instance. Use the read replica to distribute the read traffic.

Correct Answer: D
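
For reference, creating the read replica in option D is one API call; the application then sends read-only traffic to the replica endpoint. A minimal boto3 sketch with assumed identifiers:

```python
import boto3

rds = boto3.client("rds")

# Create a read replica of the primary MySQL instance to absorb read-intensive traffic.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="bank-mysql-replica-1",      # assumed replica name
    SourceDBInstanceIdentifier="bank-mysql-primary",  # assumed primary name
    DBInstanceClass="db.r5.large",
)
```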

Q 6

A company is going to use an Amazon Aurora PostgreSQL DB cluster for an application backend. The DB cluster contains some tables with sensitive data. A Database Specialist needs to control the access privileges at the table level.

How can the Database Specialist meet these requirements?

A. Use AWS IAM database authentication and restrict access to the tables using an IAM policy.
B. Configure rules in a network ACL to restrict outbound traffic from the Aurora DB cluster.
C. Execute GRANT and REVOKE commands that restrict access to the tables containing sensitive data.
D. Define access privileges to the tables containing sensitive data in the pg_hba.conf file.

Correct Answer: C

Reference: https://aws.amazon.com/blogs/database/managing-postgresql-users-and-roles/
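
For reference, option C relies on standard PostgreSQL privileges, which work the same way on Aurora PostgreSQL. A minimal sketch using psycopg2, with placeholder endpoint, credentials, table, and role names:

```python
import psycopg2

# Connect as a privileged user (placeholders throughout).
conn = psycopg2.connect(
    host="app-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="appdb",
    user="postgres",
    password="********",
)
conn.autocommit = True
with conn.cursor() as cur:
    # Remove broad access to the sensitive table, then grant read-only access
    # to a specific application role.
    cur.execute("REVOKE ALL ON TABLE sensitive_customer_data FROM PUBLIC;")
    cur.execute("GRANT SELECT ON TABLE sensitive_customer_data TO reporting_role;")
conn.close()
```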

Q 7

An eCommerce company is migrating its core application database to Amazon Aurora MySQL. The company is currently performing online transaction processing (OLTP) stress testing with concurrent database sessions. During the first round of tests, a database specialist noticed slow performance for some specific write operations.

Reviewing Amazon CloudWatch metrics for the Aurora DB cluster showed 90% CPU utilization.
Which steps should the database specialist take to MOST effectively identify the root cause of high CPU utilization and slow performance? (Choose two.)

A. Enable Enhanced Monitoring with a granularity of less than 30 seconds to review the operating system metrics before the next round of tests.
B. Review the VolumeBytesUsed metric in CloudWatch to see if there is a spike in write I/O.
C. Review Amazon RDS Performance Insights to identify the top SQL statements and wait events.
D. Review Amazon RDS API calls in AWS CloudTrail to identify long-running queries.
E. Enable Advanced Auditing to log QUERY events in Amazon CloudWatch before the next round of tests.

Correct Answer: BC
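
For context on option C, once Performance Insights is enabled its API can list the top SQL statements by database load for the test window. A sketch using the boto3 pi client; the DbiResourceId and time window are placeholders:

```python
from datetime import datetime, timedelta

import boto3

pi = boto3.client("pi")
end = datetime.utcnow()
start = end - timedelta(hours=1)

# Group database load (average active sessions) by SQL statement to surface
# the statements driving the 90% CPU utilization.
resp = pi.describe_dimension_keys(
    ServiceType="RDS",
    Identifier="db-ABCDEFGHIJKLMNOPQRSTUVWXYZ",  # placeholder DbiResourceId
    StartTime=start,
    EndTime=end,
    Metric="db.load.avg",
    GroupBy={"Group": "db.sql"},
)
for key in resp.get("Keys", []):
    print(key["Total"], key["Dimensions"].get("db.sql.statement"))
```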

Q 8

A company is using Amazon RDS for PostgreSQL. The Security team wants all database connection requests to be logged and retained for 180 days. The RDS for PostgreSQL DB instance is currently using the default parameter group.

A Database Specialist has identified that setting the log_connections parameter to 1 will enable connection logging.

Which combination of steps should the Database Specialist take to meet the logging and retention requirements? (Choose two.)

A. Update the log_connections parameter in the default parameter group
B. Create a custom parameter group, update the log_connections parameter, and associate the parameter with the DB instance
C. Enable publishing of database engine logs to Amazon CloudWatch Logs and set the event expiration to 180 days
D. Enable publishing of database engine logs to an Amazon S3 bucket and set the lifecycle policy to 180 days
E. Connect to the RDS PostgreSQL host and update the log_connections parameter in the postgresql.conf file

Correct Answer: AE

Reference: https://aws.amazon.com/blogs/database/working-with-rds-and-aurora-postgresql-logs-part-1/
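
For reference, whichever options you choose, this is roughly how the pieces look through the API: setting log_connections on a (custom) DB parameter group, publishing PostgreSQL logs to CloudWatch Logs, and retaining them for 180 days. Names and identifiers are assumptions:

```python
import boto3

rds = boto3.client("rds")
logs = boto3.client("logs")

# Set log_connections = 1 on a DB parameter group (default groups cannot be
# modified through this call, so this assumes a custom group named "pg13-logging").
rds.modify_db_parameter_group(
    DBParameterGroupName="pg13-logging",
    Parameters=[{
        "ParameterName": "log_connections",
        "ParameterValue": "1",
        "ApplyMethod": "immediate",
    }],
)

# Publish the PostgreSQL log to CloudWatch Logs ...
rds.modify_db_instance(
    DBInstanceIdentifier="app-postgres",  # assumed identifier
    CloudwatchLogsExportConfiguration={"EnableLogTypes": ["postgresql"]},
)

# ... and keep it for 180 days.
logs.put_retention_policy(
    logGroupName="/aws/rds/instance/app-postgres/postgresql",
    retentionInDays=180,
)
```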

Q 9

A company is using a Single-AZ Amazon RDS for MySQL DB instance for development. The DB instance is experiencing slow performance when queries are executed. Amazon CloudWatch metrics indicate that the instance requires more I/O capacity.

Which actions can a database specialist perform to resolve this issue? (Choose two.)

A. Restart the application tool used to execute queries.
B. Change to a database instance class with higher throughput.
C. Convert from Single-AZ to Multi-AZ.
D. Increase the I/O parameter in Amazon RDS Enhanced Monitoring.
E. Convert from General Purpose to Provisioned IOPS (PIOPS).

Correct Answer: BD

Q 10

A company is concerned about the cost of a large-scale, transactional application using Amazon DynamoDB that only needs to store data for 2 days before it is deleted. In looking at the tables, a Database Specialist notices that much of the data is months old, and goes back to when the application was first deployed.

What can the Database Specialist do to reduce the overall cost?

A. Create a new attribute in each table to track the expiration time and create an AWS Glue transformation to delete entries more than 2 days old.
B. Create a new attribute in each table to track the expiration time and enable DynamoDB Streams on each table.
C. Create a new attribute in each table to track the expiration time and enable time to live (TTL) on each table.
D. Create an Amazon CloudWatch Events event to export the data to Amazon S3 daily using AWS Data Pipeline and then truncate the Amazon DynamoDB table.

Correct Answer: A
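
Option C refers to DynamoDB's native time to live (TTL) feature. For orientation, enabling TTL and writing the expiry attribute looks roughly like this; the table name and attribute name are assumptions:

```python
import time

import boto3

dynamodb = boto3.client("dynamodb")

# Enable TTL on the table, keyed off an "expireAt" attribute.
dynamodb.update_time_to_live(
    TableName="transactions",  # assumed table name
    TimeToLiveSpecification={"Enabled": True, "AttributeName": "expireAt"},
)

# New writes include the expiry as an epoch timestamp two days in the future;
# DynamoDB deletes expired items automatically at no extra cost.
dynamodb.put_item(
    TableName="transactions",
    Item={
        "transactionId": {"S": "txn-0001"},
        "payload": {"S": "..."},
        "expireAt": {"N": str(int(time.time()) + 2 * 24 * 3600)},
    },
)
```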

Q 11

An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor data includes time with milliseconds precision, a value, and a fault attribute if the sensor is malfunctioning.

Power plants are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given power plant.

Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time when looking for faulty sensors?

A. Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

B. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a local secondary index (LSI) on the fault attribute.

C. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.

D. Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index (LSI) on the fault attribute.

Correct Answer: B
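
For reference, the schema in answer B maps to a table definition like the following. Table, index, and attribute names are illustrative; the partition key value would be built as "<plantId>#<sensorId>":

```python
import boto3

dynamodb = boto3.client("dynamodb")

dynamodb.create_table(
    TableName="PlantSensorData",  # assumed name
    AttributeDefinitions=[
        {"AttributeName": "PlantSensorId", "AttributeType": "S"},    # "<plantId>#<sensorId>"
        {"AttributeName": "MeasurementTime", "AttributeType": "S"},  # ISO-8601 with milliseconds
        {"AttributeName": "Fault", "AttributeType": "S"},
    ],
    KeySchema=[
        {"AttributeName": "PlantSensorId", "KeyType": "HASH"},
        {"AttributeName": "MeasurementTime", "KeyType": "RANGE"},
    ],
    # The LSI shares the table's partition key and re-sorts items by the fault
    # attribute, so the faulty items for a given key are a single Query call.
    LocalSecondaryIndexes=[{
        "IndexName": "FaultIndex",
        "KeySchema": [
            {"AttributeName": "PlantSensorId", "KeyType": "HASH"},
            {"AttributeName": "Fault", "KeyType": "RANGE"},
        ],
        "Projection": {"ProjectionType": "ALL"},
    }],
    BillingMode="PAY_PER_REQUEST",
)
```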

Q 12

A company uses Amazon Aurora for secure financial transactions. The data must always be encrypted at rest and in transit to meet compliance requirements.

Which combination of actions should a database specialist take to meet these requirements? (Choose two.)

A. Create an Aurora Replica with encryption enabled using AWS Key Management Service (AWS KMS). Then promote the replica to master.

B. Use SSL/TLS to secure the in-transit connection between the financial application and the Aurora DB cluster.

C. Modify the existing Aurora DB cluster and enable encryption using an AWS Key Management Service (AWS KMS) encryption key. Apply the changes immediately.

D. Take a snapshot of the Aurora DB cluster and encrypt the snapshot using an AWS Key Management Service (AWS KMS) encryption key. Restore the snapshot to a new DB cluster and update the financial application database endpoints.

E. Use AWS Key Management Service (AWS KMS) to secure the in-transit connection between the financial application and the Aurora DB cluster.

Correct Answer: BC
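
For option B, encrypting the in-transit path means connecting over SSL/TLS with the Amazon RDS/Aurora CA bundle. A minimal sketch assuming Aurora PostgreSQL; the endpoint, credentials, and certificate path are placeholders:

```python
import psycopg2

# verify-full checks both the certificate chain and the host name, so traffic
# to the cluster endpoint is encrypted and the server identity is verified.
conn = psycopg2.connect(
    host="finance-cluster.cluster-xxxxxxxx.us-east-1.rds.amazonaws.com",
    dbname="finance",
    user="app_user",
    password="********",
    sslmode="verify-full",
    sslrootcert="/opt/certs/global-bundle.pem",  # downloaded RDS CA bundle
)
```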

Get Pass4itSure 2022 Amazon DBS-C01 dumps pdf from Google Drive:

free Amazon DBS-C01 dumps pdf 2022 https://drive.google.com/file/d/1x9QqoUAMlj21qVKMRcZOCBJqnHBRGLte/view?usp=sharing

Well, I’ll share it here. The emphasis is that the AWS DBS-C01 dumps are important for passing the exam; of course, your own hard work is still required.

Get the full Pass4itSure AWS DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html (both PDF and VCE modes) to get started.

[2021.8] Pdf, Practice Exam Free, Amazon DBS-C01 Practice Questions Free Share

Are you preparing for the Amazon DBS-C01 exam? Well, this is the right place: we provide you with free Amazon DBS-C01 practice questions, free DBS-C01 exam sample questions, and a DBS-C01 PDF download. Pass the Amazon DBS-C01 exam with practice tests and exam dumps from Pass4itSure! Pass4itSure DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html (Q&As: 157).

Amazon DBS-C01 pdf free download

DBS-C01 pdf free https://drive.google.com/file/d/12xHfa1QHo5goUnYglyrQXBMs_X3TnW4Y/view?usp=sharing

Latest Amazon DBS-C01 practice exam questions

QUESTION 1
A large ecommerce company uses Amazon DynamoDB to handle the transactions on its web portal. Traffic patterns
throughout the year are usually stable; however, a large event is planned. The company knows that traffic will increase
by up to 10 times the normal load over the 3-day event. When sale prices are published during the event, traffic will
spike rapidly.
How should a Database Specialist ensure DynamoDB can handle the increased traffic?
A. Ensure the table is always provisioned to meet peak needs
B. Allow burst capacity to handle the additional load
C. Set an AWS Application Auto Scaling policy for the table to handle the increase in traffic
D. Preprovision additional capacity for the known peaks and then reduce the capacity after the event
Correct Answer: B

QUESTION 2
A company released a mobile game that quickly grew to 10 million daily active users in North America. The game's
backend is hosted on AWS and makes extensive use of an Amazon DynamoDB table that is configured with a TTL
attribute.
When an item is added or updated, its TTL is set to the current epoch time plus 600 seconds. The game logic relies on
old data being purged so that it can calculate rewards points accurately. Occasionally, items are read from the table that
are several hours past their TTL expiry.
How should a database specialist fix this issue?
A. Use a client library that supports the TTL functionality for DynamoDB.
B. Include a query filter expression to ignore items with an expired TTL.
C. Set the ConsistentRead parameter to true when querying the table.
D. Create a local secondary index on the TTL attribute.
Correct Answer: A

QUESTION 3
A company wants to migrate its on-premises MySQL databases to Amazon RDS for MySQL. To comply with the
company\\’s security policy, all databases must be encrypted at rest. RDS DB instance snapshots must also be shared
across various accounts to provision testing and staging environments.
Which solution meets these requirements?
A. Create an RDS for MySQL DB instance with an AWS Key Management Service (AWS KMS) customer managed
CMK. Update the key policy to include the Amazon Resource Name (ARN) of the other AWS accounts as a principal,
and then allow the kms:CreateGrant action.
B. Create an RDS for MySQL DB instance with an AWS managed CMK. Create a new key policy to include the Amazon
Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.
C. Create an RDS for MySQL DB instance with an AWS owned CMK. Create a new key policy to include the
administrator user name of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.
D. Create an RDS for MySQL DB instance with an AWS CloudHSM key. Update the key policy to include the Amazon
Resource Name (ARN) of the other AWS accounts as a principal, and then allow the kms:CreateGrant action.
Correct Answer: A
Reference: https://docs.aws.amazon.com/kms/latest/developerguide/grants.html
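
For context on answer A, once the key policy grants the other accounts use of the customer managed CMK (for example kms:DescribeKey, kms:Decrypt, and kms:CreateGrant), the encrypted manual snapshot is shared with one API call. Identifiers below are placeholders:

```python
import boto3

rds = boto3.client("rds")

# Share the encrypted manual snapshot with the testing/staging account. The
# target account can then copy or restore it using the shared CMK.
rds.modify_db_snapshot_attribute(
    DBSnapshotIdentifier="mysql-prod-snapshot-2022-02-01",  # assumed snapshot name
    AttributeName="restore",
    ValuesToAdd=["210987654321"],  # assumed target AWS account ID
)
```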

QUESTION 4
A company has an ecommerce web application with an Amazon RDS for MySQL DB instance. The marketing team has
noticed some unexpected updates to the product and pricing information on the website, which is impacting sales
targets. The marketing team wants a database specialist to audit future database activity to help identify how and when
the changes are being made.
What should the database specialist do to meet these requirements? (Choose two.)
A. Create an RDS event subscription to the audit event type.
B. Enable auditing of CONNECT and QUERY_DML events.
C. SSH to the DB instance and review the database logs.
D. Publish the database logs to Amazon CloudWatch Logs.
E. Enable Enhanced Monitoring on the DB instance.
Correct Answer: AD

QUESTION 5
A database specialist was alerted that a production Amazon RDS MariaDB instance with 100 GB of storage was out of
space. In response, the database specialist modified the DB instance and added 50 GB of storage capacity. Three
hours later, a new alert is generated due to a lack of free space on the same DB instance. The database specialist
decides to modify the instance immediately to increase its storage capacity by 20 GB.
What will happen when the modification is submitted?
A. The request will fail because this storage capacity is too large.
B. The request will succeed only if the primary instance is in active status.
C. The request will succeed only if CPU utilization is less than 10%.
D. The request will fail as the most recent modification was too soon.
Correct Answer: B

QUESTION 6
A software development company is using Amazon Aurora MySQL DB clusters for several use cases, including
development and reporting. These use cases place unpredictable and varying demands on the Aurora DB clusters, and
can cause momentary spikes in latency. System users run ad-hoc queries sporadically throughout the week. Cost is a
primary concern for the company, and a solution that does not require significant rework is needed.
Which solution meets these requirements?
A. Create new Aurora Serverless DB clusters for development and reporting, then migrate to these new DB clusters.
B. Upgrade one of the DB clusters to a larger size, and consolidate development and reporting activities on this larger
DB cluster.
C. Use existing DB clusters and stop/start the databases on a routine basis using scheduling tools.
D. Change the DB clusters to the burstable instance family.
Correct Answer: D

QUESTION 7
A Database Specialist has migrated an on-premises Oracle database to Amazon Aurora PostgreSQL. The schema and
the data have been migrated successfully. The on-premises database server was also being used to run database
maintenance cron jobs written in Python to perform tasks including data purging and generating data exports. The logs
for these jobs show that, most of the time, the jobs completed within 5 minutes, but a few jobs took up to 10 minutes to
complete. These maintenance jobs need to be set up for Aurora PostgreSQL. How can the Database Specialist
schedule these jobs so the setup requires minimal maintenance and provides high availability?
A. Create cron jobs on an Amazon EC2 instance to run the maintenance jobs following the required schedule.
B. Connect to the Aurora host and create cron jobs to run the maintenance jobs following the required schedule.
C. Create AWS Lambda functions to run the maintenance jobs and schedule them with Amazon CloudWatch Events.
D. Create the maintenance job using the Amazon CloudWatch job scheduling plugin.
Correct Answer: D
Reference: https://docs.aws.amazon.com/systems-manager/latest/userguide/mw-cli-task-options.html

QUESTION 8
A Database Specialist is designing a new database infrastructure for a ride hailing application. The application data
includes a ride tracking system that stores GPS coordinates for all rides. Real-time statistics and metadata lookups must
be performed with high throughput and microsecond latency. The database should be fault tolerant with minimal
operational overhead and development effort. Which solution meets these requirements in the MOST efficient way?
A. Use Amazon RDS for MySQL as the database and use Amazon ElastiCache
B. Use Amazon DynamoDB as the database and use DynamoDB Accelerator
C. Use Amazon Aurora MySQL as the database and use Aurora's buffer cache
D. Use Amazon DynamoDB as the database and use Amazon API Gateway
Correct Answer: D
Reference: https://aws.amazon.com/solutions/case-studies/lyft/

QUESTION 9
A company needs a data warehouse solution that keeps data in a consistent, highly structured format. The company
requires fast responses for end-user queries when looking at data from the current year, and users must have access to
the full 15-year dataset when needed. This solution also needs to handle a fluctuating number of incoming queries.
Storage costs for the 100 TB of data must be kept low.
Which solution meets these requirements?
A. Leverage an Amazon Redshift data warehouse solution using a dense storage instance type while keeping all the
data on local Amazon Redshift storage. Provision enough instances to support high demand.
B. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent
data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Provision enough
instances to support high demand.
C. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent
data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Enable Amazon
Redshift Concurrency Scaling.
D. Leverage an Amazon Redshift data warehouse solution using a dense storage instance to store the most recent
data. Keep historical data on Amazon S3 and access it using the Amazon Redshift Spectrum layer. Leverage Amazon
Redshift elastic resize.
Correct Answer: C

QUESTION 10
An ecommerce company has tasked a Database Specialist with creating a reporting dashboard that visualizes critical
business metrics that will be pulled from the core production database running on Amazon Aurora. Data that is read by
the dashboard should be available within 100 milliseconds of an update. The Database Specialist needs to review the
current configuration of the Aurora DB cluster and develop a cost-effective solution. The solution needs to accommodate the unpredictable read workload from the reporting dashboard without any impact on the write availability
and performance of the DB cluster. Which solution meets these requirements?
A. Turn on the serverless option in the DB cluster so it can automatically scale based on demand.
B. Provision a clone of the existing DB cluster for the new Application team.
C. Create a separate DB cluster for the new workload, refresh from the source DB cluster, and set up ongoing
replication using AWS DMS change data capture (CDC).
D. Add an automatic scaling policy to the DB cluster to add Aurora Replicas to the cluster based on CPU consumption.
Correct Answer: A

QUESTION 11
A company has a database monitoring solution that uses Amazon CloudWatch for its Amazon RDS for SQL Server
environment. The cause of a recent spike in CPU utilization was not determined using the standard metrics that were
collected. The CPU spike caused the application to perform poorly, impacting users. A Database Specialist needs to
determine what caused the CPU spike. Which combination of steps should be taken to provide more visibility into the
processes and queries running during an increase in CPU load? (Choose two.)
A. Enable Amazon CloudWatch Events and view the incoming T-SQL statements causing the CPU to spike.
B. Enable Enhanced Monitoring metrics to view CPU utilization at the RDS SQL Server DB instance level.
C. Implement a caching layer to help with repeated queries on the RDS SQL Server DB instance.
D. Use Amazon QuickSight to view the SQL statement being run.
E. Enable Amazon RDS Performance Insights to view the database load and filter the load by waits, SQL statements,
hosts, or users.
Correct Answer: BE

QUESTION 12
A company has migrated a single MySQL database to Amazon Aurora. The production data is hosted in a DB cluster in
VPC_PROD, and 12 testing environments are hosted in VPC_TEST using the same AWS account. Testing results in
minimal changes to the test data. The Development team wants each environment refreshed nightly so each test
database contains fresh production data every day.
Which migration approach will be the fastest and most cost-effective to implement?
A. Run the master in Amazon Aurora MySQL. Create 12 clones in VPC_TEST, and script the clones to be deleted and
re-created nightly.
B. Run the master in Amazon Aurora MySQL. Take a nightly snapshot, and restore it into 12 databases in VPC_TEST
using Aurora Serverless.
C. Run the master in Amazon Aurora MySQL. Create 12 Aurora Replicas in VPC_TEST, and script the replicas to be
deleted and re-created nightly.
D. Run the master in Amazon Aurora MySQL using Aurora Serverless. Create 12 clones in VPC_TEST, and script the
clones to be deleted and re-created nightly.
Correct Answer: A
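
For context on answer A, Aurora clones are created through the copy-on-write restore API, so each nightly refresh is a delete plus a restore. A minimal boto3 sketch with assumed identifiers; a DB instance still has to be created inside the restored cluster, and the VPC_TEST subnet group and security groups would be passed as additional parameters:

```python
import boto3

rds = boto3.client("rds")

# Create a copy-on-write clone of the production cluster. Only changed pages
# consume additional storage, which keeps the 12 nightly copies cheap.
rds.restore_db_cluster_to_point_in_time(
    DBClusterIdentifier="test-env-01",              # assumed clone name
    SourceDBClusterIdentifier="prod-aurora-mysql",  # assumed source cluster
    RestoreType="copy-on-write",
    UseLatestRestorableTime=True,
)
```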

QUESTION 13
A manufacturing company's website uses an Amazon Aurora PostgreSQL DB cluster.
Which configurations will result in the LEAST application downtime during a failover? (Choose three.)
A. Use the provided read and write Aurora endpoints to establish a connection to the Aurora DB cluster.
B. Create an Amazon CloudWatch alert triggering a restore in another Availability Zone when the primary Aurora DB
cluster is unreachable.
C. Edit and enable Aurora DB cluster cache management in parameter groups.
D. Set TCP keepalive parameters to a high value.
E. Set JDBC connection string timeout variables to a low value.
F. Set Java DNS caching timeouts to a high value.
Correct Answer: ABC

Pass4itsure Amazon exam dumps coupon code 2021


DBS-C01 pdf free share https://drive.google.com/file/d/12xHfa1QHo5goUnYglyrQXBMs_X3TnW4Y/view?usp=sharing

AWS Certified Specialty

Valid Amazon ANS-C00 Practice Questions Free Share
[2021.5] ANS-C00 Questions https://www.examdemosimulation.com/valid-amazon-aws-ans-c00-practice-questions-free-share-from-pass4itsure-2/

Valid Amazon DBS-C01 Practice Questions Free Share
[2021.5] DBS-C01 Questions https://www.examdemosimulation.com/valid-amazon-aws-dbs-c01-practice-questions-free-share-from-pass4itsure/

P.S.

Pass4itSure provides updated Amazon DBS-C01 dumps as the practice test and pdf https://www.pass4itsure.com/aws-certified-database-specialty.html (Updated: Jul 30, 2021). Pass4itSure DBS-C01 dumps help you prepare for the Amazon DBS-C01 exam quickly!

[2021.5] Valid Amazon AWS DBS-C01 Practice Questions Free Share From Pass4itsure

Amazon AWS DBS-C01 is difficult. But with the Pass4itsure DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html preparation material, candidates can pass it easily. In the DBS-C01 practice tests, you practice on the same kind of questions as the actual exam. If you master the tricks you gained through practice, it will be easier to achieve your target score.

Amazon AWS DBS-C01 pdf free https://drive.google.com/file/d/16YqKaTSxNTW4PhDIrMlTPcFuL76zLbgg/view?usp=sharing

Latest Amazon DBS-C01 dumps Practice test video tutorial

Latest Amazon AWS DBS-C01 practice exam questions here:

QUESTION 1
A company just migrated to Amazon Aurora PostgreSQL from an on-premises Oracle database. After the migration, the
company discovered there is a period of time every day around 3:00 PM where the response time of the application is
noticeably slower. The company has narrowed down the cause of this issue to the database and not the application.
Which set of steps should the Database Specialist take to most efficiently find the problematic PostgreSQL query?
A. Create an Amazon CloudWatch dashboard to show the number of connections, CPU usage, and disk space
consumption. Watch these dashboards during the next slow period.
B. Launch an Amazon EC2 instance, and install and configure an open-source PostgreSQL monitoring tool that will run
reports based on the output error logs.
C. Modify the logging database parameter to log all the queries related to locking in the database and then check the
logs after the next slow period for this information.
D. Enable Amazon RDS Performance Insights on the PostgreSQL database. Use the metrics to identify any queries that
are related to spikes in the graph during the next slow period.
Correct Answer: D


QUESTION 2
A company uses the Amazon DynamoDB table contractDB in us-east-1 for its contract system with the following
schema:
1. orderID (primary key)
2. timestamp (sort key)
3. contract (map)
4. createdBy (string)
5. customerEmail (string)
After a problem in production, the operations team has asked a database specialist to provide an IAM policy to read
items from the database to debug the application. In addition, the developer is not allowed to access the value of the
customerEmail field to stay compliant.
Which IAM policy should the database specialist use to achieve these requirements?

[The four policy options (A-D) are shown as images in the original post: DBS-C01 exam questions-q2 through q2-4.]

A. Option A
B. Option B
C. Option C
D. Option D
Correct Answer: A
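
Since the policy options are only available as images in the original post, here is the general shape the question is testing: read-only table access with the customerEmail attribute excluded via DynamoDB fine-grained access control. Account ID, Region, and the exact statement wording are illustrative:

```python
import json

read_without_email_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["dynamodb:GetItem", "dynamodb:BatchGetItem",
                   "dynamodb:Query", "dynamodb:Scan"],
        "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/contractDB",
        "Condition": {
            # Only these attributes may be requested or returned; customerEmail
            # is deliberately absent from the list.
            "ForAllValues:StringEquals": {
                "dynamodb:Attributes": ["orderID", "timestamp", "contract", "createdBy"]
            },
            "StringEqualsIfExists": {"dynamodb:Select": "SPECIFIC_ATTRIBUTES"},
        },
    }],
}

print(json.dumps(read_without_email_policy, indent=2))
```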

QUESTION 3
A company is moving its fraud detection application from on premises to the AWS Cloud and is using Amazon Neptune
for data storage. The company has set up a 1 Gbps AWS Direct Connect connection to migrate 25 TB of fraud detection
data from the on-premises data center to a Neptune DB instance. The company already has an Amazon S3 bucket and
an S3 VPC endpoint, and 80% of the company's network bandwidth is available.
How should the company perform this data load?
A. Use an AWS SDK with a multipart upload to transfer the data from on premises to the S3 bucket. Use the Copy
command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
B. Use AWS Database Migration Service (AWS DMS) to transfer the data from on premises to the S3 bucket. Use the
Loader command for Neptune to move the data in bulk from the S3 bucket to the Neptune DB instance.
C. Use AWS DataSync to transfer the data from on premises to the S3 bucket. Use the Loader command for Neptune to
move the data in bulk from the S3 bucket to the Neptune DB instance.
D. Use the AWS CLI to transfer the data from on premises to the S3 bucket. Use the Copy command for Neptune to
move the data in bulk from the S3 bucket to the Neptune DB instance.
Correct Answer: C
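
For context, the Neptune "Loader command" mentioned in options B and C is an HTTP request to the cluster's loader endpoint, issued from inside the VPC so the data moves through the S3 VPC endpoint. A rough sketch; the endpoint, bucket, and IAM role are placeholders:

```python
import requests

response = requests.post(
    "https://my-neptune-cluster.cluster-xxxxxxxx.us-east-1.neptune.amazonaws.com:8182/loader",
    json={
        "source": "s3://fraud-detection-bucket/graph-data/",  # assumed S3 prefix
        "format": "csv",                                      # or a supported RDF format
        "iamRoleArn": "arn:aws:iam::123456789012:role/NeptuneLoadFromS3",
        "region": "us-east-1",
    },
)
print(response.json())  # returns a loadId that can be polled for bulk-load status
```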


QUESTION 4
A company is building a new web platform where user requests trigger an AWS Lambda function that performs an insert
into an Amazon Aurora MySQL DB cluster. Initial tests with less than 10 users on the new platform yielded successful
execution and fast response times. However, upon more extensive tests with the actual target of 3,000 concurrent
users, Lambda functions are unable to connect to the DB cluster and receive too many connections errors. Which of the
following will resolve this issue?
A. Edit the my.cnf file for the DB cluster to increase max_connections
B. Increase the instance size of the DB cluster
C. Change the DB cluster to Multi-AZ
D. Increase the number of Aurora Replicas
Correct Answer: B

QUESTION 5
A large financial services company requires that all data be encrypted in transit. A Developer is attempting to connect to
an Amazon RDS DB instance using the company VPC for the first time with credentials provided by a Database
Specialist. Other members of the Development team can connect, but this user is consistently receiving an error
indicating a communications link failure. The Developer asked the Database Specialist to reset the password a number
of times, but the error persists. Which step should be taken to troubleshoot this issue?
A. Ensure that the database option group for the RDS DB instance allows ingress from the Developer machine's IP
address
B. Ensure that the RDS DB instance's subnet group includes a public subnet to allow the Developer to connect
C. Ensure that the RDS DB instance has not reached its maximum connections limit
D. Ensure that the connection is using SSL and is addressing the port where the RDS DB instance is listening for
encrypted connections
Correct Answer: B


QUESTION 6
A media company is using Amazon RDS for PostgreSQL to store user data. The RDS DB instance currently has a
publicly accessible setting enabled and is hosted in a public subnet. Following a recent AWS Well-Architected
Framework review, a Database Specialist was given new security requirements.
1. Only certain on-premises corporate network IPs should connect to the DB instance.
2. Connectivity is allowed from the corporate network only.
Which combination of steps does the Database Specialist need to take to meet these new requirements? (Choose
three.)
A. Modify the pg_hba.conf file. Add the required corporate network IPs and remove the unwanted IPs.
B. Modify the associated security group. Add the required corporate network IPs and remove the unwanted IPs.
C. Move the DB instance to a private subnet using AWS DMS.
D. Enable VPC peering between the application host running on the corporate network and the VPC associated with the
DB instance.
E. Disable the publicly accessible setting.
F. Connect to the DB instance using private IPs and a VPN.
Correct Answer: DEF

QUESTION 7
A company is using Amazon Aurora PostgreSQL for the backend of its application. The system users are complaining
that the responses are slow. A database specialist has determined that the queries to Aurora take longer during peak
times. With the Amazon RDS Performance Insights dashboard, the load in the chart for average active sessions is often above the line that denotes maximum CPU usage and the wait state shows that most wait events are IO:XactSync.
What should the company do to resolve these performance issues?
A. Add an Aurora Replica to scale the read traffic.
B. Scale up the DB instance class.
C. Modify applications to commit transactions in batches.
D. Modify applications to avoid conflicts by taking locks.
Correct Answer: A


QUESTION 8
An electric utility company wants to store power plant sensor data in an Amazon DynamoDB table. The utility company
has over 100 power plants and each power plant has over 200 sensors that send data every 2 seconds. The sensor
data includes time with milliseconds precision, a value, and a fault attribute if the sensor is malfunctioning. Power plants
are identified by a globally unique identifier. Sensors are identified by a unique identifier within each power plant. A
database specialist needs to design the table to support an efficient method of finding all faulty sensors within a given
power plant.
Which schema should the database specialist use when creating the DynamoDB table to achieve the fastest query time
when looking for faulty sensors?
A. Use the plant identifier as the partition key and the measurement time as the sort key. Create a global secondary
index (GSI) with the plant identifier as the partition key and the fault attribute as the sort key.
B. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the
sort key. Create a local secondary index (LSI) on the fault attribute.
C. Create a composite of the plant identifier and sensor identifier as the partition key. Use the measurement time as the
sort key. Create a global secondary index (GSI) with the plant identifier as the partition key and the fault attribute as the
sort key.
D. Use the plant identifier as the partition key and the sensor identifier as the sort key. Create a local secondary index
(LSI) on the fault attribute.
Correct Answer: B

QUESTION 9
A company is looking to migrate a 1 TB Oracle database from on-premises to an Amazon Aurora PostgreSQL DB
cluster. The company's Database Specialist discovered that the Oracle database is storing 100 GB of large binary
objects (LOBs) across multiple tables. The Oracle database has a maximum LOB size of 500 MB with an average LOB
size of 350 MB. The Database Specialist has chosen AWS DMS to migrate the data with the largest replication
instances. How should the Database Specialist optimize the database migration using AWS DMS?
A. Create a single task using full LOB mode with a LOB chunk size of 500 MB to migrate the data and LOBs together
B. Create two tasks: task1 with LOB tables using full LOB mode with a LOB chunk size of 500 MB and task2 without
LOBs
C. Create two tasks: task1 with LOB tables using limited LOB mode with a maximum LOB size of 500 MB and task 2
without LOBs
D. Create a single task using limited LOB mode with a maximum LOB size of 500 MB to migrate data and LOBs
together
Correct Answer: C
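
For context on answer C, limited LOB mode is configured in the task settings of an AWS DMS replication task. A trimmed sketch with placeholder ARNs and table mapping; LobMaxSize is specified in kilobytes:

```python
import json

import boto3

dms = boto3.client("dms")

task_settings = {
    "TargetMetadata": {
        "SupportLobs": True,
        "FullLobMode": False,
        "LimitedSizeLobMode": True,
        "LobMaxSize": 512000,  # 500 MB expressed in KB, per the question
    }
}

# Task 1: only the LOB tables. A second task without LOB tables would use the
# same shape with a different selection rule.
dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-lob-tables",
    SourceEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:SRC",  # placeholders
    TargetEndpointArn="arn:aws:dms:us-east-1:123456789012:endpoint:TGT",
    ReplicationInstanceArn="arn:aws:dms:us-east-1:123456789012:rep:INSTANCE",
    MigrationType="full-load",
    TableMappings=json.dumps({"rules": [{
        "rule-type": "selection", "rule-id": "1", "rule-name": "lob-tables",
        "object-locator": {"schema-name": "APP", "table-name": "DOC%"},
        "rule-action": "include",
    }]}),
    ReplicationTaskSettings=json.dumps(task_settings),
)
```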


QUESTION 10
An AWS CloudFormation stack that included an Amazon RDS DB instance was accidentally deleted and recent data
was lost. A Database Specialist needs to add RDS settings to the CloudFormation template to reduce the chance of
accidental instance data loss in the future.
Which settings will meet this requirement? (Choose three.)
A. Set DeletionProtection to True
B. Set MultiAZ to True
C. Set TerminationProtection to True
D. Set DeleteAutomatedBackups to False
E. Set DeletionPolicy to Delete
F. Set DeletionPolicy to Retain
Correct Answer: ACF
Reference: https://docs.aws.amazon.com/AWSCloudFormation/latest/UserGuide/aws-attribute-deletionpolicy.html
https://aws.amazon.com/premiumsupport/knowledge-center/cloudformation-accidental-updates/
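
For reference, options A and F live in the CloudFormation template itself (option C, stack termination protection, is enabled on the stack rather than in the template). A minimal fragment, shown here as a Python dict that json.dumps could serialize into a template snippet; property values are illustrative:

```python
import json

template_fragment = {
    "Resources": {
        "MyDBInstance": {
            "Type": "AWS::RDS::DBInstance",
            "DeletionPolicy": "Retain",      # option F: keep the DB instance if the stack is deleted
            "Properties": {
                "Engine": "mysql",
                "DBInstanceClass": "db.t3.medium",
                "AllocatedStorage": "100",
                "DeletionProtection": True,  # option A: block direct DeleteDBInstance calls
            },
        }
    }
}

print(json.dumps(template_fragment, indent=2))
```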

QUESTION 11
A company is running Amazon RDS for MySQL for its workloads. There is downtime when AWS operating system
patches are applied during the Amazon RDS-specified maintenance window.
What is the MOST cost-effective action that should be taken to avoid downtime?
A. Migrate the workloads from Amazon RDS for MySQL to Amazon DynamoDB
B. Enable cross-Region read replicas and direct read traffic to them when Amazon RDS is down
C. Enable a read replica and direct read traffic to it when Amazon RDS is down
D. Enable an Amazon RDS for MySQL Multi-AZ configuration
Correct Answer: C


QUESTION 12
A company runs a customer relationship management (CRM) system that is hosted on-premises with a MySQL
database as the backend. A custom stored procedure is used to send email notifications to another system when data is
inserted into a table. The company has noticed that the performance of the CRM system has decreased due to
database reporting applications used by various teams. The company requires an AWS solution that would reduce
maintenance, improve performance, and accommodate the email notification feature.
Which AWS solution meets these requirements?
A. Use MySQL running on an Amazon EC2 instance with Auto Scaling to accommodate the reporting applications.
Configure a stored procedure and an AWS Lambda function that uses Amazon SES to send email notifications to the
other system.
B. Use Amazon Aurora MySQL in a multi-master cluster to accommodate the reporting applications. Configure Amazon
RDS event subscriptions to publish a message to an Amazon SNS topic and subscribe the other system's email
address to the topic.
C. Use MySQL running on an Amazon EC2 instance with a read replica to accommodate the reporting applications.
Configure Amazon SES integration to send email notifications to the other system.
D. Use Amazon Aurora MySQL with a read replica for the reporting applications. Configure a stored procedure and an
AWS Lambda function to publish a message to an Amazon SNS topic. Subscribe the other system's email address to
the topic.
Correct Answer: D


QUESTION 13
A ride-hailing application uses an Amazon RDS for MySQL DB instance as persistent storage for bookings. This
application is very popular and the company expects a tenfold increase in the user base in next few months. The
application experiences more traffic during the morning and evening hours.
This application has two parts:
1. An in-house booking component that accepts online bookings that directly correspond to simultaneous requests from users.
2. A third-party customer relationship management (CRM) component used by customer care representatives. The CRM uses queries to access booking data.
A database specialist needs to design a cost-effective database solution to handle this workload.
Which solution meets these requirements?
A. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes
and push the booking data to the RDS for MySQL DB instance used by the CRM.
B. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda
function to capture changes and push the booking data to an Amazon SQS queue. This triggers another Lambda
function that pulls data from Amazon SQS and writes it to the RDS for MySQL DB instance used by the CRM.
C. Use Amazon ElastiCache for Redis to accept the bookings. Associate an AWS Lambda function to capture changes
and push the booking data to an Amazon Redshift database used by the CRM.
D. Use Amazon DynamoDB to accept the bookings. Enable DynamoDB Streams and associate an AWS Lambda
function to capture changes and push the booking data to Amazon Athena, which is used by the CRM.
Correct Answer: A

Welcome to download the valid Pass4itsure DBS-C01 pdf

Free download: Google Drive
Amazon AWS DBS-C01 pdf https://drive.google.com/file/d/16YqKaTSxNTW4PhDIrMlTPcFuL76zLbgg/view?usp=sharing

Pass4itsure latest Amazon exam dumps coupon code free share

Summary:

New Amazon DBS-C01 exam questions from Pass4itsure DBS-C01 dumps! Welcome to download the newest Pass4itsure DBS-C01 dumps https://www.pass4itsure.com/aws-certified-database-specialty.html (145 Q&As), verified the latest DBS-C01 practice test questions with relevant answers.

Amazon AWS DBS-C01 dumps pdf free share https://drive.google.com/file/d/16YqKaTSxNTW4PhDIrMlTPcFuL76zLbgg/view?usp=sharing

Latest Amazon Exam Dumps

Exam Name Free Online practice test Free PDF Dumps Premium Exam Dumps
AWS Certified Professional
AWS Certified DevOps Engineer – Professional (DOP-C01) Free DOP-C01 practice test (Online) Free DOP-C01 PDF Dumps (Download) pass4itsure DOP-C01 Exam Dumps (Premium)
AWS Certified Solutions Architect – Professional (SAP-C01) Free SAP-C01 practice test (Online) Free SAP-C01 PDF Dumps (Download) pass4itsure SAP-C01 Exam Dumps (Premium)
AWS Certified Associate
AWS Certified Developer – Associate (DVA-C01) Free DVA-C01 practice test (Online) Free DVA-C01 PDF Dumps (Download) pass4itsure DVA-C01 Exam Dumps (Premium)
AWS Certified Solutions Architect – Associate (SAA-C01) Free SAA-C01 practice test (Online) Free SAA-C01 PDF Dumps (Download) pass4itsure SAA-C01 Exam Dumps (Premium)
AWS Certified Solutions Architect – Associate (SAA-C02) Free SAA-C02 practice test (Online) Free SAA-C02 PDF Dumps (Download) pass4itsure SAA-C02 Exam Dumps (Premium)
AWS Certified SysOps Administrator – Associate (SOA-C01) Free SOA-C01 practice test (Online) Free SOA-C01 PDF Dumps (Download) pass4itsure SOA-C01 Exam Dumps (Premium)
AWS Certified Foundational
AWS Certified Cloud Practitioner (CLF-C01) Free CLF-C01 practice test (Online) Free CLF-C01 PDF Dumps (Download) pass4itsure CLF-C01 Exam Dumps (Premium)
AWS Certified Specialty
AWS Certified Advanced Networking – Specialty (ANS-C00) Free ANS-C00 practice test (Online) Free ANS-C00 PDF Dumps (Download) pass4itsure ANS-C00 Exam Dumps (Premium)
AWS Certified Database – Specialty (DBS-C01) Free DBS-C01 practice test (Online) Free DBS-C01 PDF Dumps (Download) pass4itsure DBS-C01 Exam Dumps (Premium)
AWS Certified Alexa Skill Builder – Specialty (AXS-C01) Free AXS-C01 practice test (Online) Free AXS-C01 PDF Dumps (Download) pass4itsure AXS-C01 Exam Dumps (Premium)
AWS Certified Big Data – Speciality (BDS-C00) Free BDS-C00 practice test (Online) Free BDS-C00 PDF Dumps (Download) pass4itsure BDS-C00 Exam Dumps (Premium)
AWS Certified Machine Learning – Specialty (MLS-C01) Free MLS-C01 practice test (Online) Free MLS-C01 PDF Dumps (Download) pass4itsure MLS-C01 Exam Dumps (Premium)
AWS Certified Security – Specialty (SCS-C01) Free SCS-C01 practice test (Online) Free SCS-C01 PDF Dumps (Download) pass4itsure SCS-C01 Exam Dumps (Premium)

[2021.8] Pdf, Practice Exam Free, Amazon DAS-C01 Practice Questions Free Share

Are you preparing for the Amazon DAS-C01 exam? Well, this is the right place: we provide you with free Amazon DAS-C01 practice questions, free DAS-C01 exam sample questions, and a DAS-C01 PDF download. Pass the Amazon DAS-C01 exam with practice tests and exam dumps from Pass4itSure! Pass4itSure DAS-C01 dumps https://www.pass4itsure.com/das-c01.html (Q&As: 111).

Amazon DAS-C01 pdf free download

DAS-C01 pdf free https://drive.google.com/file/d/18Pv4W7ZW0JumeS8hAHSg5Sh2lk0ZJ3Jx/view?usp=sharing

Latest Amazon DAS-C01 practice exam questions

QUESTION 1
A financial services company needs to aggregate daily stock trade data from the exchanges into a data store. The
company requires that data be streamed directly into the data store, but also occasionally allows data to be modified
using SQL. The solution should integrate complex, analytic queries running with minimal latency. The solution must
provide a business intelligence dashboard that enables viewing of the top contributors to anomalies in stock prices.
Which solution meets the company's requirements?
A. Use Amazon Kinesis Data Firehose to stream data to Amazon S3. Use Amazon Athena as a data source for Amazon
QuickSight to create a business intelligence dashboard.
B. Use Amazon Kinesis Data Streams to stream data to Amazon Redshift. Use Amazon Redshift as a data source for
Amazon QuickSight to create a business intelligence dashboard.
C. Use Amazon Kinesis Data Firehose to stream data to Amazon Redshift. Use Amazon Redshift as a data source for
Amazon QuickSight to create a business intelligence dashboard.
D. Use Amazon Kinesis Data Streams to stream data to Amazon S3. Use Amazon Athena as a data source for Amazon
QuickSight to create a business intelligence dashboard.
Correct Answer: D

QUESTION 2
A retail company wants to use Amazon QuickSight to generate dashboards for web and in-store sales. A group of 50
business intelligence professionals will develop and use the dashboards. Once ready, the dashboards will be shared
with a group of 1,000 users.
The sales data comes from different stores and is uploaded to Amazon S3 every 24 hours. The data is partitioned by
year and month, and is stored in Apache Parquet format. The company is using the AWS Glue Data Catalog as its main
data catalog and Amazon Athena for querying. The total size of the uncompressed data that the dashboards query from
at any point is 200 GB.
Which configuration will provide the MOST cost-effective solution that meets these requirements?
A. Load the data into an Amazon Redshift cluster by using the COPY command. Configure 50 author users and 1,000
reader users. Use QuickSight Enterprise edition. Configure an Amazon Redshift data source with a direct query option.
B. Use QuickSight Standard edition. Configure 50 author users and 1,000 reader users. Configure an Athena data
source with a direct query option.
C. Use QuickSight Enterprise edition. Configure 50 author users and 1,000 reader users. Configure an Athena data
source and import the data into SPICE. Automatically refresh every 24 hours.
D. Use QuickSight Enterprise edition. Configure 1 administrator and 1,000 reader users. Configure an S3 data source
and import the data into SPICE. Automatically refresh every 24 hours.
Correct Answer: C

QUESTION 3
A company is building a data lake and needs to ingest data from a relational database that has time-series data. The
company wants to use managed services to accomplish this. The process needs to be scheduled daily and bring
incremental data only from the source into Amazon S3.
What is the MOST cost-effective approach to meet these requirements?
A. Use AWS Glue to connect to the data source using JDBC Drivers. Ingest incremental records only
using job bookmarks.
B. Use AWS Glue to connect to the data source using JDBC Drivers. Store the last updated key in an Amazon
DynamoDB table and ingest the data using the updated key as a filter.
C. Use AWS Glue to connect to the data source using JDBC Drivers and ingest the entire dataset. Use appropriate
Apache Spark libraries to compare the dataset, and find the delta.
D. Use AWS Glue to connect to the data source using JDBC Drivers and ingest the full data. Use AWS DataSync to
ensure the delta only is written into Amazon S3.
Correct Answer: B

QUESTION 4
A company wants to use an automatic machine learning (ML) Random Cut Forest (RCF) algorithm to visualize complex
real-world scenarios, such as detecting seasonality and trends, excluding outliers, and imputing missing values.
The team working on this project is non-technical and is looking for an out-of-the-box solution that will require the
LEAST amount of management overhead.
Which solution will meet these requirements?
A. Use an AWS Glue ML transform to create a forecast and then use Amazon QuickSight to visualize the data.
B. Use Amazon QuickSight to visualize the data and then use ML-powered forecasting to forecast the key business
metrics.
C. Use a pre-build ML AMI from the AWS Marketplace to create forecasts and then use Amazon QuickSight to visualize
the data.
D. Use calculated fields to create a new forecast and then use Amazon QuickSight to visualize the data.
Correct Answer: A
Reference: https://aws.amazon.com/blogs/big-data/query-visualize-and-forecast-trufactor-web-session-intelligence-with-aws-data-exchange/

QUESTION 5
An online retail company with millions of users around the globe wants to improve its ecommerce analytics capabilities.
Currently, clickstream data is uploaded directly to Amazon S3 as compressed files. Several times each day, an
application running on Amazon EC2 processes the data and makes search options and reports available for
visualization by editors and marketers. The company wants to make website clicks and aggregated data available to
editors and marketers in minutes to enable them to connect with users more effectively.
Which options will help meet these requirements in the MOST efficient way? (Choose two.)
A. Use Amazon Kinesis Data Firehose to upload compressed and batched clickstream records to Amazon Elasticsearch
Service.
B. Upload clickstream records to Amazon S3 as compressed files. Then use AWS Lambda to send data to Amazon
Elasticsearch Service from Amazon S3.
C. Use Amazon Elasticsearch Service deployed on Amazon EC2 to aggregate, filter, and process the data. Refresh
content performance dashboards in near-real time.
D. Use Kibana to aggregate, filter, and visualize the data stored in Amazon Elasticsearch Service. Refresh content
performance dashboards in near-real time.
E. Upload clickstream records from Amazon S3 to Amazon Kinesis Data Streams and use a Kinesis Data Streams
consumer to send records to Amazon Elasticsearch Service.
Correct Answer: CE

QUESTION 6
A company has a data lake on AWS that ingests sources of data from multiple business units and uses Amazon Athena
for queries. The storage layer is Amazon S3 using the AWS Glue Data Catalog. The company wants to make the data
available to its data scientists and business analysts. However, the company first needs to manage data access for
Athena based on user roles and responsibilities.
What should the company do to apply these access controls with the LEAST operational overhead?
A. Define security policy-based rules for the users and applications by role in AWS Lake Formation.
B. Define security policy-based rules for the users and applications by role in AWS Identity and Access Management
(IAM).
C. Define security policy-based rules for the tables and columns by role in AWS Glue.
D. Define security policy-based rules for the tables and columns by role in AWS Identity and Access Management
(IAM).
Correct Answer: D

QUESTION 7
A marketing company is using Amazon EMR clusters for its workloads. The company manually installs third-party
libraries on the clusters by logging in to the master nodes. A data analyst needs to create an automated solution to
replace the manual process.
Which options can fulfill these requirements? (Choose two.)
A. Place the required installation scripts in Amazon S3 and execute them using custom bootstrap actions.
B. Place the required installation scripts in Amazon S3 and execute them through Apache Spark in Amazon EMR.
C. Install the required third-party libraries in the existing EMR master node. Create an AMI out of that master node and
use that custom AMI to re-create the EMR cluster.
D. Use an Amazon DynamoDB table to store the list of required applications. Trigger an AWS Lambda function with
DynamoDB Streams to install the software.
E. Launch an Amazon EC2 instance with Amazon Linux and install the required third-party libraries on the instance.
Create an AMI and use that AMI to create the EMR cluster.
Correct Answer: AC
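
For context on option A, bootstrap actions are scripts in S3 that EMR runs on every node at cluster launch, which replaces the manual SSH installs. A trimmed run_job_flow sketch; bucket, script, and role names are assumptions:

```python
import boto3

emr = boto3.client("emr", region_name="us-east-1")

response = emr.run_job_flow(
    Name="analytics-cluster",
    ReleaseLabel="emr-6.10.0",
    Instances={
        "InstanceGroups": [
            {"Name": "Master", "InstanceRole": "MASTER",
             "InstanceType": "m5.xlarge", "InstanceCount": 1},
            {"Name": "Core", "InstanceRole": "CORE",
             "InstanceType": "m5.xlarge", "InstanceCount": 2},
        ],
        "KeepJobFlowAliveWhenNoSteps": True,
    },
    # The script installs the third-party libraries on each node as it joins the cluster.
    BootstrapActions=[{
        "Name": "install-third-party-libs",
        "ScriptBootstrapAction": {"Path": "s3://my-bootstrap-bucket/install_libs.sh"},
    }],
    JobFlowRole="EMR_EC2_DefaultRole",
    ServiceRole="EMR_DefaultRole",
)
print(response["JobFlowId"])
```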

QUESTION 8
A banking company is currently using an Amazon Redshift cluster with dense storage (DS) nodes to store sensitive
data. An audit found that the cluster is unencrypted. Compliance requirements state that a database with sensitive data
must be encrypted through a hardware security module (HSM) with automated key rotation.
Which combination of steps is required to achieve compliance? (Choose two.)
A. Set up a trusted connection with HSM using a client and server certificate with automatic key rotation.
B. Modify the cluster with an HSM encryption option and automatic key rotation.
C. Create a new HSM-encrypted Amazon Redshift cluster and migrate the data to the new cluster.
D. Enable HSM with key rotation through the AWS CLI.
E. Enable Elliptic Curve Diffie-Hellman Ephemeral (ECDHE) encryption in the HSM.
Correct Answer: BD
Reference: https://docs.aws.amazon.com/redshift/latest/mgmt/working-with-db-encryption.html

QUESTION 9
A company wants to enrich application logs in near-real-time and use the enriched dataset for further analysis. The
application is running on Amazon EC2 instances across multiple Availability Zones and storing its logs using Amazon
CloudWatch Logs. The enrichment source is stored in an Amazon DynamoDB table.
Which solution meets the requirements for the event collection and enrichment?
A. Use a CloudWatch Logs subscription to send the data to Amazon Kinesis Data Firehose. Use AWS Lambda to
transform the data in the Kinesis Data Firehose delivery stream and enrich it with the data in the DynamoDB table.
Configure Amazon S3 as the Kinesis Data Firehose delivery destination.
B. Export the raw logs to Amazon S3 on an hourly basis using the AWS CLI. Use AWS Glue crawlers to catalog the
logs. Set up an AWS Glue connection for the DynamoDB table and set up an AWS Glue ETL job to enrich the data.
Store the enriched data in Amazon S3.
C. Configure the application to write the logs locally and use Amazon Kinesis Agent to send the data to Amazon Kinesis
Data Streams. Configure a Kinesis Data Analytics SQL application with the Kinesis data stream as the source. Join the
SQL application input stream with DynamoDB records, and then store the enriched output stream in Amazon S3 using
Amazon Kinesis Data Firehose.
D. Export the raw logs to Amazon S3 on an hourly basis using the AWS CLI. Use Apache Spark SQL on Amazon EMR
to read the logs from Amazon S3 and enrich the records with the data from DynamoDB. Store the enriched data in
Amazon S3.
Correct Answer: C

QUESTION 10
A technology company is creating a dashboard that will visualize and analyze time-sensitive data. The data will come in
through Amazon Kinesis Data Firehose with the buffer interval set to 60 seconds. The dashboard must support near-real-time data.
Which visualization solution will meet these requirements?
A. Select Amazon Elasticsearch Service (Amazon ES) as the endpoint for Kinesis Data Firehose. Set up a Kibana
dashboard using the data in Amazon ES with the desired analyses and visualizations.
B. Select Amazon S3 as the endpoint for Kinesis Data Firehose. Read data into an Amazon SageMaker Jupyter
notebook and carry out the desired analyses and visualizations.
C. Select Amazon Redshift as the endpoint for Kinesis Data Firehose. Connect Amazon QuickSight with SPICE to
Amazon Redshift to create the desired analyses and visualizations.
D. Select Amazon S3 as the endpoint for Kinesis Data Firehose. Use AWS Glue to catalog the data and Amazon
Athena to query it. Connect Amazon QuickSight with SPICE to Athena to create the desired analyses and
visualizations.
Correct Answer: A

QUESTION 11
A company needs to store objects containing log data in JSON format. The objects are generated by eight applications
running in AWS. Six of the applications generate a total of 500 KiB of data per second, and two of the applications can
generate up to 2 MiB of data per second.
A data engineer wants to implement a scalable solution to capture and store usage data in an Amazon S3 bucket. The
usage data objects need to be reformatted, converted to .csv format, and then compressed before they are stored in
Amazon S3. The company requires the solution to include the least custom code possible and has authorized the data
engineer to request a service quota increase if needed.
Which solution meets these requirements?
A. Configure an Amazon Kinesis Data Firehose delivery stream for each application. Write AWS Lambda functions to
read log data objects from the stream for each application. Have the function perform reformatting and .csv conversion.
Enable compression on all the delivery streams.
B. Configure an Amazon Kinesis data stream with one shard per application. Write an AWS Lambda function to read
usage data objects from the shards. Have the function perform .csv conversion, reformatting, and compression of the
data. Have the function store the output in Amazon S3.
C. Configure an Amazon Kinesis data stream for each application. Write an AWS Lambda function to read usage data
objects from the stream for each application. Have the function perform .csv conversion, reformatting, and compression
of the data. Have the function store the output in Amazon S3.
D. Store usage data objects in an Amazon DynamoDB table. Configure a DynamoDB stream to copy the objects to an
S3 bucket. Configure an AWS Lambda function to be triggered when objects are written to the S3 bucket. Have the
function convert the objects into .csv format.
Correct Answer: B

QUESTION 12
An online retail company is migrating its reporting system to AWS. The company's legacy system runs data processing
on online transactions using a complex series of nested Apache Hive queries. Transactional data is exported from the
online system to the reporting system several times a day. Schemas in the files are stable between updates.
A data analyst wants to quickly migrate the data processing to AWS, so any code changes should be minimized. To
keep storage costs low, the data analyst decides to store the data in Amazon S3. It is vital that the data from the reports
and associated analytics is completely up to date based on the data in Amazon S3.
Which solution meets these requirements?
A. Create an AWS Glue Data Catalog to manage the Hive metadata. Create an AWS Glue crawler over Amazon S3 that
runs when data is refreshed to ensure that data changes are updated. Create an Amazon EMR cluster and use the
metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
B. Create an AWS Glue Data Catalog to manage the Hive metadata. Create an Amazon EMR cluster with consistent
view enabled. Run emrfs sync before each analytics step to ensure data changes are updated. Create an EMR cluster
and use the metadata in the AWS Glue Data Catalog to run Hive processing queries in Amazon EMR.
C. Create an Amazon Athena table with CREATE TABLE AS SELECT (CTAS) to ensure data is refreshed from
underlying queries against the raw dataset. Create an AWS Glue Data Catalog to manage the Hive metadata over the
CTAS table. Create an Amazon EMR cluster and use the metadata in the AWS Glue Data Catalog to run Hive
processing queries in Amazon EMR.
D. Use an S3 Select query to ensure that the data is properly updated. Create an AWS Glue Data Catalog to manage
the Hive metadata over the S3 Select table. Create an Amazon EMR cluster and use the metadata in the AWS Glue
Data Catalog to run Hive processing queries in Amazon EMR.
Correct Answer: A
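
For context on answer A, the Glue crawler keeps the Data Catalog (and therefore the Hive metastore used by EMR) in sync with the refreshed files in S3. A minimal sketch; names, role, and schedule are assumptions:

```python
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Crawl the S3 prefix that receives the exported transactional data and
# register/update the corresponding tables in the Glue Data Catalog.
glue.create_crawler(
    Name="reporting-hive-crawler",
    Role="arn:aws:iam::123456789012:role/GlueCrawlerRole",
    DatabaseName="reporting_db",
    Targets={"S3Targets": [{"Path": "s3://reporting-bucket/transactions/"}]},
    Schedule="cron(0 */6 * * ? *)",  # or start on demand after each data refresh
)

glue.start_crawler(Name="reporting-hive-crawler")
```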

QUESTION 13
A media company wants to perform machine learning and analytics on the data residing in its Amazon S3 data lake.
There are two data transformation requirements that will enable the consumers within the company to create reports:
1. Daily transformations of 300 GB of data with different file formats landing in Amazon S3 at a scheduled time.
2. One-time transformations of terabytes of archived data residing in the S3 data lake.
Which combination of solutions cost-effectively meets the company's requirements for transforming the data? (Choose
three.)
A. For daily incoming data, use AWS Glue crawlers to scan and identify the schema.
B. For daily incoming data, use Amazon Athena to scan and identify the schema.
C. For daily incoming data, use Amazon Redshift to perform transformations.
D. For daily incoming data, use AWS Glue workflows with AWS Glue jobs to perform transformations.
E. For archived data, use Amazon EMR to perform data transformations.
F. For archived data, use Amazon SageMaker to perform data transformations.
Correct Answer: BCD

Pass4itsure Amazon exam dumps coupon code 2021


DAS-C01 pdf free share https://drive.google.com/file/d/18Pv4W7ZW0JumeS8hAHSg5Sh2lk0ZJ3Jx/view?usp=sharing

Valid Amazon ANS-C00 Practice Questions Free Share
[2021.5] ANS-C00 Questions https://www.examdemosimulation.com/valid-amazon-aws-ans-c00-practice-questions-free-share-from-pass4itsure-2/

Valid Amazon DBS-C01 Practice Questions Free Share
[2021.5] DBS-C01 Questions https://www.examdemosimulation.com/valid-amazon-aws-dbs-c01-practice-questions-free-share-from-pass4itsure/

P.S.

Pass4itSure provides updated Amazon DAS-C01 dumps as the practice test and pdf https://www.pass4itsure.com/das-c01.html (Updated: Aug 02, 2021). Pass4itSure DAS-C01 dumps help you prepare for the Amazon DAS-C01 exam quickly!