SAA-C03 Exam Dumps
Exam Number: SAA-C03 | Length of test: 130 mins |
Exam Name: AWS Certified Solutions Architect – Associate | Number of questions in the actual exam: 65 |
Format: PDF, VPLUS | Passing Score: 720/1000 |
Total Questions: 918
Premium PDF file (2 months of updates) | Last updated: 03-12-2024 |
Free Premium VPLUS file | Last updated: 03-12-2024 |
Download practice test questions
Title | Size | Hits | Download |
---|---|---|---|
Amazon.SAA-C03.vAug-2024.by.Runi.214q | 314.00 KB | 52 | Download |
Amazon.SAA-C03.vAug-2024.by.Runi.214q | 1.56 MB | 81 | Download |
Amazon.SAA-C03.vOct-2023.by.HerryWary.185q | 15.99 MB | 90 | Download |
Amazon.SAA-C03.vNov-2023.by.Pen.212q | 1.38 MB | 102 | Download |
Amazon.SAA-C03.vSep-2023.by.Peter.177q | 10.48 MB | 83 | Download |
Amazon.SAA-C03.vJuly-2023.by.Musk.191q | 10.29 MB | 80 | Download |
Amazon.SAA-C03.vJun-2023.by.Roy.213q | 6.25 MB | 86 | Download |
Some new sample questions:
Question:
A company has developed an API using Amazon API Gateway REST API and AWS Lambda. How can latency be reduced for users worldwide?
A. Deploy the REST API as an edge-optimized API endpoint. Enable caching. Enable content encoding to compress data in transit.
B. Deploy the REST API as a Regional API endpoint. Enable caching. Enable content encoding to compress data in transit.
C. Deploy the REST API as an edge-optimized API endpoint. Enable caching. Configure reserved concurrency for Lambda functions.
D. Deploy the REST API as a Regional API endpoint. Enable caching. Configure reserved concurrency for Lambda functions.
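For context on option A above, here is a minimal boto3 sketch of creating an edge-optimized REST API, enabling the stage cache, and turning on payload compression. The API name, stage name, and size values are illustrative, and a "prod" stage is assumed to have been deployed already.

```python
import boto3

apigateway = boto3.client("apigateway")

# Edge-optimized endpoint: requests are routed through CloudFront edge locations.
api = apigateway.create_rest_api(
    name="worldwide-api",  # hypothetical name
    endpointConfiguration={"types": ["EDGE"]},
)

# Enable the stage cache on the already-deployed "prod" stage.
apigateway.update_stage(
    restApiId=api["id"],
    stageName="prod",
    patchOperations=[
        {"op": "replace", "path": "/cacheClusterEnabled", "value": "true"},
        {"op": "replace", "path": "/cacheClusterSize", "value": "0.5"},  # cache size in GB
    ],
)

# Content encoding: compress responses larger than 1 KB in transit.
apigateway.update_rest_api(
    restApiId=api["id"],
    patchOperations=[
        {"op": "replace", "path": "/minimumCompressionSize", "value": "1024"},
    ],
)
```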
Question:
A company needs to ingest and analyze telemetry data from vehicles at scale for machine learning and reporting.
Which solution will meet these requirements?
A. Use Amazon Timestream for LiveAnalytics to store data points. Grant Amazon SageMaker permission to access the data. Use Amazon QuickSight to visualize the data.
B. Use Amazon DynamoDB to store data points. Use DynamoDB Connector to ingest data into Amazon EMR for processing. Use Amazon QuickSight to visualize the data.
C. Use Amazon Neptune to store data points. Use Amazon Kinesis Data Streams to ingest data into a Lambda function for processing. Use Amazon QuickSight to visualize the data.
D. Use Amazon Timestream for LiveAnalytics to store data points. Grant Amazon SageMaker permission to access the data. Use Amazon Athena to visualize the data.
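Options A and D both land the telemetry in Amazon Timestream for LiveAnalytics. A minimal boto3 sketch of writing one vehicle data point, assuming the database and table already exist (their names are illustrative):

```python
import time
import boto3

timestream = boto3.client("timestream-write")

timestream.write_records(
    DatabaseName="vehicle-telemetry",  # hypothetical, pre-created database
    TableName="datapoints",            # hypothetical, pre-created table
    Records=[{
        "Dimensions": [{"Name": "vehicle_id", "Value": "vin-1234567890"}],
        "MeasureName": "speed_kph",
        "MeasureValue": "87.5",
        "MeasureValueType": "DOUBLE",
        "Time": str(int(time.time() * 1000)),
        "TimeUnit": "MILLISECONDS",
    }],
)
```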
Question:
A company needs a cloud-based solution for backup, recovery, and archiving while retaining encryption key material control.
Which combination of solutions will meet these requirements? (Select TWO)
A. Create an AWS Key Management Service (AWS KMS) key without key material. Import the company’s key material into the KMS key.
B. Create an AWS KMS encryption key that contains key material generated by AWS KMS.
C. Store the data in Amazon S3 Standard-Infrequent Access (S3 Standard-IA). Use S3 Bucket Keys with AWS KMS keys.
D. Store the data in an Amazon S3 Glacier storage class. Use server-side encryption with customer-provided keys (SSE-C).
E. Store the data in AWS Snowball devices. Use server-side encryption with AWS KMS keys (SSE-KMS).
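Option A refers to a KMS key created with no key material (Origin=EXTERNAL) into which the company imports its own material. A rough sketch of that flow, using the third-party cryptography package for the RSA-OAEP wrapping step; the key description and the random stand-in key material are illustrative:

```python
import os
import boto3
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import padding

kms = boto3.client("kms")

# 1. Create a KMS key that contains no key material.
key_id = kms.create_key(
    Description="backup-archive-key (customer-supplied material)",  # hypothetical
    Origin="EXTERNAL",
)["KeyMetadata"]["KeyId"]

# 2. Get a wrapping public key and an import token from KMS.
params = kms.get_parameters_for_import(
    KeyId=key_id,
    WrappingAlgorithm="RSAES_OAEP_SHA_256",
    WrappingKeySpec="RSA_2048",
)

# 3. Wrap the company's key material (random stand-in bytes here) and import it.
key_material = os.urandom(32)  # in practice, the company's own 256-bit key material
public_key = serialization.load_der_public_key(params["PublicKey"])
wrapped = public_key.encrypt(
    key_material,
    padding.OAEP(mgf=padding.MGF1(hashes.SHA256()), algorithm=hashes.SHA256(), label=None),
)
kms.import_key_material(
    KeyId=key_id,
    ImportToken=params["ImportToken"],
    EncryptedKeyMaterial=wrapped,
    ExpirationModel="KEY_MATERIAL_DOES_NOT_EXPIRE",
)
```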
Question:
How can a company detect and notify security teams about PII in S3 buckets?
A. Use Amazon Macie. Create an EventBridge rule for SensitiveData findings and send an SNS notification.
B. Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SNS notification.
C. Use Amazon Macie. Create an EventBridge rule for SensitiveData:S3Object/Personal findings and send an SQS notification.
D. Use Amazon GuardDuty. Create an EventBridge rule for CRITICAL findings and send an SQS notification.
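Option A wires Amazon Macie findings into EventBridge and on to SNS. A minimal boto3 sketch, assuming Macie is already enabled and the SNS topic exists; the rule name, topic ARN, and prefix filter are illustrative:

```python
import json
import boto3

events = boto3.client("events")
SNS_TOPIC_ARN = "arn:aws:sns:us-east-1:111122223333:security-notifications"  # hypothetical

# Match Macie sensitive-data findings (finding types start with "SensitiveData").
events.put_rule(
    Name="macie-sensitive-data-findings",
    EventPattern=json.dumps({
        "source": ["aws.macie"],
        "detail-type": ["Macie Finding"],
        "detail": {"type": [{"prefix": "SensitiveData"}]},
    }),
)

# Send matching findings to the security team's SNS topic.
events.put_targets(
    Rule="macie-sensitive-data-findings",
    Targets=[{"Id": "security-sns", "Arn": SNS_TOPIC_ARN}],
)
```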
Question:
How can trade data from DynamoDB be ingested into an S3 data lake for near real-time analysis?
A. Use DynamoDB Streams to invoke a Lambda function that writes to S3.
B. Use DynamoDB Streams to invoke a Lambda function that writes to Data Firehose, which writes to S3.
C. Enable Kinesis Data Streams on DynamoDB. Configure it to invoke a Lambda function that writes to S3.
D. Enable Kinesis Data Streams on DynamoDB. Use Data Firehose to write to S3.
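Option B chains DynamoDB Streams, a Lambda function, and Amazon Data Firehose. A rough sketch of the Lambda handler, assuming the table's stream is enabled with new item images and a Firehose delivery stream (name illustrative) already delivers to the S3 data lake:

```python
import json
import boto3

firehose = boto3.client("firehose")
DELIVERY_STREAM = "trades-to-data-lake"  # hypothetical Firehose delivery stream

def handler(event, context):
    records = []
    for record in event["Records"]:
        # New item image in DynamoDB JSON form; present for INSERT and MODIFY events.
        new_image = record.get("dynamodb", {}).get("NewImage")
        if new_image:
            records.append({"Data": (json.dumps(new_image) + "\n").encode("utf-8")})
    if records:
        # PutRecordBatch accepts up to 500 records per call; keep the stream trigger's
        # batch size below that limit.
        firehose.put_record_batch(DeliveryStreamName=DELIVERY_STREAM, Records=records)
```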
…….
Some new questions:
Q
A company is building a new furniture inventory application. The company has deployed the application on a fleet of Amazon EC2 instances across multiple Availability Zones. The EC2 instances run behind an Application Load Balancer (ALB) in their VPC.
A solutions architect has observed that incoming traffic seems to favor one EC2 instance, resulting in latency for some requests.
What should the solutions architect do to resolve this issue?
A. Disable session affinity (sticky sessions) on the ALB.
B. Replace the ALB with a Network Load Balancer.
C. Increase the number of EC2 instances in each Availability Zone.
D. Adjust the frequency of the health checks on the ALB’s target group.
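Option A is a one-call change. A minimal boto3 sketch of turning off stickiness on the ALB's target group (the target group ARN is illustrative):

```python
import boto3

elbv2 = boto3.client("elbv2")

elbv2.modify_target_group_attributes(
    TargetGroupArn="arn:aws:elasticloadbalancing:us-east-1:111122223333:targetgroup/inventory/abc123",  # hypothetical
    Attributes=[{"Key": "stickiness.enabled", "Value": "false"}],
)
```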
Q
A solutions architect is designing an application that helps users fill out and submit registration forms. The solutions architect plans to use a two-tier architecture that includes a web application server tier and a worker tier.
The application needs to process submitted forms quickly. The application needs to process each form exactly once. The solution must ensure that no data is lost.
Which solution will meet these requirements?
A. Use an Amazon Simple Queue Service (Amazon SQS) FIFO queue between the web application server tier and the worker tier to store and forward form data.
B. Use an Amazon API Gateway HTTP API between the web application server tier and the worker tier to store and forward form data.
C. Use an Amazon Simple Queue Service (Amazon SQS) standard queue between the web application server tier and the worker tier to store and forward form data.
D. Use an AWS Step Functions workflow. Create a synchronous workflow between the web application server tier and the worker tier that stores and forwards form data.
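Option A relies on an SQS FIFO queue for ordered, exactly-once processing. A minimal boto3 sketch of creating such a queue and enqueueing a submitted form; the queue name, message group ID, and form payload are illustrative:

```python
import json
import boto3

sqs = boto3.client("sqs")

# FIFO queue names must end in ".fifo"; content-based deduplication lets SQS drop
# duplicate submissions within the 5-minute deduplication interval.
queue_url = sqs.create_queue(
    QueueName="registration-forms.fifo",
    Attributes={"FifoQueue": "true", "ContentBasedDeduplication": "true"},
)["QueueUrl"]

sqs.send_message(
    QueueUrl=queue_url,
    MessageBody=json.dumps({"form_id": "f-001", "name": "Jane Doe"}),  # hypothetical form
    MessageGroupId="registration-forms",
)
```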
Q
A company runs an application that stores and shares photos. Users upload the photos to an Amazon S3 bucket. Every day, users upload approximately 150 photos. The company wants to design a solution that creates a thumbnail of each new photo and stores the thumbnail in a second S3 bucket.
Which solution will meet these requirements MOST cost-effectively?
A. Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a long-running Amazon EMR cluster. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.
B. Configure an Amazon EventBridge scheduled rule to invoke a script every minute on a memory-optimized Amazon EC2 instance that is always on. Configure the script to generate thumbnails for the photos that do not have thumbnails. Configure the script to upload the thumbnails to the second S3 bucket.
C. Configure an S3 event notification to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to the second S3 bucket.
D. Configure S3 Storage Lens to invoke an AWS Lambda function each time a user uploads a new photo to the application. Configure the Lambda function to generate a thumbnail and to upload the thumbnail to a second S3 bucket.
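Option C pairs an S3 event notification with a Lambda function. A rough sketch of the handler, assuming Pillow is packaged with the function (for example as a layer); the destination bucket name and thumbnail size are illustrative:

```python
from io import BytesIO
from urllib.parse import unquote_plus

import boto3
from PIL import Image  # assumes Pillow is bundled with the function or provided via a layer

s3 = boto3.client("s3")
DEST_BUCKET = "photo-thumbnails"  # hypothetical second bucket

def handler(event, context):
    for record in event["Records"]:
        bucket = record["s3"]["bucket"]["name"]
        key = unquote_plus(record["s3"]["object"]["key"])

        # Download the new photo, shrink it in place, and upload the thumbnail.
        photo = Image.open(BytesIO(s3.get_object(Bucket=bucket, Key=key)["Body"].read()))
        photo.thumbnail((200, 200))
        buffer = BytesIO()
        photo.save(buffer, format=photo.format or "JPEG")
        buffer.seek(0)
        s3.put_object(Bucket=DEST_BUCKET, Key=f"thumbnails/{key}", Body=buffer)
```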
Q
A company is deploying a new gaming application on Amazon EC2 instances. The gaming application needs to have access to shared storage.
The company requires a high-performance solution to give the application the ability to use an existing custom protocol to access shared storage. The solution must ensure low latency and must be operationally efficient.
Which solution will meet these requirements?
A. Create an Amazon FSx File Gateway. Create a file share that uses the existing custom protocol. Connect the EC2 instances that host the application to the file share.
B. Create an Amazon EC2 Windows instance. Install and configure a Windows file share role on the instance. Connect the EC2 instances that host the application to the file share.
C. Create an Amazon Elastic File System (Amazon EFS) file system. Configure the file system to support Lustre. Connect the EC2 instances that host the application to the file system.
D. Create an Amazon FSx for Lustre file system. Connect the EC2 instances that host the application to the file system.
Some new questions:
Q
A company is migrating its databases to Amazon RDS for PostgreSQL. The company is migrating its applications to Amazon EC2 instances. The company wants to optimize costs for long-running workloads.
Which solution will meet this requirement MOST cost-effectively?
A. Use On-Demand Instances for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year Compute Savings Plan with the No Upfront option for the EC2 instances.
B. Purchase Reserved Instances for a 1 year term with the No Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year EC2 Instance Savings Plan with the No Upfront option for the EC2 instances.
C. Purchase Reserved Instances for a 1 year term with the Partial Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 1 year EC2 Instance Savings Plan with the Partial Upfront option for the EC2 instances.
D. Purchase Reserved Instances for a 3 year term with the All Upfront option for the Amazon RDS for PostgreSQL workloads. Purchase a 3 year EC2 Instance Savings Plan with the All Upfront option for the EC2 instances.
Q
A company recently migrated a monolithic application to an Amazon EC2 instance and Amazon RDS. The application has tightly coupled modules. The existing design of the application gives the application the ability to run on only a single EC2 instance.
The company has noticed high CPU utilization on the EC2 instance during peak usage times. The high CPU utilization corresponds to degraded performance on Amazon RDS for read requests. The company wants to reduce the high CPU utilization and improve read request performance.
Which solution will meet these requirements?
A. Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Configure an RDS read replica for read requests.
B. Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Add an RDS read replica and redirect all read/write traffic to the replica.
C. Configure an Auto Scaling group with a minimum size of 1 and maximum size of 2. Resize the RDS DB instance to an instance type that has more CPU capacity.
D. Resize the EC2 instance to an EC2 instance type that has more CPU capacity. Configure an Auto Scaling group with a minimum and maximum size of 1. Resize the RDS DB instance to an instance type that has more CPU capacity.
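Options A and B both add an RDS read replica; the difference is where traffic goes, since replicas only serve reads. A minimal boto3 sketch of creating one, with illustrative identifiers:

```python
import boto3

rds = boto3.client("rds")

rds.create_db_instance_read_replica(
    DBInstanceIdentifier="app-db-read-replica",   # hypothetical replica name
    SourceDBInstanceIdentifier="app-db",          # hypothetical primary instance
)
# The application then sends read-only queries to the replica's endpoint
# while writes continue to go to the primary instance.
```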
Q
A company stores data in an on-premises Oracle relational database. The company needs to make the data available in Amazon Aurora PostgreSQL for analysis. The company uses an AWS Site-to-Site VPN connection to connect its on-premises network to AWS.
The company must capture the changes that occur to the source database during the migration to Aurora PostgreSQL.
Which solution will meet these requirements?
A. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use the AWS Database Migration Service (AWS DMS) full-load migration task to migrate the data.
B. Use AWS DataSync to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
C. Use the AWS Schema Conversion Tool (AWS SCT) to convert the Oracle schema to Aurora PostgreSQL schema. Use AWS Database Migration Service (AWS DMS) to migrate the existing data and replicate the ongoing changes.
D. Use an AWS Snowball device to migrate the data to an Amazon S3 bucket. Import the S3 data to Aurora PostgreSQL by using the Aurora PostgreSQL aws_s3 extension.
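Option C combines AWS SCT for the schema with an AWS DMS task that performs a full load plus ongoing change data capture. A minimal boto3 sketch of the task definition; the ARNs and table mapping are illustrative, and the endpoints and replication instance are assumed to already exist:

```python
import json
import boto3

dms = boto3.client("dms")

dms.create_replication_task(
    ReplicationTaskIdentifier="oracle-to-aurora-postgresql",
    SourceEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:oracle-source",         # hypothetical
    TargetEndpointArn="arn:aws:dms:us-east-1:111122223333:endpoint:aurora-pg-target",      # hypothetical
    ReplicationInstanceArn="arn:aws:dms:us-east-1:111122223333:rep:replication-instance",  # hypothetical
    # Full load of existing data, then replicate ongoing changes (CDC).
    MigrationType="full-load-and-cdc",
    TableMappings=json.dumps({
        "rules": [{
            "rule-type": "selection",
            "rule-id": "1",
            "rule-name": "include-all",
            "object-locator": {"schema-name": "%", "table-name": "%"},
            "rule-action": "include",
        }],
    }),
)
```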
Q
A company has stored millions of objects across multiple prefixes in an Amazon S3 bucket by using the Amazon S3 Glacier Deep Archive storage class. The company needs to delete all data older than 3 years except for a subset of data that must be retained. The company has identified the data that must be retained and wants to implement a serverless solution.
Which solution will meet these requirements?
A. Use S3 Inventory to list all objects. Use the AWS CLI to create a script that runs on an Amazon EC2 instance that deletes objects from the inventory list.
B. Use AWS Batch to delete objects older than 3 years except for the data that must be retained.
C. Provision an AWS Glue crawler to query objects older than 3 years. Save the manifest file of old objects. Create a script to delete objects in the manifest.
D. Enable S3 Inventory. Create an AWS Lambda function to filter and delete objects. Invoke the Lambda function with S3 Batch Operations to delete objects by using the inventory reports.
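Option D has S3 Batch Operations feed inventory entries to a Lambda function that decides, object by object, whether to delete. A rough sketch of such a handler; the retained-key set and the age check are illustrative stand-ins for the company's actual retention rules:

```python
from datetime import datetime, timedelta, timezone

import boto3

s3 = boto3.client("s3")
RETAINED_KEYS = {"archive/must-keep/ledger-2020.csv"}          # hypothetical keep list
CUTOFF = datetime.now(timezone.utc) - timedelta(days=3 * 365)  # "older than 3 years"

def handler(event, context):
    results = []
    for task in event["tasks"]:
        bucket = task["s3BucketArn"].split(":::")[-1]
        key = task["s3Key"]
        try:
            last_modified = s3.head_object(Bucket=bucket, Key=key)["LastModified"]
            if key not in RETAINED_KEYS and last_modified < CUTOFF:
                s3.delete_object(Bucket=bucket, Key=key)
                code, message = "Succeeded", "Deleted"
            else:
                code, message = "Succeeded", "Retained"
        except Exception as exc:
            code, message = "PermanentFailure", str(exc)
        results.append({"taskId": task["taskId"], "resultCode": code, "resultString": message})

    # Response shape required by S3 Batch Operations for Lambda-invoking jobs.
    return {
        "invocationSchemaVersion": event["invocationSchemaVersion"],
        "treatMissingKeysAs": "PermanentFailure",
        "invocationId": event["invocationId"],
        "results": results,
    }
```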
……………
Some new questions:
Q
A company uses 50 TB of data for reporting. The company wants to move this data from on premises to AWS. A custom application in the company’s data center runs a weekly data transformation job. The company plans to pause the application until the data transfer is complete and needs to begin the transfer process as soon as possible.
The data center does not have any available network bandwidth for additional workloads. A solutions architect must transfer the data and must configure the transformation job to continue to run in the AWS Cloud.
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS DataSync to move the data Create a custom transformation job by using AWS Glue.
B. Order an AWS Snowcone device to move the data. Deploy the transformation application to the device.
C. Order an AWS Snowball Edge Storage Optimized device. Copy the data to the device. Create a custom transformation job by using AWS Glue.
D. Order an AWS Snowball Edge Storage Optimized device that includes Amazon EC2 compute. Copy the data to the device. Create a new EC2 instance on AWS to run the transformation application.
Q
A company runs a stateful production application on Amazon EC2 instances. The application requires at least two EC2 instances to always be running.
A solutions architect needs to design a highly available and fault-tolerant architecture for the application. The solutions architect creates an Auto Scaling group of EC2 instances.
Which set of additional steps should the solutions architect take to meet these requirements?
A. Set the Auto Scaling group’s minimum capacity to two. Deploy one On-Demand Instance in one Availability Zone and one On-Demand Instance in a second Availability Zone.
B. Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two On-Demand Instances in a second Availability Zone.
C. Set the Auto Scaling group’s minimum capacity to two. Deploy four Spot Instances in one Availability Zone.
D. Set the Auto Scaling group’s minimum capacity to four. Deploy two On-Demand Instances in one Availability Zone and two Spot Instances in a second Availability Zone.
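The options above differ only in the group's minimum capacity, the Availability Zone spread, and the purchase option. A minimal boto3 sketch of the underlying Auto Scaling group call, with illustrative names, subnets (one per Availability Zone), and a capacity value to adjust per option:

```python
import boto3

autoscaling = boto3.client("autoscaling")
MIN_CAPACITY = 4  # the options vary this value (and the On-Demand vs. Spot mix)

autoscaling.create_auto_scaling_group(
    AutoScalingGroupName="stateful-app-asg",                                        # hypothetical
    LaunchTemplate={"LaunchTemplateName": "stateful-app-lt", "Version": "$Latest"}, # hypothetical
    MinSize=MIN_CAPACITY,
    MaxSize=MIN_CAPACITY,
    DesiredCapacity=MIN_CAPACITY,
    # Two subnets in two different Availability Zones.
    VPCZoneIdentifier="subnet-0aaa1111,subnet-0bbb2222",
)
```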
Q
A company is designing an event-driven order processing system. Each order requires multiple validation steps after the order is created. An independent AWS Lambda function performs each validation step. Each validation step is independent from the other validation steps. Individual validation steps need only a subset of the order event information.
The company wants to ensure that each validation step Lambda function has access to only the information from the order event that the function requires. The components of the order processing system should be loosely coupled to accommodate future business changes.
Which solution will meet these requirements?
A. Create an Amazon Simple Queue Service (Amazon SQS) queue for each validation step. Create a new Lambda function to transform the order data to the format that each validation step requires and to publish the messages to the appropriate SQS queues. Subscribe each validation step Lambda function to its corresponding SQS queue.
B. Create an Amazon Simple Notification Service (Amazon SNS) topic. Subscribe the validation step Lambda functions to the SNS topic. Use message body filtering to send only the required data to each subscribed Lambda function.
C. Create an Amazon EventBridge event bus. Create an event rule for each validation step. Configure the input transformer to send only the required data to each target validation step Lambda function.
D. Create an Amazon Simple Queue Service (Amazon SQS) queue. Create a new Lambda function to subscribe to the SQS queue and to transform the order data to the format that each validation step requires. Use the new Lambda function to perform synchronous invocations of the validation step Lambda functions in parallel on separate threads.
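Option C uses an EventBridge input transformer so each validation Lambda function receives only the fields it needs. A minimal boto3 sketch for one validation step; the event bus, source, field paths, and ARNs are illustrative:

```python
import json
import boto3

events = boto3.client("events")
BUS = "orders"  # hypothetical custom event bus

# Rule for the address-validation step: match order-created events on the bus.
events.put_rule(
    Name="order-created-address-validation",
    EventBusName=BUS,
    EventPattern=json.dumps({
        "source": ["com.example.orders"],
        "detail-type": ["OrderCreated"],
    }),
)

# The input transformer forwards only the order ID and shipping address to this step.
events.put_targets(
    Rule="order-created-address-validation",
    EventBusName=BUS,
    Targets=[{
        "Id": "address-validation",
        "Arn": "arn:aws:lambda:us-east-1:111122223333:function:validate-address",  # hypothetical
        "InputTransformer": {
            "InputPathsMap": {"orderId": "$.detail.orderId", "address": "$.detail.shippingAddress"},
            "InputTemplate": '{"orderId": <orderId>, "address": <address>}',
        },
    }],
)
```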
Q
A company runs an application that uses Amazon RDS for PostgreSQL. The application receives traffic only on weekdays during business hours. The company wants to optimize costs and reduce operational overhead based on this usage.
Which solution will meet these requirements?
A. Use the Instance Scheduler on AWS to configure start and stop schedules.
B. Turn off automatic backups. Create weekly manual snapshots of the database.
C. Create a custom AWS Lambda function to start and stop the database based on minimum CPU utilization.
D. Purchase All Upfront Reserved DB Instances.
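Options A and C both come down to starting and stopping the DB instance around business hours; the Instance Scheduler on AWS packages that scheduling, while option C hand-rolls it. A rough sketch of a hand-rolled Lambda function driven by two EventBridge schedules rather than CPU utilization; the instance identifier and event payload shape are illustrative:

```python
import boto3

rds = boto3.client("rds")
DB_INSTANCE = "app-postgres"  # hypothetical DB instance identifier

def handler(event, context):
    # One EventBridge schedule invokes this with {"action": "start"} before business
    # hours on weekdays; another invokes it with {"action": "stop"} after hours.
    action = event.get("action")
    if action == "start":
        rds.start_db_instance(DBInstanceIdentifier=DB_INSTANCE)
    elif action == "stop":
        rds.stop_db_instance(DBInstanceIdentifier=DB_INSTANCE)
```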
Q
A company wants to isolate its workloads by creating an AWS account for each workload. The company needs a solution that centrally manages networking components for the workloads. The solution also must create accounts with automatic security controls (guardrails).
Which solution will meet these requirements with the LEAST operational overhead?
A. Use AWS Control Tower to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
B. Use AWS Organizations to deploy accounts. Create a networking account that has a VPC with private subnets and public subnets. Use AWS Resource Access Manager (AWS RAM) to share the subnets with the workload accounts.
C. Use AWS Control Tower to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
D. Use AWS Organizations to deploy accounts. Deploy a VPC in each workload account. Configure each VPC to route through an inspection VPC by using a transit gateway attachment.
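Options A and B above both keep the VPC in a central networking account and share its subnets through AWS Resource Access Manager. A minimal boto3 sketch of the share, run from the networking account; the subnet ARNs and organizational unit ARN are illustrative:

```python
import boto3

ram = boto3.client("ram")

ram.create_resource_share(
    name="workload-shared-subnets",
    resourceArns=[
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0aaa1111",  # hypothetical private subnet
        "arn:aws:ec2:us-east-1:111122223333:subnet/subnet-0bbb2222",  # hypothetical public subnet
    ],
    # Share with an organizational unit so new workload accounts get access automatically.
    principals=["arn:aws:organizations::111122223333:ou/o-example12345/ou-abcd-workloads"],
    allowExternalPrincipals=False,
)
```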
Q
A company runs database workloads on AWS that are the backend for the company’s customer portals. The company runs a Multi-AZ database cluster on Amazon RDS for PostgreSQL.
The company needs to implement a 30-day backup retention policy. The company currently has both automated RDS backups and manual RDS backups. The company wants to maintain both types of existing RDS backups that are less than 30 days old.
Which solution will meet these requirements MOST cost-effectively?
A. Configure the RDS backup retention policy to 30 days for automated backups by using AWS Backup. Manually delete manual backups that are older than 30 days.
B. Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days. Configure the RDS backup retention policy to 30 days for automated backups.
C. Configure the RDS backup retention policy to 30 days for automated backups. Manually delete manual backups that are older than 30 days.
D. Disable RDS automated backups. Delete automated backups and manual backups that are older than 30 days automatically by using AWS CloudFormation. Configure the RDS backup retention policy to 30 days for automated backups.
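Several of the options hinge on raising the automated backup retention period to 30 days. A minimal boto3 sketch of that change for a Multi-AZ DB cluster (the identifier is illustrative); for a single DB instance the equivalent call is modify_db_instance:

```python
import boto3

rds = boto3.client("rds")

rds.modify_db_cluster(
    DBClusterIdentifier="customer-portal-postgres",  # hypothetical cluster identifier
    BackupRetentionPeriod=30,                        # keep automated backups for 30 days
    ApplyImmediately=True,
)
# Manual snapshots are not governed by this setting; snapshots older than 30 days
# still have to be deleted separately.
```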
I downloaded the exam.
It's the best. Thanks, admin.