AWS-Solution Architect Associate-Q & A-PART 2

1) A health club is developing a mobile fitness app that allows customers to upload statistics and view their progress. Amazon Cognito is being used for authentication, authorization, and user management, and users will sign in with their Facebook IDs.
In order to securely store data in DynamoDB, the design should use temporary AWS credentials. What feature of Amazon Cognito is used to obtain temporary credentials to access AWS services?

SAML Identity Providers

User Pools

Identity Pools

Key Pairs

Ans:- Identity Pools

Explanation
• With an identity pool, users can obtain temporary AWS credentials to access AWS services, such as Amazon S3 and DynamoDB
• A user pool is a user directory in Amazon Cognito. With a user pool, users can sign in to web or mobile apps through Amazon Cognito, or federate through a third-party identity provider (IdP)
• SAML Identity Providers are supported IdPs for identity pools, but they are not the Cognito feature used to obtain temporary credentials for AWS services
• Key pairs are used in Amazon EC2 for access to instances
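The identity pool flow above can be sketched with boto3. The two-call sequence is GetId (exchange the IdP token for a Cognito identity ID) followed by GetCredentialsForIdentity (obtain temporary AWS credentials scoped by the pool's IAM role). The pool ID, token value, and helper name below are illustrative, not part of the question:

```python
# Sketch of the identity pool credential flow (hypothetical pool ID and
# Facebook token). Builds the request parameters for the two
# cognito-identity API calls.
def build_credential_requests(identity_pool_id, facebook_token):
    """Return the parameter dicts for GetId and GetCredentialsForIdentity."""
    logins = {"graph.facebook.com": facebook_token}
    get_id_params = {"IdentityPoolId": identity_pool_id, "Logins": logins}
    # The IdentityId below is filled in at runtime from the GetId response
    get_creds_params = {"IdentityId": "<from GetId response>", "Logins": logins}
    return get_id_params, get_creds_params

# At runtime (not executed here) the calls would look like:
# client = boto3.client("cognito-identity")
# identity = client.get_id(**get_id_params)
# creds = client.get_credentials_for_identity(
#     IdentityId=identity["IdentityId"], Logins=get_creds_params["Logins"])
```

The temporary credentials returned (access key, secret key, session token) are then used to sign DynamoDB requests.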

2) When using throttling controls with API Gateway what happens when request submissions exceed the steady-state request rate and burst limits?

API Gateway fails the limit-exceeding requests and returns “500 Internal Server Error” error responses to the client

The requests will be buffered in a cache until the load reduces

API Gateway fails the limit-exceeding requests and returns “429 Too Many Requests” error responses to the client

API Gateway drops the requests and does not return a response to the client

Ans:- API Gateway fails the limit-exceeding requests and returns “429 Too Many Requests” error responses to the client

Explanation
• You can throttle and monitor requests to protect your backend. Throttling rules are based on the number of requests per second for each HTTP method (GET, PUT, etc.) and can be configured at multiple levels, including globally and per service call
• When request submissions exceed the steady-state request rate and burst limits, API Gateway fails the limit-exceeding requests and returns 429 Too Many Requests error responses to the client

3) You have an Amazon RDS Multi-AZ deployment across two availability zones. An outage of the availability zone in which the primary RDS DB instance is running occurs. What actions will take place in this circumstance? (choose 2)

A failover will take place once the connection draining timer has expired

Due to the loss of network connectivity the process to switch to the standby replica cannot take place

The primary DB instance will switch over automatically to the standby replica

A manual failover of the DB instance will need to be initiated using Reboot with failover

The failover mechanism automatically changes the DNS record of the DB instance to point to the standby DB instance

Ans:- The primary DB instance will switch over automatically to the standby replica

The failover mechanism automatically changes the DNS record of the DB instance to point to the standby DB instance

Explanation
• Multi-AZ RDS creates a replica in another AZ and synchronously replicates to it (DR only)
• A failover may be triggered in the following circumstances:
o Loss of primary AZ or primary DB instance failure
o Loss of network connectivity on primary
o Compute (EC2) unit failure on primary
o Storage (EBS) unit failure on primary
o The primary DB instance is changed
o Patching of the OS on the primary DB instance
o Manual failover (reboot with failover selected on primary)
• During failover RDS automatically updates configuration (including DNS endpoint) to use the second node
• The process to failover is not reliant on network connectivity as it is designed for fault tolerance
• Connection draining timers are applicable to ELBs not RDS
• You do not need to manually failover the DB instance, multi-AZ has an automatic process as outlined above

4) You have been assigned the task of moving some sensitive documents into the AWS cloud. You need to ensure that the security of the documents is maintained. Which AWS features can help ensure that the sensitive documents cannot be read even if they are compromised? (choose 2)

S3 Server-Side Encryption

IAM Access Policy

S3 cross region replication

EBS snapshots

EBS encryption with Customer Managed Keys

Ans:- S3 Server-Side Encryption

EBS encryption with Customer Managed Keys

Explanation
• It is not specified what types of documents are being moved into the cloud or what services they will be placed on. Therefore we can assume that options include S3 and EBS. To prevent the documents from being read if they are compromised we need to encrypt them. Both of these services provide native encryption functionality to ensure security of the sensitive documents. With EBS you can use KMS-managed or customer-managed encryption keys. With S3 you can use client-side or server-side encryption
• IAM access policies can be used to control access but if the documents are somehow compromised they will not stop the documents from being read. For this we need encryption, and IAM access policies are not used for controlling encryption
• EBS snapshots are used for creating a point-in-time backup of data. They do maintain the encryption status of the data from the EBS volume but are not used for actually encrypting the data in the first place
• S3 cross-region replication can be used for fault tolerance but does not apply any additional security to the data
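The two encryption options above are requested as parameters on the relevant API calls. A sketch of the parameter dicts (bucket, key, and KMS key names are illustrative); S3 server-side encryption is set per object via `ServerSideEncryption`, and EBS encryption is set at volume creation:

```python
# Sketch of encryption-at-rest request parameters (names are illustrative).
def s3_put_object_params(bucket, key, body, kms_key_id=None):
    """Parameters for an S3 PutObject request with server-side encryption:
    SSE-KMS with a customer-managed key if one is given, else SSE-S3."""
    params = {"Bucket": bucket, "Key": key, "Body": body}
    if kms_key_id:
        params.update(ServerSideEncryption="aws:kms", SSEKMSKeyId=kms_key_id)
    else:
        params["ServerSideEncryption"] = "AES256"  # S3-managed keys (SSE-S3)
    return params

def ebs_create_volume_params(az, size_gib, kms_key_id):
    """Parameters for an EC2 CreateVolume request encrypted with a
    customer-managed KMS key."""
    return {"AvailabilityZone": az, "Size": size_gib,
            "Encrypted": True, "KmsKeyId": kms_key_id}
```

At runtime these dicts would be passed to `s3.put_object(**params)` and `ec2.create_volume(**params)` respectively.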

5) You have created a new VPC and setup an Auto Scaling Group to maintain a desired count of 2 EC2 instances. The security team has requested that the EC2 instances be located in a private subnet. To distribute load, you have to also setup an Internet-facing Application Load Balancer (ALB).
With your security team’s wishes in mind what else needs to be done to get this configuration to work? (choose 2)

Add an Elastic IP address to each EC2 instance in the private subnet

Add a NAT gateway to the private subnet

Attach an Internet Gateway to the private subnets

For each private subnet create a corresponding public subnet in the same AZ

Associate the public subnets with the ALB

Ans:- For each private subnet create a corresponding public subnet in the same AZ

Associate the public subnets with the ALB

Explanation
• ELB nodes have public IPs and route traffic to the private IP addresses of the EC2 instances. You need one public subnet in each AZ where the ELB is defined and the private subnets are located
• Attaching an Internet gateway (which is done at the VPC level, not the subnet level) or a NAT gateway will not assist as these are both used for outbound communications which is not the goal here
• ELBs talk to the private IP addresses of the EC2 instances so adding an Elastic IP address to the instance won’t help. Additionally Elastic IP addresses are used in public subnets to allow Internet access via an Internet Gateway

6) There is a problem with an EC2 instance that was launched by AWS Auto Scaling. The EC2 status checks have reported that the instance is “Impaired”. What action will AWS Auto Scaling take?

It will launch a new instance immediately and then mark the impaired one for replacement

Auto Scaling will wait for 300 seconds to give the instance a chance to recover

Auto Scaling performs its own status checks and does not integrate with EC2 status checks

It will mark the instance for termination, terminate it, and then launch a replacement

Ans:- It will mark the instance for termination, terminate it, and then launch a replacement

Explanation
• If any health check returns an unhealthy status the instance will be terminated. Unlike AZ rebalancing, termination of unhealthy instances happens first, then Auto Scaling attempts to launch new instances to replace terminated instances
• Auto Scaling will not launch a new instance immediately as it always terminates the unhealthy instance before launching a replacement
• Auto Scaling does not wait for 300 seconds, once the health check has failed the configured number of times the instance will be terminated
• Auto Scaling does integrate with EC2 status checks as well as having its own status checks

7) In your AWS VPC, you need to add a new subnet that will allow you to host a total of 20 EC2 instances.
Which of the following IPv4 CIDR blocks can you use for this scenario?

172.0.0.0/27

172.0.0.0/30

172.0.0.0/28

172.0.0.0/29

Ans:- 172.0.0.0/27

Explanation
• When you add a subnet to a VPC, you must specify an IPv4 CIDR block for the subnet
• The allowed subnet size is between a /16 netmask (65,536 IP addresses) and a /28 netmask (16 IP addresses)
• The subnet's CIDR block must not overlap with the CIDR block of any other subnet in the VPC
• The first four IP addresses and the last IP address in each subnet CIDR block are reserved by AWS and cannot be assigned to an instance
• A /27 subnet mask provides 32 addresses, of which 27 are usable, which is enough for the 20 instances required
• The following list shows total addresses for different subnet masks: /32 = 1 ; /31 = 2 ; /30 = 4 ; /29 = 8 ; /28 = 16 ; /27 = 32
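The arithmetic above can be checked with the standard library's `ipaddress` module, subtracting the 5 addresses AWS reserves in every subnet (network address, VPC router, DNS, future use, and the last address):

```python
import ipaddress

def usable_addresses(cidr):
    """Addresses in the block minus the 5 AWS reserves per subnet."""
    return max(ipaddress.ip_network(cidr).num_addresses - 5, 0)

# Only the /27 leaves room for 20 instances:
for cidr in ["172.0.0.0/27", "172.0.0.0/28", "172.0.0.0/29", "172.0.0.0/30"]:
    print(cidr, usable_addresses(cidr))
```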

8) You are putting together a design for a web-facing application. The application will be run on EC2 instances behind ELBs in multiple regions in an active/passive configuration. The website address the application runs on is digitalcloud.training. You will be using Route 53 to perform DNS resolution for the application.
How would you configure Route 53 in this scenario based on AWS best practices? (choose 2)

Use a Weighted Routing Policy

Use a Failover Routing Policy

Connect the ELBs using CNAME records

Set Evaluate Target Health to “No” for the primary

Connect the ELBs using Alias records

Ans:- Use a Failover Routing Policy

Connect the ELBs using Alias records

Explanation
• The failover routing policy is used for active/passive configurations. Alias records can be used to map the domain apex (digitalcloud.training) to the Elastic Load Balancers
• Weighted routing is not an active/passive routing policy. All records are active and the traffic is distributed according to the weighting
• You cannot use CNAME records for the domain apex record, you must use Alias records
• For Evaluate Target Health choose Yes for your primary record and No for your secondary record. For your primary record choose Yes for Associate with Health Check, then for Health Check to Associate select the health check that you created for your primary resource
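A sketch of the failover Alias record set, shaped like the `Changes` entries passed to Route 53's ChangeResourceRecordSets API (the ELB DNS name and hosted zone ID values are illustrative). The same helper is called once with `"PRIMARY"` plus a health check and once with `"SECONDARY"`:

```python
# Sketch of a Route 53 failover Alias record change (values illustrative).
def alias_failover_record(name, elb_dns, elb_zone_id, failover,
                          health_check_id=None):
    record = {
        "Name": name,
        "Type": "A",
        "SetIdentifier": f"{name}-{failover.lower()}",
        "Failover": failover,  # "PRIMARY" or "SECONDARY"
        "AliasTarget": {
            "HostedZoneId": elb_zone_id,  # the ELB's own hosted zone ID
            "DNSName": elb_dns,
            # Evaluate Target Health: Yes for primary, No for secondary
            "EvaluateTargetHealth": failover == "PRIMARY",
        },
    }
    if health_check_id:
        record["HealthCheckId"] = health_check_id
    return {"Action": "UPSERT", "ResourceRecordSet": record}
```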

9) You just attempted to restart a stopped EC2 instance and it immediately changed from a pending state to a terminated state. What are the most likely explanations? (choose 2)

An EBS snapshot is corrupt

The AMI is unsupported

You have reached the limit on the number of instances that you can launch in a region

AWS does not currently have enough available On-Demand capacity to service your request

You’ve reached your EBS volume limit

Ans:- An EBS snapshot is corrupt

You’ve reached your EBS volume limit

Explanation
• The following are a few reasons why an instance might immediately terminate:
• – You’ve reached your EBS volume limit
• – An EBS snapshot is corrupt
• – The root EBS volume is encrypted and you do not have permissions to access the KMS key for decryption
• – The instance store-backed AMI that you used to launch the instance is missing a required part (an image.part.xx file)
• It is possible that an instance type is not supported by an AMI and this can cause an “UnsupportedOperation” client error. However, in this case the instance was previously running (it is in a stopped state) so it is unlikely that this is the issue
• If AWS does not have capacity available an InsufficientInstanceCapacity error will be generated when you try to launch a new instance or restart a stopped instance
• If you’ve reached the limit on the number of instances you can launch in a region you get an InstanceLimitExceeded error when you try to launch a new instance or restart a stopped instance

10) You’re trying to explain to a colleague typical use cases where you can use the Simple Workflow Service (SWF). Which of the scenarios below would be valid? (choose 2)

Managing a multi-step and multi-decision checkout process for a mobile application

Providing a reliable, highly-scalable, hosted queue for storing messages in transit between EC2 instances

For web applications that require content delivery networks

Sending notifications via SMS when an EC2 instance reaches a certain threshold

Coordinating business process workflows across distributed application components

Ans:- Managing a multi-step and multi-decision checkout process for a mobile application

Coordinating business process workflows across distributed application components

Explanation
• Amazon Simple Workflow Service (SWF) is a web service that makes it easy to coordinate work across distributed application components
• SWF enables applications for a range of use cases, including media processing, web application back-ends, business process workflows, and analytics pipelines, to be designed as a coordination of tasks
• You should use Amazon SNS for sending SMS messages
• You should use CloudFront if you need a CDN
• You should use SQS for storing messages in a queue

11) A Solutions Architect needs to attach an Elastic Network Interface (ENI) to an EC2 instance. This can be performed when the instance is in different states. What state does “warm attach” refer to?

Attaching an ENI to an instance when it is idle

Attaching an ENI to an instance when it is running

Attaching an ENI to an instance during the launch process

Attaching an ENI to an instance when it is stopped

Ans:- Attaching an ENI to an instance when it is stopped

Explanation
• ENIs can be “hot attached” to running instances
• ENIs can be “warm-attached” when the instance is stopped
• ENIs can be “cold-attached” when the instance is launched

12) You are configuring Route 53 for a customer’s website. Their web servers are behind an Internet-facing ELB. What record set would you create to point the customer’s DNS zone apex record at the ELB?

Create an A record pointing to the DNS name of the load balancer

Create an A record that is an Alias, and select the ELB DNS as a target

Create a PTR record pointing to the DNS name of the load balancer

Create a CNAME record that is an Alias, and select the ELB DNS as a target

Ans:- Create an A record that is an Alias, and select the ELB DNS as a target

Explanation
• An Alias record can be used for resolving apex or naked domain names (e.g. example.com). You can create an A record that is an Alias that uses the customer’s website zone apex domain name and map it to the ELB DNS name
• A CNAME record can’t be used for resolving apex or naked domain names
• A standard A record maps the DNS domain name to the IP address of a resource. You cannot obtain the IP of the ELB so you must use an Alias record which maps the DNS domain name of the customer’s website to the ELB DNS name (rather than its IP)
• PTR records are reverse lookup records where you use the IP to find the DNS name

13) You need to create an EBS volume to mount to an existing EC2 instance for an application that will be writing structured data to the volume. The application vendor suggests that the performance of the disk should be up to 3 IOPS per GB. You expect the capacity of the volume to grow to 2TB.
Taking into account cost effectiveness, which EBS volume type would you select?

General Purpose (GP2)

Cold HDD (SC1)

Throughput Optimized HDD (ST1)

Provisioned IOPS (IO1)

Ans:- General Purpose (GP2)

Explanation
• SSD, General Purpose (GP2) provides enough IOPS to support this requirement and is the most economical option that does. Using Provisioned IOPS would be more expensive and the other two options do not provide an SLA for IOPS
• More information on the volume types:
• – SSD, General Purpose (GP2) provides 3 IOPS per GB up to 16,000 IOPS. Volume size is 1 GB to 16 TB
• – Provisioned IOPS (IO1) provides the IOPS you assign up to 50 IOPS per GiB and up to 64,000 IOPS per volume. Volume size is 4 GB to 16 TB
• – Throughput Optimized HDD (ST1) provides up to 500 IOPS per volume but does not provide an SLA for IOPS
• – Cold HDD (SC1) provides up to 250 IOPS per volume but does not provide an SLA for IOPS
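The sizing check for this question is simple arithmetic against the published GP2 performance rules (3 IOPS per GiB baseline, floor of 100 IOPS, cap of 16,000 IOPS):

```python
def gp2_baseline_iops(size_gib):
    """GP2 baseline performance: 3 IOPS per GiB, minimum 100 IOPS,
    capped at 16,000 IOPS per volume."""
    return min(max(3 * size_gib, 100), 16_000)

# A 2 TB volume (2,048 GiB) comfortably meets the vendor's 3 IOPS/GB
# guidance without paying for Provisioned IOPS:
print(gp2_baseline_iops(2_048))  # 6144, well under the 16,000 cap
```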

14) You are a Solutions Architect at Digital Cloud Training. A new client who has not used cloud computing has asked you to explain how AWS works. The client wants to know what service is provided that will provide a virtual network infrastructure that loosely resembles a traditional data center but has the capacity to scale more easily?

Elastic Compute Cloud

Virtual Private Cloud

Elastic Load Balancing

Direct Connect

Ans:- Virtual Private Cloud

Explanation
• Amazon VPC lets you provision a logically isolated section of the Amazon Web Services (AWS) cloud where you can launch AWS resources in a virtual network that you define. It is analogous to having your own DC inside AWS and provides complete control over the virtual networking environment including selection of IP ranges, creation of subnets, and configuration of route tables and gateways. A VPC is logically isolated from other VPCs on AWS
• Elastic Load Balancing automatically distributes incoming application traffic across multiple targets, such as Amazon EC2 instances, containers, IP addresses, and Lambda functions
• Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides secure, resizable compute capacity in the cloud
• AWS Direct Connect is a cloud service solution that makes it easy to establish a dedicated network connection from your premises to AWS

15) Your organization has a data lake on S3 and you need to find a solution for performing in-place queries of the data assets in the data lake. The requirement is to perform both data discovery and SQL querying, and complex queries from a large number of concurrent users using BI tools.
What is the BEST combination of AWS services to use in this situation? (choose 2)

Amazon Athena for the ad hoc SQL querying

AWS Glue for the ad hoc SQL querying

RedShift Spectrum for the complex queries

AWS Lambda for the complex queries

Ans:- Amazon Athena for the ad hoc SQL querying

RedShift Spectrum for the complex queries

Explanation
• Performing in-place queries on a data lake allows you to run sophisticated analytics queries directly on the data in S3 without having to load it into a data warehouse
• You can use both Athena and Redshift Spectrum against the same data assets. You would typically use Athena for ad hoc data discovery and SQL querying, and then use Redshift Spectrum for more complex queries and scenarios where a large number of data lake users want to run concurrent BI and reporting workloads
• AWS Lambda is a serverless technology for running functions, it is not the best solution for running analytics queries
• AWS Glue is an ETL service

16) You created a second ENI (eth1) interface when launching an EC2 instance. You would like to terminate the instance and have not made any changes.
What will happen to the attached ENIs?

Both eth0 and eth1 will be terminated with the instance

Both eth0 and eth1 will persist

eth1 will be terminated, but eth0 will persist

eth1 will persist but eth0 will be terminated

Ans:- eth1 will persist but eth0 will be terminated

Explanation
• By default Eth0 is the only Elastic Network Interface (ENI) created with an EC2 instance when launched. You can add additional interfaces to EC2 instances (number dependent on instances family/type). Default interfaces are terminated with instance termination. Manually added interfaces are not terminated by default

17) An application you manage in your VPC uses an Auto Scaling Group that spans 3 AZs and there are currently 4 EC2 instances running in the group. What actions will Auto Scaling take, by default, if it needs to terminate an EC2 instance? (choose 2)

Terminate the instance with the least active network connections. If multiple instances meet this criterion, one will be randomly selected

Send an SNS notification, (if configured)

Terminate an instance in the AZ which currently has 2 running EC2 instances

Randomly select one of the 3 AZs, and then terminate an instance in that AZ

Wait for the cooldown period and then terminate the instance that has been running the longest

Ans:- Send an SNS notification, (if configured)

Terminate an instance in the AZ which currently has 2 running EC2 instances

Explanation
• Auto Scaling can perform rebalancing when it finds that the number of instances across AZs is not balanced. Auto Scaling rebalances by launching new EC2 instances in the AZs that have fewer instances first, only then will it start terminating instances in AZs that had more instances
• Auto Scaling can be configured to send an SNS email when:
• – An instance is launched
• – An instance is terminated
• – An instance fails to launch
• – An instance fails to terminate
• Auto Scaling does not terminate the instance that has been running the longest
• Auto Scaling will only terminate an instance randomly after it has first gone through several other selection steps in its default termination policy (such as selecting the AZ with the most instances and then the instance with the oldest launch configuration)

18) Your manager has asked you to explain the benefits of using IAM groups. Which of the below statements are valid benefits? (choose 2)

Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users

Provide the ability to nest groups to create an organizational hierarchy

Provide the ability to create custom permission policies

Enables you to attach IAM permission policies to more than one user at a time

You can restrict access to the subnets in your VPC

ANs:- Groups let you specify permissions for multiple users, which can make it easier to manage the permissions for those users

Enables you to attach IAM permission policies to more than one user at a time

Explanation
• Groups are collections of users and have policies attached to them. A group is not an identity and cannot be identified as a principal in an IAM policy. Use groups to assign permissions to users. Use the principle of least privilege when assigning permissions. You cannot nest groups (groups within groups)
• You cannot use groups to restrict access to subnets in your VPC
• Custom permission policies are created using IAM policies. These are then attached to users, groups or roles

19) An application you are designing will gather data from a website hosted on an EC2 instance and write the data to an S3 bucket. The application will use API calls to interact with the EC2 instance and S3 bucket.
Which Amazon S3 access control methods will be the MOST operationally efficient? (choose 2)

Use key pairs

Grant AWS Management Console access

Grant programmatic access

Create an IAM policy

Create a bucket policy

Ans:- Grant programmatic access

Create an IAM policy

Explanation
• Policies are documents that define permissions and can be applied to users, groups and roles. Policy documents are written in JSON (key value pair that consists of an attribute and a value)
• Within an IAM policy you can grant either programmatic access or AWS Management Console access to Amazon S3 resources
• Key pairs are used for access to EC2 instances; a bucket policy would not assist with access control with EC2 and granting management console access will not assist the application which is making API calls to the services
• AWS recommend using IAM policies instead of S3 bucket policies in the following circumstances:
o You need to control access to AWS services other than S3. IAM policies will be easier to manage since you can centrally manage all of your permissions in IAM, instead of spreading them between IAM and S3
o You have numerous S3 buckets each with different permissions requirements. IAM policies will be easier to manage since you don’t have to define a large number of S3 bucket policies and can instead rely on fewer, more detailed IAM policies
o You prefer to keep access control policies in the IAM environment

20) You are creating a CloudFormation template that will provision a new EC2 instance and new EBS volume. What do you need to specify to associate the block store with the instance?

The EC2 physical ID

Both the EC2 logical ID and the EBS logical ID

Both the EC2 physical ID and the EBS physical ID

The EC2 logical ID

Ans:- Both the EC2 logical ID and the EBS logical ID

Explanation
• Logical IDs are used to reference resources within the template
• Physical IDs identify resources outside of AWS CloudFormation templates, but only after the resources have been created
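A minimal CloudFormation sketch of the association; the logical IDs (MyInstance, MyVolume, MyAttachment) and the AMI ID are illustrative. The `AWS::EC2::VolumeAttachment` resource references both resources by their logical IDs via `!Ref`:

```yaml
Resources:
  MyInstance:
    Type: AWS::EC2::Instance
    Properties:
      ImageId: ami-12345678        # illustrative AMI ID
      InstanceType: t3.micro
  MyVolume:
    Type: AWS::EC2::Volume
    Properties:
      Size: 100
      AvailabilityZone: !GetAtt MyInstance.AvailabilityZone
  MyAttachment:
    Type: AWS::EC2::VolumeAttachment
    Properties:
      InstanceId: !Ref MyInstance  # EC2 logical ID
      VolumeId: !Ref MyVolume      # EBS logical ID
      Device: /dev/sdf
```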

21) A Solutions Architect is designing a three-tier web application that includes an Auto Scaling group of Amazon EC2 Instances running behind an ELB Classic Load Balancer. The security team requires that all web servers must be accessible only through the Elastic Load Balancer and that none of the web servers are directly accessible from the Internet. How should the Architect meet these requirements?

Install a Load Balancer on an Amazon EC2 instance

Create an Amazon CloudFront distribution in front of the Elastic Load Balancer

Configure the web tier security group to allow only traffic from the Elastic Load Balancer

Configure the web servers’ security group to deny traffic from the Internet

Ans:- Configure the web tier security group to allow only traffic from the Elastic Load Balancer

Explanation
• The web servers must be kept private so they will not have public IP addresses. The ELB is Internet-facing so it will be publicly accessible via its DNS address (and corresponding public IP). To restrict web servers to be accessible only through the ELB you can configure the web tier security group to allow only traffic from the ELB. You would normally do this by adding the ELB's security group to the rule on the web tier security group
• This scenario is using an ELB Classic Load Balancer and these cannot be installed on EC2 instances (at least not by you, in reality all ELBs are actually running on EC2 instances but these are transparent to the AWS end user)
• You cannot create deny rules in security groups
• CloudFront distributions are used for caching content to improve performance for users on the Internet. They are not security devices to be used for restricting access to EC2 instances
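The recommended rule can be sketched as the parameters for an EC2 AuthorizeSecurityGroupIngress request (the security group IDs are illustrative). The key point is that the source is the ELB's security group rather than a CIDR range:

```python
# Sketch of the web tier ingress rule (security group IDs illustrative).
def elb_only_ingress(web_sg_id, elb_sg_id, port=80):
    """Allow inbound traffic on `port` to the web tier only when it
    originates from members of the ELB's security group."""
    return {
        "GroupId": web_sg_id,
        "IpPermissions": [{
            "IpProtocol": "tcp",
            "FromPort": port,
            "ToPort": port,
            # Referencing the ELB's SG instead of 0.0.0.0/0 blocks
            # direct access from the Internet
            "UserIdGroupPairs": [{"GroupId": elb_sg_id}],
        }],
    }
```

At runtime the dict would be passed to `ec2.authorize_security_group_ingress(**params)`.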

22) You need to run a production batch process quickly that will use several EC2 instances. The process cannot be interrupted and must be completed within a short time period.
What is likely to be the MOST cost-effective choice of EC2 instance type to use for this requirement?

Flexible instances

Spot instances

On-demand instances

Reserved instances

Ans:- On-demand instances

Explanation
• The key requirements here are that you need to deploy several EC2 instances quickly to run the batch process and you must ensure that the job completes. The on-demand pricing model is the best for this ad-hoc requirement as though spot pricing may be cheaper you cannot afford to risk that the instances are terminated by AWS when the market price increases
• Spot instances provide a very low hourly compute cost and are good when you have flexible start and end times. They are often used for use cases such as grid computing and high-performance computing (HPC)
• Reserved instances are used for longer more stable requirements where you can get a discount for a fixed 1 or 3 year term. This pricing model is not good for temporary requirements
• There is no such thing as a “flexible instance”

23) You would like to implement a method of automating the creation, retention, and deletion of backups for the EBS volumes in your VPC. What is the easiest way to automate these tasks using AWS tools?

Configure EBS volume replication to create a backup on S3

Create a scheduled job and run the AWS CLI command “create-snapshot”

Use the EBS Data Lifecycle Manager (DLM) to manage snapshots of the volumes

Create a scheduled job and run the AWS CLI command “create-backup”

Ans:- Use the EBS Data Lifecycle Manager (DLM) to manage snapshots of the volumes

Explanation
• You backup EBS volumes by taking snapshots. This can be automated via the AWS CLI command “create-snapshot”. However the question is asking for a way to automate not just the creation of the snapshot but the retention and deletion too. The EBS Data Lifecycle Manager (DLM) can automate all of these actions for you and this can be performed centrally from within the management console
• Snapshots capture a point-in-time state of an instance and are stored on Amazon S3. They do not provide granular backup (not a replacement for backup software)
• You cannot configure volume replication for EBS volumes using AWS tools
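A sketch of a DLM policy document covering all three tasks: it snapshots tagged volumes daily and automatically deletes snapshots beyond the retention count. This is the shape of the PolicyDetails structure passed to `create-lifecycle-policy`; the tag key/value and schedule values are illustrative:

```json
{
  "ResourceTypes": ["VOLUME"],
  "TargetTags": [{"Key": "Backup", "Value": "true"}],
  "Schedules": [{
    "Name": "DailySnapshots",
    "CreateRule": {"Interval": 24, "IntervalUnit": "HOURS", "Times": ["03:00"]},
    "RetainRule": {"Count": 7}
  }]
}
```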

24) You are discussing EC2 with a colleague and need to describe the differences between EBS-backed instances and Instance store-backed instances. Which of the statements below would be valid descriptions? (choose 2)

EBS volumes can be detached and reattached to other EC2 instances

For both types of volume rebooting the instances will result in data loss

Instance store volumes can be detached and reattached to other EC2 instances

By default, root volumes for both types will be retained on termination unless you configured otherwise

On an EBS-backed instance, the default action is for the root EBS volume to be deleted upon termination

Ans:- EBS volumes can be detached and reattached to other EC2 instances

On an EBS-backed instance, the default action is for the root EBS volume to be deleted upon termination

Explanation
• On an EBS-backed instance, the default action is for the root EBS volume to be deleted upon termination
• EBS volumes can be detached and reattached to other EC2 instances
• Instance store volumes cannot be detached and reattached to other EC2 instances
• When rebooting the instances for both types data will not be lost
• By default, root volumes for both types will be deleted on termination unless you configured otherwise

25) You are designing solutions that will utilize CloudFormation templates and your manager has asked how much extra will it cost to use CloudFormation to deploy resources?

Amazon charge a flat fee for each time you use CloudFormation

CloudFormation is charged per hour of usage

The cost is based on the size of the template

There is no additional charge for AWS CloudFormation, you only pay for the AWS resources that are created

Ans:- There is no additional charge for AWS CloudFormation, you only pay for the AWS resources that are created

Explanation
• There is no additional charge for AWS CloudFormation. You pay for AWS resources (such as Amazon EC2 instances, Elastic Load Balancing load balancers, etc.) created using AWS CloudFormation in the same manner as if you created them manually. You only pay for what you use, as you use it; there are no minimum fees and no required upfront commitments
• There is no flat fee, per hour usage costs or charges applicable to templates

26) You are creating a series of environments within a single VPC. You need to implement a system of categorization that allows for identification of EC2 resources by business unit, owner, or environment.
Which AWS feature allows you to do this?

Tags

Metadata

Parameters

Custom filters

Ans:- Tags

Explanation
• A tag is a label that you assign to an AWS resource. Each tag consists of a key and an optional value, both of which you define. Tags enable you to categorize your AWS resources in different ways, for example, by purpose, owner, or environment
• Instance metadata is data about your instance that you can use to configure or manage the running instance
• Parameters and custom filters are not used for categorization
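Categorization by tag can be sketched with plain filtering logic over the `{Key, Value}` tag dicts that the EC2 API returns (the tag keys and instance IDs below are illustrative):

```python
# Sketch of filtering EC2 resources by tag (tag names illustrative).
def filter_by_tag(resources, key, value):
    """Return the resources carrying a tag with the given key and value."""
    return [r for r in resources
            if any(t["Key"] == key and t["Value"] == value
                   for t in r.get("Tags", []))]

instances = [
    {"InstanceId": "i-1", "Tags": [{"Key": "Environment", "Value": "prod"},
                                   {"Key": "Owner", "Value": "finance"}]},
    {"InstanceId": "i-2", "Tags": [{"Key": "Environment", "Value": "dev"}]},
]
print([r["InstanceId"] for r in filter_by_tag(instances, "Environment", "prod")])
```

The same key/value filtering is what the console and `describe-instances --filters "Name=tag:Environment,Values=prod"` perform server-side.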

27) You recently noticed that your Network Load Balancer (NLB) in one of your VPCs is not distributing traffic evenly between EC2 instances in your AZs. There are an odd number of EC2 instances spread across two AZs. The NLB is configured with a TCP listener on port 80 and is using active health checks.
What is the most likely problem?

NLB can only load balance within a single AZ

Health checks are failing in one AZ due to latency

Cross-zone load balancing is disabled

There is no HTTP listener

Ans:- Cross-zone load balancing is disabled

Explanation
• Without cross-zone load balancing enabled, the NLB distributes traffic 50/50 between the AZs. As there is an odd number of instances across the two AZs, traffic will be distributed unevenly between instances. Enabling cross-zone load balancing ensures traffic is distributed evenly between all available instances in all AZs
• If health checks fail this will cause the NLB to stop sending traffic to these instances. However, the health check packets are very small and it is unlikely that latency would be the issue within a region
• Listeners are used to receive incoming connections. An NLB listens on TCP not on HTTP therefore having no HTTP listener is not the issue here
• An NLB can load balance across multiple AZs just like the other ELB types
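
The effect of cross-zone load balancing can be worked through numerically. The sketch below uses hypothetical instance counts (2 instances in one AZ, 1 in the other) to show why per-instance traffic is uneven without cross-zone load balancing and even with it.

```python
def traffic_share(az_counts, cross_zone):
    """Return {instance_name: fraction of total traffic}.

    Without cross-zone load balancing each AZ's load balancer node first
    receives an equal share of traffic, then splits it among local targets.
    With it enabled, traffic is spread evenly across all targets in all AZs.
    """
    shares = {}
    total = sum(az_counts.values())
    for az, count in az_counts.items():
        for i in range(count):
            name = f"{az}-{i}"
            if cross_zone:
                shares[name] = 1 / total
            else:
                shares[name] = (1 / len(az_counts)) / count
    return shares

print(traffic_share({"az-a": 2, "az-b": 1}, cross_zone=False))
# az-a instances get 25% each while the lone az-b instance gets 50%
print(traffic_share({"az-a": 2, "az-b": 1}, cross_zone=True))
# every instance gets an equal ~33% share
```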

28) To increase the resiliency of your RDS DB instance, you decided to enable Multi-AZ. Where will the new standby RDS instance be created?

In the same AWS Region but in a different AZ for high availability

You must specify the location when configuring Multi-AZ

In a different AWS Region to protect against Region failures

In another subnet within the same AZ

Ans:- In the same AWS Region but in a different AZ for high availability

Explanation
• Multi-AZ RDS creates a replica in another AZ within the same region and synchronously replicates to it (DR only). You cannot choose which AZ in the region will be chosen to create the standby DB instance

29) You are building a small web application running on EC2 that will be serving static content. The user base is spread out globally and speed is important. Which AWS service can deliver the best user experience cost-effectively and reduce the load on the web server?

Amazon S3

Amazon RedShift

Amazon CloudFront

Amazon EBS volume

Ans:- Amazon CloudFront

Explanation
• This is a good use case for CloudFront as the user base is spread out globally and CloudFront can cache the content closer to users and also reduce the load on the web server running on EC2
• Amazon S3 is very cost-effective, however a bucket is located in a single region so performance would not be optimal for a globally distributed user base
• EBS is not the most cost-effective storage solution and the data would be located in a single region, so latency could be an issue
• Amazon RedShift is a data warehouse and is not suitable in this solution

30) You are running an application on EC2 instances in a private subnet of your VPC. You would like to connect the application to Amazon API Gateway. For security reasons, you need to ensure that no traffic traverses the Internet and need to ensure all traffic uses private IP addresses only.
How can you achieve this?

Create a private API using an interface VPC endpoint

Create a public VIF on a Direct Connect connection

Create a NAT gateway

Add the API gateway to the subnet the EC2 instances are located in

Ans:- Create a private API using an interface VPC endpoint

Explanation
• An Interface endpoint uses AWS PrivateLink and is an elastic network interface (ENI) with a private IP address that serves as an entry point for traffic destined to a supported service. Using PrivateLink you can connect your VPC to supported AWS services, services hosted by other AWS accounts (VPC endpoint services), and supported AWS Marketplace partner services
• You do not need to implement Direct Connect and create a public VIF. Public IP addresses are used in public VIFs and the question requests that only private addresses are used
• You cannot add API Gateway to the subnet the EC2 instances are in, it is a public service with a public endpoint
• NAT Gateways are used to provide Internet access for EC2 instances in private subnets so are of no use in this solution

31) A client has made some updates to their web application. The application uses an Auto Scaling Group to maintain a group of several EC2 instances. The application has been modified and a new AMI must be used for launching any new instances.
What do you need to do to add the new AMI?

Suspend Auto Scaling and replace the existing AMI

Create a new launch configuration that uses the AMI and update the ASG to use the new launch configuration

Create a new target group that uses a new launch configuration with the new AMI

Modify the existing launch configuration to add the new AMI

Ans:- Create a new launch configuration that uses the AMI and update the ASG to use the new launch configuration

Explanation
• A launch configuration is the template used to create new EC2 instances and includes parameters such as instance family, instance type, AMI, key pair and security groups
• You cannot edit a launch configuration once defined. In this case you can create a new launch configuration that uses the new AMI and any new instances that are launched by the ASG will use the new AMI
• Suspending scaling processes can be useful when you want to investigate a configuration problem or other issue with your web application and then make changes to your application, without invoking the scaling processes. It is not useful in this situation
• A target group is a concept associated with an ELB not Auto Scaling
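
The immutability of launch configurations is the crux of this question, and it can be modeled in a short sketch. The classes and names below are a hypothetical local model, not real boto3 calls — they show why the fix is "create new, then re-point the ASG" rather than "edit in place".

```python
class LaunchConfiguration:
    """Immutable template: once created, its AMI can never be changed."""
    def __init__(self, name, ami_id):
        self.name = name
        self.ami_id = ami_id

class AutoScalingGroup:
    def __init__(self, launch_configuration):
        self.launch_configuration = launch_configuration

    def launch_instance(self):
        # New instances always use the ASG's *current* launch configuration.
        return {"ami": self.launch_configuration.ami_id}

asg = AutoScalingGroup(LaunchConfiguration("lc-v1", "ami-1111"))
# Application updated: create a NEW launch configuration with the new AMI...
new_lc = LaunchConfiguration("lc-v2", "ami-2222")
# ...and update the ASG to use it (existing instances are not replaced).
asg.launch_configuration = new_lc
print(asg.launch_instance())  # {'ami': 'ami-2222'}
```

With the real APIs the equivalent steps would be `create-launch-configuration` followed by `update-auto-scaling-group`.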

32) You are creating a design for a two-tier application with a MySQL RDS back-end. The performance requirements of the database tier are hard to quantify until the application is running and you are concerned about right-sizing the database.
What methods of scaling are possible after the MySQL RDS database is deployed? (choose 2)

Horizontal scaling for write capacity by enabling Multi-AZ

Horizontal scaling for read capacity by creating a read-replica

Vertical scaling for read and write by choosing a larger instance size

Vertical scaling for read and write by using Transfer Acceleration

Horizontal scaling for read and write by enabling Multi-Master RDS DB

Ans:- Horizontal scaling for read capacity by creating a read-replica

Vertical scaling for read and write by choosing a larger instance size

Explanation
• Relational databases can scale vertically (e.g. upgrading to a larger RDS DB instance)
• For read-heavy use cases, you can scale horizontally using read replicas
• There is no such thing as a Multi-Master MySQL RDS DB (there is for Aurora)
• You cannot scale write capacity by enabling Multi-AZ as only one DB is active and can be written to
• Transfer Acceleration is a feature of S3 for fast uploads of objects
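
The read-replica pattern usually pairs with read/write splitting in the application. The sketch below uses hypothetical endpoint names to show the idea: writes always go to the primary (scaled vertically by resizing the instance), while reads rotate across replicas (scaled horizontally by adding more).

```python
import itertools

class DatabaseRouter:
    """Route writes to the primary, spread reads over read replicas."""
    def __init__(self, primary, replicas):
        self.primary = primary
        self._replica_cycle = itertools.cycle(replicas)

    def endpoint_for(self, operation):
        if operation == "write":
            return self.primary            # vertical scaling: resize this instance
        return next(self._replica_cycle)   # horizontal scaling: add more replicas

router = DatabaseRouter(
    "primary.db.local", ["replica-1.db.local", "replica-2.db.local"]  # hypothetical
)
print(router.endpoint_for("write"))  # primary.db.local
print(router.endpoint_for("read"))   # replica-1.db.local
print(router.endpoint_for("read"))   # replica-2.db.local
```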

33) You are trying to clean up your unused EBS volumes and snapshots to save some space and cost. How many of the most recent snapshots of an EBS volume need to be maintained to guarantee that you can recreate the full EBS volume from the snapshot?

Only the most recent snapshot. Snapshots are incremental, but the deletion process will ensure that no data is lost

Two snapshots, the oldest and most recent snapshots

The oldest snapshot, as this references data in all other snapshots

You must retain all snapshots as the process is incremental and therefore data is required from each snapshot

Ans:- Only the most recent snapshot. Snapshots are incremental, but the deletion process will ensure that no data is lost

Explanation
• Snapshots capture a point-in-time state of a volume. If you make periodic snapshots of a volume, the snapshots are incremental, which means that only the blocks on the device that have changed after your last snapshot are saved in the new snapshot
• Even though snapshots are saved incrementally, the snapshot deletion process is designed so that you need to retain only the most recent snapshot in order to restore the volume
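
A small model makes the "incremental yet only the latest is needed" behavior concrete. This is an illustrative sketch, not the real EBS mechanism: each snapshot stores only changed blocks, and deletion consolidates still-referenced blocks into the next snapshot so the most recent one can always rebuild the full volume.

```python
snapshots = []  # oldest first; each entry holds only blocks changed since the previous

def restore(index):
    """Rebuild the full volume by replaying snapshots 0..index."""
    volume = {}
    for snap in snapshots[: index + 1]:
        volume.update(snap)
    return volume

def take_snapshot(volume_blocks):
    """Save only blocks that differ from the state of the last snapshot."""
    base = restore(len(snapshots) - 1) if snapshots else {}
    delta = {b: d for b, d in volume_blocks.items() if base.get(b) != d}
    snapshots.append(delta)

def delete_snapshot(index):
    """Deletion consolidates referenced data into the next snapshot — no data loss."""
    removed = snapshots.pop(index)
    if index < len(snapshots):
        snapshots[index] = {**removed, **snapshots[index]}  # newer blocks win

take_snapshot({"block0": "a", "block1": "b"})
take_snapshot({"block0": "a", "block1": "B"})   # only block1 changed
print(snapshots[1])                              # {'block1': 'B'} — incremental
delete_snapshot(0)                               # drop the oldest snapshot
print(restore(len(snapshots) - 1))               # {'block0': 'a', 'block1': 'B'}
```

After deleting the oldest snapshot, the remaining (most recent) snapshot still restores the complete volume.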

34) Which AWS service does API Gateway integrate with to enable users from around the world to achieve the lowest possible latency for API requests and responses?

S3 Transfer Acceleration

Direct Connect

CloudFront

Lambda

Ans:- CloudFront

Explanation
• API Gateway uses CloudFront as its public endpoint, which reduces latency for globally distributed users and provides protection against distributed denial of service (DDoS) attacks
• Direct Connect provides a private network into AWS from your data center
• S3 Transfer Acceleration is not used with API Gateway, it is used to accelerate uploads of S3 objects
• Lambda is not used to reduce latency for API requests

35) Your company has multiple AWS accounts for each environment (Prod, Dev, Test etc.). You would like to copy an EBS snapshot from DEV to PROD. The snapshot is from an EBS volume that was encrypted with a custom key.
What steps do you need to take to share the encrypted EBS snapshot with the Prod account? (choose 2)

Make a copy of the EBS volume and unencrypt the data in the process

Use CloudHSM to distribute the encryption keys used to encrypt the volume

Share the custom key used to encrypt the volume

Create a snapshot of the unencrypted volume and share it with the Prod account

Modify the permissions on the encrypted snapshot to share it with the Prod account

Ans:- Share the custom key used to encrypt the volume

Modify the permissions on the encrypted snapshot to share it with the Prod account

Explanation
• When an EBS volume is encrypted with a custom key you must share the custom key with the PROD account. You also need to modify the permissions on the snapshot to share it with the PROD account. The PROD account must copy the snapshot before they can then create volumes from the snapshot
• You cannot share encrypted volumes created using a default CMK key and you cannot change the CMK key that is used to encrypt a volume
• CloudHSM is used for key management and storage but not distribution
• You do not need to decrypt the data as there is a workable solution that keeps the data secure at all times

36) You are a Solutions Architect at Digital Cloud Training. One of your customers runs an application on-premise that stores large media files. The data is mounted to different servers using either the SMB or NFS protocols. The customer is having issues with scaling the storage infrastructure on-premise and is looking for a way to offload the data set into the cloud whilst retaining a local cache for frequently accessed content.
Which of the following is the best solution?

Use the AWS Storage Gateway File Gateway

Create a script that migrates infrequently used data to S3 using multi-part upload

Establish a VPN and use the Elastic File System (EFS)

Use the AWS Storage Gateway Volume Gateway in cached volume mode

Ans:- Use the AWS Storage Gateway File Gateway

Explanation
• File gateway provides a virtual on-premises file server, which enables you to store and retrieve files as objects in Amazon S3. It can be used for on-premises applications, and for Amazon EC2-resident applications that need file storage in S3 for object based workloads. Used for flat files only, stored directly on S3. File gateway offers SMB or NFS-based access to data in Amazon S3 with local caching
• The AWS Storage Gateway Volume Gateway in cached volume mode is a block-based (not file-based) solution, so you cannot mount the storage with the SMB or NFS protocols. With cached volume mode, the entire dataset is stored on S3 and a cache of the most frequently accessed data is kept on-site
• You could mount EFS over a VPN but it would not provide you a local cache of the data
• Creating a script that migrates infrequently used data to S3 is possible, but that data would then not be indexed on the primary filesystem so you wouldn’t have a method of retrieving it without developing some code to pull it back from S3. This is not the best solution

37) A three-tier application running in your VPC uses Auto Scaling for maintaining a desired count of EC2 instances. One of the EC2 instances just reported an EC2 Status Check status of Impaired. Once this information is reported to Auto Scaling, what action will be taken?

A new instance will immediately be launched, then the impaired instance will be terminated

Auto Scaling waits for the health check grace period and then terminates the instance

Auto Scaling must verify with the ELB status checks before taking any action

The impaired instance will be terminated, then a replacement will be launched

Ans:- The impaired instance will be terminated, then a replacement will be launched

Explanation
• By default Auto Scaling uses EC2 status checks
• Unlike AZ rebalancing, termination of unhealthy instances happens first, then Auto Scaling attempts to launch new instances to replace terminated instances
• Auto Scaling does not wait for the health check grace period or verify with ELB before taking any action

38) You are developing some code that uses a Lambda function and you would like to enable the function to connect to an ElastiCache cluster within a VPC that you own. What VPC-specific information must you include in your function to enable this configuration? (choose 2)

VPC Security Group IDs

VPC Peering IDs

VPC Logical IDs

VPC Subnet IDs

VPC Route Table IDs

Ans:- VPC Security Group IDs

VPC Subnet IDs

Explanation
• To enable your Lambda function to access resources inside your private VPC, you must provide additional VPC-specific configuration information that includes VPC subnet IDs and security group IDs. AWS Lambda uses this information to set up elastic network interfaces (ENIs) that enable your function to connect to resources in the VPC
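
The shape of that VPC configuration is easy to show. The IDs below are hypothetical placeholders; the commented boto3 call is a sketch of how the same structure would be passed to a real function and is not executed here.

```python
# The two pieces of VPC-specific information Lambda requires: subnet IDs
# and security group IDs (all values below are hypothetical).
vpc_config = {
    "SubnetIds": ["subnet-0abc1234", "subnet-0def5678"],
    "SecurityGroupIds": ["sg-0123abcd"],
}

# With real credentials, roughly (not executed in this sketch):
# import boto3
# boto3.client("lambda").update_function_configuration(
#     FunctionName="my-function", VpcConfig=vpc_config)

print(sorted(vpc_config))  # ['SecurityGroupIds', 'SubnetIds']
```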

39) You work for a large multinational retail company. The company has a large presence in AWS in multiple regions. You have established a new office and need to implement a high-bandwidth, low-latency connection to multiple VPCs in multiple regions within the same account. The VPCs each have unique CIDR ranges.
What would be the optimum solution design using AWS technology? (choose 2)

Create a Direct Connect gateway, and create private VIFs to each region

Provision an MPLS network

Implement a Direct Connect connection to the closest AWS region

Implement Direct Connect connections to each AWS region

Configure AWS VPN CloudHub

Ans:- Create a Direct Connect gateway, and create private VIFs to each region

Implement a Direct Connect connection to the closest AWS region

Explanation
• You should implement an AWS Direct Connect connection to the closest region. You can then use Direct Connect gateway to create private virtual interfaces (VIFs) to each AWS region. Direct Connect gateway provides a grouping of Virtual Private Gateways (VGWs) and Private Virtual Interfaces (VIFs) that belong to the same AWS account and enables you to interface with VPCs in any AWS Region (except AWS China Region). You can share a private virtual interface to interface with more than one Virtual Private Cloud (VPC) reducing the number of BGP sessions required
• You do not need to implement multiple Direct Connect connections to each region. This would be a more expensive option as you would need to pay for an international private connection
• AWS VPN CloudHub is not the best solution as you have been asked to implement high-bandwidth, low-latency connections and VPN uses the Internet so is not reliable
• An MPLS network could be used to create a network topology that gets you closer to AWS in each region, but you would still need to use Direct Connect or VPN for the connectivity into AWS. Also, the question states that you should use AWS technology and MPLS is not offered as a service by AWS

40) An event in CloudTrail is the record of an activity in an AWS account. What are the two types of events that can be logged in CloudTrail? (choose 2)

Platform Events which are also known as hardware level operations

Data Events which are also known as data plane operations

Management Events which are also known as control plane operations

System Events which are also known as instance level operations

Ans:- Data Events which are also known as data plane operations

Management Events which are also known as control plane operations

Explanation
• Trails can be configured to log Data events and management events:
o Data events: These events provide insight into the resource operations performed on or within a resource. These are also known as data plane operations
o Management events: Management events provide insight into management operations that are performed on resources in your AWS account. These are also known as control plane operations. Management events can also include non-API events that occur in your account

41) The development team in a media organization is moving their SDLC processes into the AWS Cloud. Which AWS service is primarily used for software version control?

CodeCommit

Step Functions

CloudHSM

CodeStar

Ans:- CodeCommit

Explanation
• AWS CodeCommit is a fully managed source control service that hosts secure Git-based repositories
• AWS CodeStar enables you to quickly develop, build, and deploy applications on AWS
• AWS CloudHSM is a cloud-based hardware security module (HSM) that enables you to easily generate and use your own encryption keys on the AWS Cloud
• AWS Step Functions lets you coordinate multiple AWS services into serverless workflows so you can build and update apps quickly

42) You are a Solutions Architect for Digital Cloud Training. A client has asked for some assistance in selecting the best database for a specific requirement. The database will be used for a data warehouse solution and the data will be stored in a structured format. The client wants to run complex analytics queries using business intelligence tools.
Which AWS database service will you recommend?

RedShift

DynamoDB

Aurora

RDS

Ans:- RedShift

Explanation
• Amazon Redshift is a fast, fully managed data warehouse that makes it simple and cost-effective to analyze all your data using standard SQL and existing Business Intelligence (BI) tools. RedShift is a SQL based data warehouse used for analytics applications. RedShift is an Online Analytics Processing (OLAP) type of DB. RedShift is used for running complex analytic queries against petabytes of structured data, using sophisticated query optimization, columnar storage on high-performance local disks, and massively parallel query execution
• Amazon RDS does store data in a structured format but it is not a data warehouse. The primary use case for RDS is as a transactional database (not an analytics database)
• Amazon DynamoDB is not a structured database (schema-less / NoSQL) and is not a data warehouse solution
• Amazon Aurora is a type of RDS database so is also not suitable for a data warehouse use case

43) A company runs several web applications on AWS that experience a large amount of traffic. An Architect is considering adding a caching service to one of the most popular web applications. What are two advantages of using ElastiCache? (choose 2)

Can be used for storing session state data

Low latency network connectivity

Decoupling application components

Caching query results for improved performance

Multi-region HA

Ans:- Can be used for storing session state data

Caching query results for improved performance

Explanation
• The in-memory caching provided by ElastiCache can be used to significantly improve latency and throughput for many read-heavy application workloads or compute-intensive workloads
• Elasticache can also be used for storing session state
• You cannot enable multi-region HA with ElastiCache
• ElastiCache is a caching service, not a network service so it is not responsible for providing low-latency network connectivity
• Amazon SQS is used for decoupling application components
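
Both advantages above follow from one pattern: cache-aside. The sketch below stands a plain dict in for ElastiCache and another for the database — purely illustrative, not an ElastiCache client — and shows a read checking the cache first, falling back to the database on a miss, and populating the cache so repeat reads are fast.

```python
cache = {}                                            # stands in for ElastiCache
database = {"user:1": {"name": "Ana"}, "user:2": {"name": "Ben"}}
stats = {"hits": 0, "misses": 0}

def get(key):
    """Cache-aside read: serve from cache, fall back to the database on a miss."""
    if key in cache:
        stats["hits"] += 1
        return cache[key]
    stats["misses"] += 1
    value = database[key]      # expensive query in a real system
    cache[key] = value         # populate so the next read is served from cache
    return value

get("user:1")   # miss — hits the database
get("user:1")   # hit — served from cache
print(stats)    # {'hits': 1, 'misses': 1}
```

Session state can be stored the same way: keyed writes into the cache, keyed reads back out, with no load on the database.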

44) A Solutions Architect is designing the messaging and streaming layers of a serverless application. The messaging layer will manage communications between components and the streaming layer will manage real-time analysis and processing of streaming data.
The Architect needs to select the most appropriate AWS services for these functions. Which services should be used for the messaging and streaming layers? (choose 2)

Use Amazon Kinesis for collecting, processing and analyzing real-time streaming data

Use Amazon CloudTrail for collecting, processing and analyzing real-time streaming data

Use Amazon SWF for providing a fully managed messaging service

Use Amazon SNS for providing a fully managed messaging service

Use Amazon EMR for collecting, processing and analyzing real-time streaming data

Ans:- Use Amazon Kinesis for collecting, processing and analyzing real-time streaming data

Use Amazon SNS for providing a fully managed messaging service

Explanation
• Amazon Kinesis makes it easy to collect, process, and analyze real-time streaming data. With Amazon Kinesis Analytics, you can run standard SQL or build entire streaming applications using SQL
• Amazon Simple Notification Service (Amazon SNS) provides a fully managed messaging service for pub/sub patterns using asynchronous event notifications and mobile push notifications for microservices, distributed systems, and serverless applications
• Amazon Elastic Map Reduce runs on EC2 instances so is not serverless
• Amazon Simple Workflow Service is used for executing tasks not sending messages
• Amazon CloudTrail is used for recording API activity on your account

45) A Solutions Architect is conducting an audit and needs to query several properties of EC2 instances in a VPC. What two methods are available for accessing and querying the properties of an EC2 instance such as instance ID, public keys and network interfaces? (choose 2)

Use the EC2 Config service

Run the command “curl http://169.254.169.254/latest/dynamic/instance-identity/”

Run the command “curl http://169.254.169.254/latest/meta-data/”

Download and run the Instance Metadata Query Tool

Use the Batch command

Ans:- Run the command “curl http://169.254.169.254/latest/meta-data/”

Download and run the Instance Metadata Query Tool

Explanation
• This information is stored in the instance metadata on the instance. You can access the instance metadata through a URI or by using the Instance Metadata Query tool
• The instance metadata is available at http://169.254.169.254/latest/meta-data
• The Instance Metadata Query tool allows you to query the instance metadata without having to type out the full URI or category names
• The EC2 config service or batch command are not suitable for accessing this information
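
The metadata service simply returns newline-separated category names at that URI. The sketch below parses a sample listing; the sample text is illustrative, and the commented `urllib` call shows how it would be fetched on a real instance (it only works from inside EC2, so it is not executed here).

```python
# On a real EC2 instance you would fetch the listing with:
#   import urllib.request
#   urllib.request.urlopen("http://169.254.169.254/latest/meta-data/").read()
sample_response = "ami-id\ninstance-id\npublic-keys/\nnetwork/\n"  # illustrative

def parse_metadata_listing(text):
    """Return the metadata categories from a listing response (one per line)."""
    return [line for line in text.strip().split("\n") if line]

print(parse_metadata_listing(sample_response))
# ['ami-id', 'instance-id', 'public-keys/', 'network/']
```

Entries ending in `/` are sub-categories that can themselves be queried by appending the name to the URI.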

46) You are concerned that you may be getting close to some of the default service limits for several AWS services. What AWS tool can be used to display current usage and limits?

AWS Trusted Advisor

AWS Systems Manager

AWS CloudWatch

AWS Dashboard

Ans:- AWS Trusted Advisor

Explanation
• Trusted Advisor is an online resource to help you reduce cost, increase performance, and improve security by optimizing your AWS environment. Trusted Advisor provides real time guidance to help you provision your resources following AWS best practices. AWS Trusted Advisor offers a Service Limits check (in the Performance category) that displays your usage and limits for some aspects of some services
• AWS CloudWatch is used for performance monitoring not displaying usage limits
• AWS Systems Manager gives you visibility and control of your infrastructure on AWS. Systems Manager provides a unified user interface so you can view operational data from multiple AWS services and allows you to automate operational tasks across your AWS resources
• There is no service known as “AWS Dashboard”

47) A Solutions Architect is creating a design for a multi-tiered web application. The application will use multiple AWS services and must be designed with elasticity and high-availability in mind.
Which architectural best practices should be followed to reduce interdependencies between systems? (choose 2)

Enable graceful failure through AWS Auto Scaling

Implement asynchronous integration using Amazon SQS queues

Implement well-defined interfaces using a relational database

Implement service discovery using static IP addresses

Enable automatic scaling for storage and databases

Ans:- Enable graceful failure through AWS Auto Scaling

Implement asynchronous integration using Amazon SQS queues

Explanation
• Asynchronous integration – this is another form of loose coupling where an interaction does not need an immediate response (think SQS queue or Kinesis)
• Graceful failure – build applications such that they handle failure in a graceful manner (reduce the impact of failure and implement retries). Auto Scaling helps to reduce the impact of failure by launching replacement instances
• Well-defined interfaces – reduce interdependencies in a system by enabling interaction only through specific, technology-agnostic interfaces (e.g. RESTful APIs). A relational database is not an example of a well-defined interface
• Service discovery – disparate resources must have a way of discovering each other without prior knowledge of the network topology. Usually DNS names and a method of resolution are preferred over static IP addresses which need to be hardcoded somewhere
• Though automatic scaling for storage and databases provides scalability (not necessarily elasticity), it does not reduce interdependencies between systems
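
Asynchronous integration is the easiest of these practices to demonstrate. In the sketch below, `queue.Queue` stands in for an SQS queue (a local illustration, not the SQS API): the producer enqueues work and returns immediately, and the consumer drains the queue at its own pace — neither component calls the other directly.

```python
import queue

work_queue = queue.Queue()   # stands in for an SQS queue

def producer(order_id):
    """Fire and forget: enqueue the work, never wait on the consumer."""
    work_queue.put({"order": order_id})

def consumer():
    """Process whatever is queued, at the consumer's own pace."""
    processed = []
    while not work_queue.empty():
        processed.append(work_queue.get()["order"])
    return processed

for order in (101, 102, 103):
    producer(order)
print(consumer())  # [101, 102, 103]
```

If the consumer slows down or fails, the producer is unaffected — the queue absorbs the backlog, which is exactly the loose coupling the bullet describes.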

48) An EC2 instance on which you are running a video on demand web application has been experiencing high CPU utilization. You would like to take steps to reduce the impact on the EC2 instance and improve performance for consumers. Which of the steps below would help?

Create an ELB and place it in front of the EC2 instance

Create a CloudFront distribution and configure a custom origin pointing at the EC2 instance

Use ElastiCache as the web front-end and forward connections to EC2 for cache misses

Create a CloudFront RTMP distribution and point it at the EC2 instance

Ans:- Create a CloudFront distribution and configure a custom origin pointing at the EC2 instance

Explanation
• This is a good use case for CloudFront which is a content delivery network (CDN) that caches content to improve performance for users who are consuming the content. This will take the load off of the EC2 instance as CloudFront has a cached copy of the video files. An origin is the source of the files that the CDN will distribute. Origins can be an S3 bucket, an EC2 instance, an Elastic Load Balancer, or Route 53, and can also be external (non-AWS)
• ElastiCache cannot be used as an Internet facing web front-end
• For RTMP CloudFront distributions files must be stored in an S3 bucket
• Placing an ELB in front of a single EC2 instance does not help to reduce load

49) The development team at Digital Cloud Training have created a new web-based application that will soon be launched. The application will utilize 20 EC2 instances for the web front-end. Due to concerns over latency, you will not be using an ELB but still want to load balance incoming connections across multiple EC2 instances. You will be using Route 53 for the DNS service and want to implement health checks to ensure instances are available.
What two Route 53 configuration options are available that could be individually used to ensure connections reach multiple web servers in this configuration? (choose 2)

Use Route 53 weighted records and give equal weighting to all 20 EC2 instances

Use Route 53 multivalue answers to return up to 8 records with each DNS query

Use Route 53 Alias records to resolve using the zone apex

Use Route 53 simple load balancing which will return records in a round robin fashion

Use Route 53 failover routing in an active-active configuration

Ans:- Use Route 53 weighted records and give equal weighting to all 20 EC2 instances

Use Route 53 multivalue answers to return up to 8 records with each DNS query

Explanation
• The key requirement here is that you can load balance incoming connections to a series of EC2 instances using Route 53 AND the solution must support health checks. With multivalue answers, Route 53 responds to each DNS query with up to eight healthy records selected at random. The weighted record type is similar to simple but you can specify a weight per IP address: you create records that have the same name and type and assign each record a relative weight. In this case you could assign all 20 records the same weight and Route 53 will essentially round robin between them
• We cannot use the simple record type as it does not support health checks
• Alias records let you route traffic to selected AWS resources, such as CloudFront distributions and Amazon S3 buckets. They do not provide equal distribution to multiple endpoints or multi-value answers
• Failover routing is used for active/passive configurations only
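
Multivalue answer behavior can be simulated in a few lines. The IPs and health states below are hypothetical; the sketch shows the two properties that matter here — unhealthy records are excluded, and at most eight of the healthy records are returned per query, chosen at random.

```python
import random

# 20 hypothetical web servers; every fifth one is marked unhealthy.
records = [{"ip": f"10.0.0.{i}", "healthy": i % 5 != 0} for i in range(1, 21)]

def multivalue_answer(records, max_answers=8):
    """Return up to `max_answers` healthy IPs, selected at random."""
    healthy = [r["ip"] for r in records if r["healthy"]]
    return random.sample(healthy, min(max_answers, len(healthy)))

answer = multivalue_answer(records)
print(len(answer))  # 8 — a random subset of the 16 healthy instances
```

Clients then pick among the returned IPs, which spreads connections across the fleet without an ELB in the path.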

50) A Solutions Architect is creating an application design with several components that will be publicly addressable. The Architect would like to use Alias records. Using Route 53 Alias records what targets can you specify? (choose 2)

CloudFront distribution

EFS filesystem

On-premise web server

Elastic Beanstalk environment

ElastiCache cluster

Ans:- CloudFront distribution

Elastic Beanstalk environment

Explanation
• Alias records are used to map resource record sets in your hosted zone to Amazon Elastic Load Balancing load balancers, API Gateway custom regional APIs and edge-optimized APIs, CloudFront Distributions, AWS Elastic Beanstalk environments, Amazon S3 buckets that are configured as website endpoints, Amazon VPC interface endpoints, and to other records in the same Hosted Zone
• You cannot point an Alias record directly at an on-premises web server; Alias targets must be supported AWS resources or other records in the same hosted zone
• You cannot use an Alias to point at an ElastiCache cluster or VPC endpoint
• You cannot use an Alias to point to an EFS filesystem