Your project application is deployed on an Auto Scaling Group of EC2 instances behind an Application Load Balancer. The Auto Scaling Group has scaled to maximum capacity, but some customer requests are being lost. What will you do?
- The project has decided to use SQS with the Auto Scaling Group to ensure all messages are saved and processed.
- The problem with using ApproximateNumberOfMessagesVisible for target tracking is that the number of messages in the queue might not change proportionally to the size of the Auto Scaling Group that processes messages from the queue. That’s because the number of messages in your SQS queue does not, by itself, define the number of instances needed.
- The number of instances in your Auto Scaling Group can be driven by multiple factors, including how long it takes to process a message and the acceptable amount of latency.
- The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain.
- You can calculate these numbers as follows: Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue (number of messages available for retrieval from the queue).
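The calculation above can be sketched as two small functions (a minimal sketch; the latency target and per-message processing time in the example are hypothetical figures):

```python
def backlog_per_instance(approx_messages: int, in_service_instances: int) -> float:
    """ApproximateNumberOfMessages from the SQS queue divided by the
    number of InService instances in the Auto Scaling Group."""
    return approx_messages / max(in_service_instances, 1)

def acceptable_backlog_per_instance(latency_target_s: float,
                                    avg_processing_time_s: float) -> float:
    """Messages a single instance can work through while still meeting
    the acceptable latency."""
    return latency_target_s / avg_processing_time_s

# Example: 1,500 visible messages across 10 instances -> a backlog of 150 each.
# With a 100 s latency target and 0.25 s per message, 400 messages per instance
# is acceptable, so no scale-out is needed yet.
```

If the measured backlog per instance rises above the acceptable value, target tracking scales the group out until the two converge.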
Your project manager is preparing for disaster recovery and upcoming DR drills for the MySQL database instances and their data. To help meet the Recovery Time Objective (RTO), read replicas can be used to offload read traffic from the master database. What are the features of read replicas?
- You can create read replicas within AZ, cross-AZ, or cross-Region.
- You can have up to five read replicas per master, each with its own DNS endpoint.
- A read replica can be manually promoted to a standalone database instance.
- Amazon RDS uses the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines’ built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance.
- Updates made to the source DB instance are asynchronously copied to the read replica.
- You can reduce the load on your source DB instance by routing read queries from your applications to the read replica.
- Using read replicas, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
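A minimal sketch of the read-routing idea described above (the endpoint names are hypothetical, and because replication is asynchronous, replica reads may be slightly stale):

```python
import random

# Hypothetical endpoints -- each replica gets its own DNS endpoint.
PRIMARY = "mydb.xyz.us-east-1.rds.amazonaws.com"
REPLICAS = ["mydb-replica-1.xyz.us-east-1.rds.amazonaws.com"]

def endpoint_for(sql: str) -> str:
    """Send read-only statements to a replica; everything else goes to
    the source DB instance."""
    if sql.lstrip().upper().startswith("SELECT") and REPLICAS:
        return random.choice(REPLICAS)
    return PRIMARY
```

In practice a connection pool or proxy usually makes this decision, but the principle is the same: writes always hit the source instance, reads fan out to replicas.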
When does the ALB stop sending traffic to an instance?
- The load balancer routes requests only to the healthy instances.
- When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance.
- The load balancer resumes routing requests to the instance when it has been restored to a healthy state.
Your project manager needs a storage service that provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Which AWS service can meet these requirements?
- Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
- It is built to scale on demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision and manage capacity to accommodate growth.
- Amazon EFS offers two storage classes: the Standard storage class, and the Infrequent Access storage class (EFS IA).
The company needs to be able to store files in several different formats, such as PDF, JPG, PNG, Word, and several others. This storage needs to be highly durable. Which storage type will best meet this requirement?
- Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
- This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics.
- Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements.
What is hot attach in EC2?
- Suppose you have two EC2 instances running in the same VPC but in different subnets.
- You want to remove the secondary ENI from one EC2 instance and attach it to the other, quickly and with limited disruption.
- So you attach the ENI to the target EC2 instance while it’s running.
- You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach).
- You can detach secondary network interfaces when the instance is running or stopped. However, you can’t detach the primary network interface.
- You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets.
- When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces.
- Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance.
- A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly.
- Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves.
- Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance.
- If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing.
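The three attach types map directly to the instance’s lifecycle state at the moment of attachment; a tiny sketch (using `pending` to stand in for an instance that is still launching):

```python
def eni_attach_type(instance_state: str) -> str:
    """Classify an ENI attachment by the instance's lifecycle state."""
    mapping = {
        "running": "hot attach",
        "stopped": "warm attach",
        "pending": "cold attach",  # attached as part of the launch itself
    }
    return mapping[instance_state]
```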
What are launch templates?
- A launch template is similar to a launch configuration in that it specifies instance configuration information. Defining a launch template instead of a launch configuration allows you to have multiple versions of a template.
- With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions.
In your project, the security team requires each Amazon ECS task to have an IAM policy that limits the task’s privileges. How can you achieve this?
- We can use IAM roles for Amazon ECS tasks to associate a specific IAM role with each ECS task definition.
- With IAM roles for Amazon ECS tasks, you can specify an IAM role that can be used by the containers in a task.
- Users must sign their AWS API requests with AWS credentials, and this feature provides a strategy for managing credentials for your applications to use, similar to the way that Amazon EC2 instance profiles provide credentials to EC2 instances.
- The applications in the task’s containers can then use the AWS SDK or CLI to make API requests to authorized AWS services.
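A minimal sketch of the task-definition fields involved (the family name, role ARN, and image are hypothetical):

```python
# Fragment of an ECS task definition. taskRoleArn is the role whose policy
# limits what the containers' applications may call via the SDK/CLI; the
# separate execution role (not shown) is what the ECS agent itself uses.
task_definition = {
    "family": "my-app",                                                # hypothetical
    "taskRoleArn": "arn:aws:iam::123456789012:role/my-app-task-role",  # hypothetical
    "containerDefinitions": [
        {"name": "web", "image": "my-app:latest", "memory": 512},
    ],
}
```

Registering one task definition per service, each with its own narrowly scoped task role, gives every ECS task the least-privilege policy the security team requires.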
In your project, the application requires a persistent key-value store database that must service 150,000 reads/second. Your company expects 20% growth in traffic and data volume month over month for the next several years. Which service will you use here?
- DynamoDB is a fully managed NoSQL solution that supports both key-value and document data structures.
- DynamoDB Auto Scaling is a fully managed feature that automatically scales up or down provisioned read and write capacity of a DynamoDB table or a global secondary index, as application requests increase or decrease.
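A quick capacity projection for the stated growth, using simple month-over-month compounding on the numbers from the question:

```python
def projected_reads_per_second(current_rps: float, monthly_growth: float,
                               months: int) -> float:
    """Compound the read rate month over month."""
    return current_rps * (1 + monthly_growth) ** months

# 150,000 reads/s today at 20% month-over-month growth is roughly
# 1.34 million reads/s after one year -- a scale DynamoDB can absorb
# when auto scaling of provisioned capacity is enabled.
```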
How will you design a storage solution in AWS when your input videos are required for a day, after which they should be archived? If required, archived videos can be requested with advance notice and are expected to be available within 5 hours. However, in case of breaking news, the videos need to be made available within minutes.
- Glacier provides the most cost-effective archival solution.
- For normal requests, which default to Standard retrieval, the videos can be retrieved within 3-5 hours.
- For urgent requests, an Expedited retrieval can be made for an additional charge, making the video available in 1-5 minutes.
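The tier choice can be sketched as picking the cheapest retrieval option that still meets the deadline (timings as in the bullets above; Bulk, the cheapest tier at roughly 5-12 hours, is added here for completeness):

```python
def glacier_retrieval_tier(deadline_hours: float) -> str:
    """Choose the cheapest S3 Glacier retrieval tier that meets the deadline."""
    if deadline_hours >= 12:
        return "Bulk"        # ~5-12 hours, cheapest
    if deadline_hours >= 5:
        return "Standard"    # ~3-5 hours, the default
    return "Expedited"       # ~1-5 minutes, additional charge
```

For the breaking-news case (a deadline measured in minutes), only Expedited qualifies; the routine advance-notice requests fall through to Standard.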
Your project wants to use a Redshift cluster for petabyte-scale data warehousing. Data for processing is stored on Amazon S3. For security purposes, the manager wants the data to be encrypted at rest. How will you implement this solution?
- Store the data in S3 with Server Side Encryption. Launch an encrypted Redshift cluster and copy the data to the cluster.
- In Amazon Redshift, we can enable database encryption for clusters to protect data at rest.
- When we enable encryption for a cluster, the data blocks and system metadata are encrypted for the cluster and its snapshots.
- If you want encryption, you can enable it during the cluster launch process.
- To go from an unencrypted cluster to an encrypted cluster or the other way around, unload your data from the existing cluster and reload it in a new cluster with the chosen encryption setting.
Your project database stores data coming from more than 20,000 sensors. Sometimes your manager wants to query information from a particular sensor for the past week very rapidly; after that, the data is infrequently accessed for another week. Then the data needs to be archived. How will you do this?
- Since the data access pattern is different for each week, it is better to define a separate DynamoDB table for each week, with the current week’s table having a higher provisioned throughput. The data can then be moved to Glacier and the old DynamoDB table dropped.
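A minimal sketch of the per-week table naming this scheme implies (the `sensor_data` prefix is hypothetical):

```python
import datetime

def table_for(day: datetime.date) -> str:
    """One DynamoDB table per ISO week, so a table that has gone cold can
    be exported to Glacier and then dropped as a whole."""
    year, week, _ = day.isocalendar()
    return f"sensor_data_{year}_w{week:02d}"
```

Rotating whole tables this way makes archival a bulk export-and-drop rather than a per-item delete, and lets only the current week’s table carry high provisioned throughput.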
Your project has an application running on an Amazon EC2 instance in a VPC that needs to access external third-party services. A client running in another VPC in the same region must be able to communicate with this application. Security policies require that this application not be accessible from the internet. How will you design this?
- We can configure a VPC peering connection between the application VPC and the client VPC, and configure a NAT gateway in the application VPC.
- The VPC peering connection allows the client running in the other VPC to access the application.
- A VPC peering connection is a networking connection between two VPCs that enables you to route traffic between them using private IPv4 addresses or IPv6 addresses. Instances in either VPC can communicate with each other as if they are within the same network. You can create a VPC peering connection between your own VPCs, or with a VPC in another AWS account.
- The NAT gateway lets the application be hosted in private subnets while still being able to reach the external third-party services.
- You can use a network address translation (NAT) gateway to enable instances in a private subnet to connect to the internet or other AWS services, but prevent the internet from initiating a connection with those instances.