AWS Interview Question-1

Name some cloud service providers for public & private cloud?

Public: Amazon Web Services, Microsoft Azure, Google Cloud, Oracle Cloud, Alibaba Cloud.

Private: Red Hat OpenStack, Rackspace, VMware, IBM Private Cloud.

How to install AWS CLI?

sudo apt-get install -y python-dev python-pip
sudo pip install awscli
aws --version
aws configure

Which AWS service is an email platform that provides an easy, cost-effective way to send and receive email?

A) SES B) SNS C) SQS D) SAS

Ans – A) SES (Simple Email Service)

install AWS CLI v2

curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip"
unzip awscliv2.zip
sudo ./aws/install
aws --version
aws configure

create a new trail

aws cloudtrail create-subscription \
--name awslog \
--s3-new-bucket awslog2016
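
Note: create-subscription is an older convenience command that may not exist in newer CLI versions. A hedged equivalent with the current commands (reusing the trail and bucket names above, and assuming the bucket already exists with a CloudTrail bucket policy):

aws cloudtrail create-trail \
--name awslog \
--s3-bucket-name awslog2016

aws cloudtrail start-logging \
--name awslog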

What is the difference between Stack and Template in Cloud Formation ?

Ans: Stack: Cloud-based applications usually require a group of related resources, such as application servers and database servers, that must be created and managed collectively. This collection of resources is called a stack. Template: A CloudFormation template is a JSON or YAML text file that describes the resources in a stack and their properties; CloudFormation uses the template as a blueprint to create and configure the stack.
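
A minimal sketch of how the two relate (the file and stack names are placeholders): the template file declares the resources, and creating a stack from it provisions them as one unit.

aws cloudformation create-stack \
--stack-name my-app-stack \
--template-body file://template.yaml

aws cloudformation describe-stacks --stack-name my-app-stack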

Which of the below-mentioned services is provided by CloudWatch?

A) Monitor estimated AWS usage

B) Monitor EC2 log files

C) Monitor S3 storage

D) Monitor AWS calls using Cloud trail

list the names of all trails

aws cloudtrail describe-trails --output text | cut -f 8

Which of the following metrics cannot have a CloudWatch alarm?

A) EC2 instance status check failed

B) EC2 CPU utilization

C) RRS lost object

D) Auto scaling group CPU utilization

get the status of a trail

aws cloudtrail get-trail-status \
--name awslog

list existing S3 buckets

aws s3 ls

create a bucket name, using the current date timestamp

bucket_name=test-$(date "+%Y-%m-%d-%H-%M-%S")
echo $bucket_name

list all security groups

aws ec2 describe-security-groups

create a security group

aws ec2 create-security-group \
--vpc-id vpc-1aert3c4d \
--group-name web-access \
--description "web access"

list details about a security group

aws ec2 describe-security-groups \
--group-id sg-0000000

create a public facing bucket

aws s3api create-bucket --acl "public-read-write" --bucket $bucket_name

delete a security group

aws ec2 delete-security-group \
--group-id sg-000000123

How to delete the security group for the database?
aws ec2 delete-security-group \
--group-id $DBSecurityGroupId01

terminate an instance

aws ec2 terminate-instances \
--instance-ids

create new user

aws iam create-user \
--user-name aws-adminvikas

Which AWS IAM feature allows developers to access AWS services through the AWS CLI?

Ans:-Access keys

You would like to deploy an AWS lambda function using the AWS CLI. Before deploying what needs to be done?

Ans:- Package the local artifacts and upload them to S3 using the aws cloudformation package CLI command
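
For example (a sketch; the bucket, template, and stack names are placeholders), package uploads the local artifacts to S3 and rewrites the template, and deploy then creates or updates the stack:

aws cloudformation package \
--template-file template.yaml \
--s3-bucket my-artifact-bucket \
--output-template-file packaged.yaml

aws cloudformation deploy \
--template-file packaged.yaml \
--stack-name my-lambda-stack \
--capabilities CAPABILITY_IAM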

add a tag to an instance

aws ec2 create-tags \
--resources "ami-1a2b4d" \
--tags Key=name,Value=debian

delete a tag on an instance

aws ec2 delete-tags \
--resources "ami-1a2b3c4d" \
--tags Key=Name,Value=

create a log stream

aws logs create-log-stream \
--log-group-name "DefaultGroup" \
--log-stream-name "cloudaws"

delete a log stream

aws logs delete-log-stream \
--log-group-name "DefaultGroup" \
--log-stream-name "Default Stream"

create a new access key

aws iam create-access-key \
--user-name aws-adminvikas2 \
--output text | tee aws-adminvikas2.txt

deactivate an access key

aws iam update-access-key \
--access-key-id AKIAI44XAMPLEQH8DHBE \
--status Inactive \
--user-name aws-adminvikas

What are the storage services offered by Amazon?

  • Amazon S3 – Scalable storage in the cloud
  • Amazon EBS – Block storage for EC2
  • AWS Elastic File System (EFS) – Managed file storage for EC2
  • Amazon Glacier – Low-cost archive storage in the cloud
  • AWS Storage Gateway – Hybrid storage integration
  • Amazon Snowball – Petabyte-scale data transport
  • AWS Snowball Edge – Petabyte-scale data transport with on-demand compute
  • AWS Snowmobile – Exabyte-scale data transport

Explain AWS GLUE Crawler.

  • It is a program that connects to a data store (source or target), progresses through a prioritized list of classifiers to determine the schema for your data, and then creates metadata tables in the Data Catalog.
  • It scans various data stores to infer schema and partition structure and populates the Glue Data Catalog with the corresponding table definitions and statistics.
  • It can be scheduled to run periodically. Doing so keeps the metadata up-to-date and in sync with the underlying data.
  • It automatically adds new tables, new partitions to existing tables, and new versions of table definitions.
  • It can determine the schema of complex unstructured or semi-structured data, which can save a ton of time.
When do I use a Glue Classifier in a project?
  • It reads the data in a data store.
  • If it identifies the format of the data then it generates a schema.
  • It provides classifiers for common file types, such as CSV, JSON, AVRO, XML, and others.
  • AWS Glue provides a set of built-in classifiers, but you can also create custom classifiers.
  • You can set up your crawler with an ordered set of classifiers.
  • When the crawler invokes a classifier, the classifier determines whether the data is recognized or not.

Your project application is deployed on an Auto Scaling group of EC2 instances using an Application Load Balancer. The Auto Scaling group has scaled to maximum capacity, but a few customer requests are being lost. What will you do?

  • The number of instances in your Auto Scaling group can be driven by multiple factors, including how long it takes to process a message and the acceptable amount of latency.
  • The solution is to use a backlog per instance metric with the target value being the acceptable backlog per instance to maintain.
  • You can calculate these numbers as follows: Backlog per instance: To calculate your backlog per instance, start with the ApproximateNumberOfMessages queue attribute to determine the length of the SQS queue (number of messages available for retrieval from the queue). 
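
For example, the queue length used in that calculation can be read with the CLI (the queue URL is a placeholder); dividing it by the number of running instances gives the backlog per instance:

aws sqs get-queue-attributes \
--queue-url https://sqs.us-east-1.amazonaws.com/123456789012/orders-queue \
--attribute-names ApproximateNumberOfMessages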

Your project manager is preparing for disaster recovery and upcoming DR drills of the MySQL database instances and their data. The Recovery Time Objective (RTO) is such that read replicas can be used to offload read traffic from the master database.What are the features of read replicas?

  • You can create read replicas within AZ, cross-AZ, or cross-Region.
  • You can have up to five read replicas per master, each with its own DNS endpoint.
  • A read replica can be manually promoted to a standalone database instance.
  • Amazon RDS uses the MariaDB, MySQL, Oracle, PostgreSQL, and Microsoft SQL Server DB engines’ built-in replication functionality to create a special type of DB instance called a read replica from a source DB instance. Updates made to the source DB instance are asynchronously copied to the read replica. You can reduce the load on your source DB instance by routing read queries from your applications to the read replica. Using read replicas, you can elastically scale out beyond the capacity constraints of a single DB instance for read-heavy database workloads.
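
A hedged CLI sketch of creating a read replica (the identifiers are placeholders; for a cross-Region replica the source must be given as an ARN):

aws rds create-db-instance-read-replica \
--db-instance-identifier mydb-replica-1 \
--source-db-instance-identifier mydb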
What is a Trigger in AWS Glue?
  • A trigger is a Data Catalog object that starts one or more crawlers or ETL jobs; triggers can be defined to fire on a schedule, on an event (for example, the completion of another job), or on demand.
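
For example, a scheduled trigger that starts an ETL job every night could be sketched as (the trigger name, job name, and schedule are placeholders):

aws glue create-trigger \
--name nightly-etl-trigger \
--type SCHEDULED \
--schedule "cron(0 2 * * ? *)" \
--actions JobName=my-etl-job \
--start-on-creation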
John joined a new company where he is working on a migration project. His project is moving its ETL workloads onto a serverless Apache Spark-based platform.
Which service is recommended for streaming?
  • AWS Glue is recommended for streaming when your use cases are primarily ETL and when you want to run jobs on a serverless Apache Spark-based platform.

When does the ALB stop sending traffic to an instance?

  • The load balancer routes requests only to the healthy instances.
  • When the load balancer determines that an instance is unhealthy, it stops routing requests to that instance.
  • The load balancer resumes routing requests to the instance when it has been restored to a healthy state.

Your project manager needs a storage service that provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources. Which AWS service can meet these requirements?

  • Amazon Elastic File System (Amazon EFS) provides a simple, scalable, fully managed elastic NFS file system for use with AWS Cloud services and on-premises resources.
  • It is built to scale on-demand to petabytes without disrupting applications, growing and shrinking automatically as you add and remove files, eliminating the need to provision, and manage capacity to accommodate growth.
  • Amazon EFS offers two storage classes: the Standard storage class, and the Infrequent Access storage class (EFS IA).
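
A minimal sketch of creating such a file system and a mount target (the creation token, subnet, and security group are placeholders):

aws efs create-file-system \
--creation-token shared-fs \
--performance-mode generalPurpose \
--encrypted

aws efs create-mount-target \
--file-system-id fs-12345678 \
--subnet-id subnet-1a2b3c4d \
--security-groups sg-0000000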

The company needs to be able to store files in several different formats, such as pdf, jpg, png, word, and several others. This storage needs to be highly durable. Which storage type will best meet this requirement?

  • Amazon Simple Storage Service (Amazon S3) is an object storage service that offers industry-leading scalability, data availability, security, and performance.
  • This means customers of all sizes and industries can use it to store and protect any amount of data for a range of use cases, such as websites, mobile applications, backup and restore, archive, enterprise applications, IoT devices, and big data analytics. Amazon S3 provides easy-to-use management features so you can organize your data and configure finely-tuned access controls to meet your specific business, organizational, and compliance requirements.

What is hot attach in EC2?

  • Suppose you have two EC2 instances running in the same VPC, but in different subnets.
  • You want to remove the secondary ENI from one EC2 instance and attach it to another EC2 instance.
  • You want this to be fast and with limited disruption.
  • So you attach the ENI to the target EC2 instance while it is running.
  • You can attach a network interface to an instance when it’s running (hot attach), when it’s stopped (warm attach), or when the instance is being launched (cold attach).
  • You can detach secondary network interfaces when the instance is running or stopped. However, you can’t detach the primary network interface.
  • You can move a network interface from one instance to another if the instances are in the same Availability Zone and VPC but in different subnets.
  • When launching an instance using the CLI, API, or an SDK, you can specify the primary network interface and additional network interfaces.
  • Launching an Amazon Linux or Windows Server instance with multiple network interfaces automatically configures interfaces, private IPv4 addresses, and route tables on the operating system of the instance.
  • A warm or hot attach of an additional network interface may require you to manually bring up the second interface, configure the private IPv4 address, and modify the route table accordingly.
  • Instances running Amazon Linux or Windows Server automatically recognize the warm or hot attach and configure themselves. Attaching another network interface to an instance (for example, a NIC teaming configuration) cannot be used as a method to increase or double the network bandwidth to or from the dual-homed instance.
  • If you attach two or more network interfaces from the same subnet to an instance, you may encounter networking issues such as asymmetric routing. 
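
A hot attach of a secondary ENI to a running instance can be sketched like this (the IDs are placeholders; device-index 1 makes it a secondary interface):

aws ec2 attach-network-interface \
--network-interface-id eni-0123456789abcdef0 \
--instance-id i-0123456789abcdef0 \
--device-index 1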
How will you import data from Hive Metastore to the AWS Glue Data Catalog?

Migration through Amazon S3:
Step 1: Run an ETL job that reads metadata from your Hive Metastore
and exports it (database, table, and partition objects) to an intermediate format in Amazon S3.

Step 2: Import that data from S3 into the AWS Glue Data Catalog using an AWS Glue ETL job.

Direct Migration:
You can set up an AWS Glue ETL job which extracts metadata from your Hive metastore and loads it into your AWS Glue Data Catalog through an AWS Glue connection.

What are launch templates?

A launch template is similar to a launch configuration, in that it specifies instance configuration information. Defining a launch template instead of a launch configuration allows you to have multiple versions of a template. With versioning, you can create a subset of the full set of parameters and then reuse it to create other templates or template versions.
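
For example (a sketch, with placeholder names and AMI ID), you can create a template and then add a new version that overrides only the instance type:

aws ec2 create-launch-template \
--launch-template-name web-template \
--version-description "v1" \
--launch-template-data '{"ImageId":"ami-1a2b3c4d","InstanceType":"t2.micro"}'

aws ec2 create-launch-template-version \
--launch-template-name web-template \
--source-version 1 \
--launch-template-data '{"InstanceType":"t3.small"}'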

What is AWS EC2?
  • Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud.
  • It is designed to make web-scale computing easier for developers.
  • You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region (by default).
  • Amazon EC2 currently supports a variety of operating systems including: Amazon Linux, Ubuntu, Windows Server, Red Hat Enterprise Linux, SUSE Linux Enterprise Server, Fedora, Debian, CentOS, Gentoo Linux, Oracle Linux, and FreeBSD
  • EC2 compute units (ECU) provide the relative measure of the integer processing power of an Amazon EC2 instance
  • With EC2, you have full control at the operating system layer.

Summary:

  • Virtual computing environments (known as instances)
  • Pre-configured templates for your instances (known as Amazon Machine Images – AMIs)
  • Various configurations of CPU, memory, storage, and networking capacity for your instances (known as instance types)
  • Secure login information for your instances using key pairs
  • Storage volumes for temporary data that are deleted when you stop or terminate your instance (known as instance store volumes)
  • Persistent storage volumes (using Amazon Elastic Block Store – EBS)
  • A firewall that enables you to specify the protocols, ports, and source IP ranges that can reach your instances, using security groups
  • Static IP addresses for dynamic cloud computing (known as Elastic IP addresses)
  • Metadata that you can create and assign to your Amazon EC2 resources (known as tags)
  • Virtual networks that are logically isolated from the rest of the AWS cloud, and that you can optionally connect to your own network (known as Virtual Private Clouds – VPCs)
Consider that you have to grant IAM access to your entire development team. How will you do this?

Instead of defining permissions for individual IAM users, it’s usually more convenient to create groups that relate to job functions. Next, we can define the relevant permissions for each group. Then, we can assign IAM users to those groups. All the users in an IAM group inherit the permissions assigned to the group.
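
A minimal sketch (the group name, policy ARN, and user name are placeholders/examples):

aws iam create-group --group-name developers

aws iam attach-group-policy \
--group-name developers \
--policy-arn arn:aws:iam::aws:policy/PowerUserAccess

aws iam add-user-to-group \
--group-name developers \
--user-name aws-adminvikas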

A new company policy mandates that all S3 buckets use server-side encryption. What S3 encryption feature would you use?

Server-side encryption is about protecting data at rest. Using server-side encryption with customer-provided encryption keys (SSE-C) allows you to set your own encryption keys. AWS manages the encryption keys for SSE-S3 and stores the keys for SSE-KMS.
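
For example, default encryption with S3-managed keys (SSE-S3) can be enabled per bucket (the bucket name is a placeholder):

aws s3api put-bucket-encryption \
--bucket my-bucket \
--server-side-encryption-configuration '{"Rules":[{"ApplyServerSideEncryptionByDefault":{"SSEAlgorithm":"AES256"}}]}'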

How can you Migrate data from AWS Glue to Hive Metastore through Amazon S3?

We can use two AWS Glue jobs here.
The first job extracts metadata from databases in AWS Glue Data Catalog and loads them into S3. The first job is run on AWS Glue Console.
The second job loads data from S3 into the Hive Metastore. The second can be run either on the AWS Glue Console or on a cluster with Spark installed.

How can you Migrate data from AWS Glue to AWS Glue?

We can use two AWS Glue jobs here.
The first extracts metadata from specified databases in an AWS Glue Data Catalog and loads them into S3.
The second loads data from S3 into an AWS Glue Data Catalog.

What are time-based schedules for jobs and crawlers?

We can define a time-based schedule for crawlers and jobs in AWS Glue. When the specified time is reached, the schedule activates and the associated jobs and crawlers run.
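
For example, a crawler can be put on a nightly schedule with the CLI (the crawler name and cron expression are placeholders):

aws glue update-crawler \
--name my-crawler \
--schedule "cron(0 3 * * ? *)"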

How to create EC2 Instance?

STEP 1: Login to Amazon Console. If you don’t have login access then register it and login to AWS console.

STEP 2: Click on Services-> EC2

STEP 3: We can see Resources and can create EC2 instance. Click on Launch Instance under Launch Instance section.

STEP 4: Choose Amazon Linux 2 AMI (HVM), SSD Volume Type. We can choose any option based on our needs.

STEP 5: Next Screen, we see Instance type page.

There are multiple types of Instances:

Now we select the Free Tier eligible t2.micro and proceed.

Click on Configure Instance Details.

STEP 6: Configure Instance Details

We have to check all details and have to select appropriate option as per our need.

Number of Instances: select 1 for now, but we can select more as per our need.

We have option for Auto Scaling Group which we will study later.

Purchasing option: Based on availability zones and its price, we can request for Spot Instances.

Network: The default VPC will be added. We can create a new VPC as well if we want.

Subnet: we can select default value as well or can select any subnet.

Similarly we can select values for –

Placement group- uncheck this field.

Capacity Reservation- select Open for this or we can create new capacity Reservation.

IAM role- Select none or create IAM role.

Shutdown behavior – select Stop for this.

Enable termination protection- Check this field.

Fill other fields as per below value:

Click on Add Storage tab -Next field.

STEP 7: Add Storage

We can select any volume type for the storage. We can also set the storage size based on our needs. By clicking on "Add New Volume", a new volume can be added.

Now click on Add Tags.

STEP 8: Add Tags: We can add tags to the created EC2 instance.

We can add another tag by clicking on "Add another tag".

STEP 9: Configure Security Group

Next click on Review and Launch.

Step 10: Review Instance Launch

Let’s review it:

Click on Launch.

After clicking on Launch, a popup appears for the key pair:

Let’s create new key:

Click on Download Key Pair.

Save it in a folder because it is needed to connect to the EC2 instance.

Now launch your instance by clicking on Launch Instance.

After a few seconds, we can see the EC2 instance:

It is in running status.

Now we will see more of its details:

  1. Description Tab:

Under Description tab, we can see EC2 instance details.

  • Status Checks:
  • Monitoring: It can be monitored by CloudWatch.
  • Tags:

Following are the steps to create Amazon EC2 instance:

  • Open the Amazon EC2 console.
  • From the console dashboard, choose Launch Instance.
  • Choose an Amazon Machine Image (AMI).
  • Choose an Instance Type.
  • Click on Review and Launch to let the wizard complete the other configuration settings.
  • On the Review Instance Launch page, under Security Groups select a Security Group.
  • Click on Launch on the Review Instance Launch.
  • Select an existing key pair when it prompts for a key pair.
  • Click on View Instance to return on the console to see instance is launching.
Your manager wants to perform Penetration Testing on your entire AWS environment. How should you approach this?

It will depend on the service and the type of test they want to perform. Some will require permission. Penetration testing is allowed with prior approval from AWS.

Which ELB response code indicates a normal, successful response from the registered instances?

An HTTPCode_Backend_2XX response indicates a normal, successful response from the registered instances.

In which service are your CloudTrail logs stored?

Logs are stored in S3. We must specify a storage bucket name to enable CloudTrail. 

What is the role of AWS Config?

AWS Config gives you a view of the configuration of your AWS infrastructure and compares it for compliance against rules you can define.

What is the role of AWS Budgets?

AWS Budgets lets you set custom cost and usage budgets and alerts you when your costs or usage exceed (or are forecast to exceed) the budgeted amount. Related cost-management tools: AWS Cost Explorer lets you visualize, understand, and manage your AWS costs and usage over time; the AWS Cost & Usage Report lists AWS usage for each service category used by an account and its IAM users; and Reserved Instance Reporting provides a number of RI-specific cost management solutions to help you better understand and manage RI utilization and coverage.


Which key must you have in your possession to manage a Windows instance?

Private

Symmetric

Ans – Private

You need to change an existing EC2 instance type. What must you do first?

Take an instance snapshot

Stop the instance

Ans – Stop the instance

What kind of a solution would give you near real-time visualizations of multiple EC2 instance metrics at once?

We can gather the necessary metrics together in CloudWatch Dashboards for complete operational visibility. 

You would like to run a Lambda function at the same time every night. How will you do this?

We can create rules that self-trigger on an automated schedule in CloudWatch Events using cron.
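
A hedged sketch (the rule name, function name, account ID, and schedule are placeholders): create the scheduled rule, point it at the function, and allow CloudWatch Events to invoke it.

aws events put-rule \
--name nightly-lambda \
--schedule-expression "cron(0 2 * * ? *)"

aws events put-targets \
--rule nightly-lambda \
--targets "Id"="1","Arn"="arn:aws:lambda:us-east-1:123456789012:function:my-function"

aws lambda add-permission \
--function-name my-function \
--statement-id nightly-lambda-invoke \
--action lambda:InvokeFunction \
--principal events.amazonaws.com \
--source-arn arn:aws:events:us-east-1:123456789012:rule/nightly-lambda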

What does AWS Organizations offer?

AWS Organizations offers policy-based management for multiple AWS accounts as well as consolidated billing. Personal Health Dashboard provides alerts when AWS is experiencing outages and other events that may impact you. Inspector is used for vulnerability scanning of applications running on EC2. IAM is used for policy-based access control for users under a single AWS account.

Which of the following are valid EBS volume snapshot sources? Choose two.

Volume

Security group

Instance

Ans – Volume

Instance

You have selected a launch template. You need to create an instance from the template. What should you click on?

Launch/Instance from template

Actions / Launch instance from template

Ans – Actions / Launch instance from template

  How to connect to your Amazon EC2 instance?
  • Install PuTTY on your local machine.
  • Get your instance ID.
  • Get the public DNS name of the instance.
  • Locate the private key.
  • Enable inbound SSH traffic from your IP address to your instance.
  • Converting Your Private Key Using PuTTYgen.
  • Starting a PuTTY Session.
  • Now you are connected to your EC2 instance.

IN DETAIL:

  1. We have stored the .pem file in a folder.
  2. Open cmd and navigate to the folder where we have put the .pem file.

Run the command below to restrict the key file permissions (SSH requires that the private key is not publicly readable):

chmod 400 LearningEC2.pem

  • Run the command below to connect to the EC2 instance:

ssh ec2-user@54.215.191.60 -i LearningEC2.pem

Type yes if any question is asked.

cloudvikas@personal  ~/Documents/AWS WEBSITE/test/SSH

$ ls

LearningEC2.pem

cloudvikas@personal  ~/Documents/AWS WEBSITE/test/SSH

$ chmod 400 LearningEC2.pem

$ ssh ec2-user@54.216.191.60 -i LearningEC2.pem

Please type 'yes', 'no' or the fingerprint: yes

8 package(s) needed for security, out of 17 available

Run “sudo yum update” to apply all updates.

[ec2-user@ip-172-11-1-111 ~]$

Which PowerShell cmdlet is used to tag an EC2 instance?

New-EC2Tag

Create-EC2Tag

Ans – New-EC2Tag

How to log in to EC2 in a Windows environment?

SSH Chrome extension: we can use an SSH client extension for Chrome. Let's understand how we can use the SSH client extension.

  1. Install ssh chrome extension

  • Click on Secure Shell App
  • Navigate to AWS Console and copy IP address.

For identity, navigate to the .pem file folder and convert it into a public key. We have kept the .pem file in the test folder, so open cmd and navigate to the test folder.

Run the command below to generate the public key:

ssh-keygen -y -f LearningEC2.pem > LearningEC2.pub

Then run the ren command to rename LearningEC2.pem to LearningEC2 (without extension).

C:\Users\Cloudvikas\Documents\AWS WEBSITE\test>ssh-keygen -y -f LearningEC2.pem > LearningEC2.pub

C:\Users\Cloudvikas\Documents\AWS WEBSITE\test>ren LearningEC2.pem LearningEC2

C:\Users\Cloudvikas\Documents\AWS WEBSITE\test>dir

12/19/2018  01:29 PM    <DIR>          .

12/19/2018  01:29 PM    <DIR>          ..

12/18/2018  08:16 PM             1,696 LearningEC2

12/19/2018  01:28 PM               382 LearningEC2.pub

Now press ENTER and the EC2 instance will be connected in the background.

Connecting to ec2-user@54.215.191.60…

The authenticity of host '54.215.191.60 (54.215.191.60)' can't be established.

https://aws.amazon.com/amazon-linux-2/
8 package(s) needed for security, out of 17 available

Run “sudo yum update” to apply all updates.

[ec2-user@ip-172-31-1-111 ~]$ sudo su

[root@ip-172-31-1-111 ec2-user]#

It is connected now.

How to check an HTTP page through the AWS EC2 public IP?

Steps:

  1. Type the command sudo su to become root:

[ec2-user@ip-172-11-1-111 ~]$ sudo su

[root@ip-172-11-1-111 ec2-user]#

  • Type the command below to update the installed packages:

yum update -y

  • Type the command below to install the Apache HTTP server (httpd):

yum install httpd -y

It will install required packages for Apache.

  • Navigate to the html directory:

cd /var/www/html

  • Create index.html with the nano command and add some content:

<html><h1>welcome to cloudvikas</h1></html>

  • Type ls to check whether the file is present.
[root@ip-172-31-1-111 ec2-user]# cd /var/www/html

[root@ip-172-31-1-111 html]# nano index.html

[root@ip-172-31-1-111 html]#ls

index.html

Type the command below to start the service:

service httpd start

To have httpd start again automatically after the EC2 instance reboots, enable the service (on Amazon Linux this is typically chkconfig httpd on):

Type the public IP address in a browser and check:

The page is served through the public IP address and you can see

Welcome to cloudvikas.

How will you find elastic IPs that are not in use and send details through email using Boto3?
import boto3
import os

ec2_client = boto3.client('ec2')
ses_client = boto3.client('ses')

SOURCE_EMAIL = os.environ['SOURCE_EMAIL']
DEST_EMAIL = os.environ['DEST_EMAIL']

def lambda_handler(event, context):
    # Describe all Elastic IPs in the region; an address without an
    # 'InstanceId' key is not associated with any instance.
    response = ec2_client.describe_addresses()
    unused_eips = []
    for address in response['Addresses']:
        if 'InstanceId' not in address:
            unused_eips.append(address['PublicIp'])

    # Send the list of unused EIPs by email using SES
    ses_client.send_email(
        Source=SOURCE_EMAIL,
        Destination={
            'ToAddresses': [
                DEST_EMAIL
            ]
        },
        Message={
            'Subject': {
                'Data': 'Unused EIPs',
                'Charset': 'utf-8'
            },
            'Body': {
                'Text': {
                    'Data': str(unused_eips),
                    'Charset': 'utf-8'
                }
            }
        }
    )
        
Q: What is Amazon Elastic Compute Cloud (Amazon EC2)?

Amazon Elastic Compute Cloud (Amazon EC2) is a web service that provides resizable compute capacity in the cloud.

Question : Why Key pairs are used in EC2 Instances?

Key pairs are used to securely connect to EC2 instances:

  • A key pair consists of a public key that AWS stores, and a private key file that you store
  • For Windows AMIs, the private key file is required to obtain the password used to log into your instance
  • For Linux AMIs, the private key file allows you to securely SSH into your instance

Question: Why are Metadata and User Data used in an EC2 instance?

Metadata and User Data:

  • User data is data that is supplied by the user at instance launch in the form of a script
  • Instance metadata is data about your instance that you can use to configure or manage the running instance
  • User data is limited to 16KB
  • User data and metadata are not encrypted.
  • The Instance Metadata Query tool allows you to query the instance metadata without having to type out the full URI or category names
Q: How many instances can I run in Amazon EC2?

You are limited to running On-Demand Instances per your vCPU-based On-Demand Instance limit, purchasing 20 Reserved Instances, and requesting Spot Instances per your dynamic Spot limit per region. New AWS accounts may start with limits that are lower than the limits described here.
If you need more instances, complete the Amazon EC2 limit increase request form with your use case, and your limit increase will be considered.

Explain how to launch an EC2 instance in an Availability Zone?

To launch an EC2 instance, we must select an AMI that's in the same region (if the AMI is in another region, we can copy it to that region). Then we can select an Availability Zone. After creating the EC2 instance, it will show up in the selected Availability Zone.
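
From the CLI this can be sketched as (the AMI ID, key name, and Availability Zone are placeholders):

aws ec2 run-instances \
--image-id ami-1a2b3c4d \
--instance-type t2.micro \
--key-name LearningEC2 \
--placement AvailabilityZone=us-east-1a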

Question: You are attempting to attach an EBS volume to an EC2 instance, but the Attach option is greyed out. What is the most likely cause of the problem?

The volume is already attached to an instance

The volume does not have a snapshot

Ans – The volume is already attached to an instance

Question: What is the default username when launching Amazon Linux instances?

ec2-user

admin

Ans – ec2-user

Question: Which AWS CLI command is used to launch a new instance?

aws ec2 run-instances

aws ec2 add-instances

Ans – aws ec2 run-instances

Which command in Redshift is efficient for loading large amounts of data?

Answer : COPY. A COPY command loads large amounts of data much more efficiently than using INSERT statements, and stores the data more effectively as well. We can use a single COPY command to load data for one table from multiple files. Amazon Redshift then automatically loads the data in parallel.
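
A hedged example of what such a COPY looks like, submitted here through the Redshift Data API CLI (the cluster, database, table, bucket, and IAM role are all placeholders):

aws redshift-data execute-statement \
--cluster-identifier my-redshift-cluster \
--database dev \
--db-user awsuser \
--sql "COPY sales FROM 's3://my-bucket/sales/' IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftCopyRole' CSV;"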

You are working in an e-commerce company that has an order processing system in AWS. There are many EC2 instances that pick up orders from the application, and these EC2 instances are in an Auto Scaling group to process the orders. What will you do to ensure that the EC2 processing instances are correctly scaled based on demand?

Answer : We can use SQS queues to decouple the architecture and can scale the processing servers based on the queue length. SQS is a queue from which services pull data, and standard queues provide at-least-once delivery of messages. If no workers pull jobs from SQS, the messages still stay in the queue. SNS is a publisher-subscriber system that pushes messages to subscribers; if there are no subscribers to an SNS topic, a given message is lost.
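
A sketch of wiring this up with a target tracking policy on a custom backlog-per-instance metric (the group name, namespace, metric name, and target value are placeholders):

aws autoscaling put-scaling-policy \
--auto-scaling-group-name order-processors \
--policy-name backlog-per-instance \
--policy-type TargetTrackingScaling \
--target-tracking-configuration '{"CustomizedMetricSpecification":{"MetricName":"BacklogPerInstance","Namespace":"OrderApp","Statistic":"Average"},"TargetValue":10}'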

In your project, you have data in DynamoDB tables and you have to perform complex data analysis queries on the data (stored in the DynamoDB tables). How will you do this?

Answer : We can copy the data to Amazon Redshift and then perform the complex queries there.

Which service will you use to collect, process, and analyze video streams in real time?

Answer : Amazon Kinesis Video Streams

In an AWS EMR cluster, which node is responsible for running the YARN service?

Answer : Master node

In your client's big data project, you are trying to connect to the master node of your EMR cluster. What should be checked to ensure that the connection is successful?

Answer : We can check the Inbound rules for the Security Group for the master node. Under Security and access choose the Security groups for Master link. Choose ElasticMapReduce-master from the list. Choose Inbound, Edit. Check for an inbound rule that allows public access with the following settings.

Which AWS service will you use to perform ad-hoc analysis on log data?

Amazon Elasticsearch Service is a managed service for Elasticsearch, a popular open-source search and analytics engine for use cases such as log analytics, real-time application monitoring, and clickstream analysis. You can search specific error codes and reference numbers quickly.

What will you do for query optimization after data has been ingested into Amazon Redshift?

Answer : We can run the ANALYZE command so that the optimizer can generate up-to-date data statistics. Amazon Redshift monitors changes to your workload and automatically updates statistics in the background. In addition, the COPY command performs an analysis automatically when it loads data into an empty table. To explicitly analyze a table or the entire database, run the ANALYZE command.

An application is currently using the Elasticsearch service in AWS. How can you take backups of a cluster's data through Elasticsearch?

Answer : Automated snapshots. By default, the AWS Elasticsearch Service already takes regular automated snapshots. These snapshots can be used to restore the domain's data, but they cannot be used to migrate to a new Elasticsearch cluster, and they can only be accessed as long as the Elasticsearch API of the cluster is available.

Which AWS service can be used to monitor EMR clusters and give reports of the performance of the cluster as a whole?

Answer : Amazon CloudWatch. You can view the metrics that Amazon EMR reports to CloudWatch using the Amazon EMR console or the CloudWatch console.

Sometimes you try to terminate an EMR cluster but it does not happen. What could be a possible reason for this?

Answer : Termination protection is set on the cluster. If you are terminating a cluster that has termination protection turned on, you must disable termination protection first. Then you can terminate the cluster. Clusters can be terminated using the console, the AWS CLI, or programmatically using the TerminateJobFlows API.
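
For example (the cluster ID is a placeholder), disable termination protection and then terminate:

aws emr modify-cluster-attributes \
--cluster-id j-XXXXXXXXXXXXX \
--no-termination-protected

aws emr terminate-clusters \
--cluster-ids j-XXXXXXXXXXXXX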

Which node type is recommended when launching a Redshift cluster?

Answer : Dense Storage. DS2 allows a storage-intensive data warehouse with vCPU and RAM included for computation. DS2 nodes use HDD (Hard Disk Drive) for storage, and as a rule of thumb, if the data is more than 500 GB, go for DS2 instances.

Where does the query results from Athena get stored?

Answer : In Amazon S3

How will you convert and migrate an on-premises Oracle database to AWS Aurora?

Answer : First we convert the database schema and code using the AWS Schema Conversion Tool, then we migrate the data from the source database to the target database using the AWS Database Migration Service (DMS).

You expect a large number of GET and PUT requests on an S3 bucket. You could expect around 300 PUT and 500 GET requests per second on the S3 bucket during a selling period on your web site. How will you design for optimal performance?

Answer : We have to ensure the object names have appropriate key names.

Which AWS service filters and transforms messages (coming from sensors) and stores them as time-series data in DynamoDB?

Answer : IoT Rules Engine. The Rules Engine is a component of AWS IoT Core. The Rules Engine evaluates inbound messages published into AWS IoT Core and transforms and delivers them to another device or a cloud service, based on business rules you define.

Your project is currently running an EMR cluster which is used to perform a processing task every day from 5 pm to 10 pm. The data admin has noticed that the cluster is being billed for the entire day. What configuration will you use here for the cluster to reduce costs?

Answer : We can use transient clusters in EMR. There are two kinds of EMR clusters: transient and long-running. If you configure your cluster to be automatically terminated, it is terminated after all the steps complete. This is a transient cluster. Transient clusters are compute clusters that automatically shut down and stop billing when processing is finished.

Which storage types can be used with Amazon EMR?

Answer : Local file system

HDFS

EMRFS

Question: Which PowerShell cmdlet is used to add a new EC2 instance?

New-EC2Instance

Run-EC2Instance

Ans – New-EC2Instance

Question: Which port does SSH use?

25

22

Ans – 22

Question: You are using the AWS management console to launch a new EC2 Windows instance. You would like to have a script execute when the instance is launched. Into which field should you place the launch script commands?

VPC

User data

Ans – User data

Describe different types of Storage For Amazon Ec2?
  • Amazon EBS- Amazon EBS provides durable, block-level storage volumes that you can attach to a running instance. You can use Amazon EBS as a primary storage device for data that requires frequent and granular updates. For example, Amazon EBS is the recommended storage option when you run a database on an instance.
  • Amazon EC2 instance store- This disk storage is referred to as instance store. Instance store provides temporary block-level storage for instances. The data on an instance store volume persists only during the life of the associated instance; if you stop, hibernate, or terminate an instance, any data on instance store volumes is lost.
  • Amazon EFS file system- Amazon EFS provides scalable file storage for use with Amazon EC2. You can create an EFS file system and configure your instances to mount the file system.
  • Amazon S3- Amazon S3 provides access to reliable and inexpensive data storage infrastructure. It is designed to make web-scale computing easier by enabling you to store and retrieve any amount of data, at any time, from within Amazon EC2 or anywhere on the web.
  • Adding storage- The root storage device contains all the information necessary to boot the instance. You can specify storage volumes in addition to the root device volume when you create an AMI or launch an instance using block device mapping.
What is auto-scaling?
  • Autoscaling, also spelled auto scaling or auto-scaling, and sometimes also called automatic scaling, is a method used in cloud computing that dynamically adjusts the amount of computational resources in a server farm – typically measured by the number of active servers – automatically based on the load on the farm.
  • Amazon EC2 Auto Scaling helps you maintain application availability and allows you to automatically add or remove EC2 instances according to conditions you define.
  • Dynamic scaling responds to changing demand and predictive scaling automatically schedules the right number of EC2 instances based on predicted demand.
  • If you specify scaling policies, then Amazon EC2 Auto Scaling can launch or terminate instances as demand on your application increases or decreases. For example, the following Auto Scaling group has a minimum size of one instance, a desired capacity of two instances, and a maximum size of four instances (see the CLI sketch after this list).
  • When you use Amazon EC2 Auto Scaling, your applications gain the following benefits:
    • Better fault tolerance. Amazon EC2 Auto Scaling can detect when an instance is unhealthy, terminate it, and launch an instance to replace it.
    • Better availability.
    • Better cost management.
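
The example group described above (minimum 1, desired 2, maximum 4) could be created roughly like this (a sketch; the launch template and subnet IDs are placeholders):

aws autoscaling create-auto-scaling-group \
--auto-scaling-group-name web-asg \
--launch-template LaunchTemplateName=web-template,Version=1 \
--min-size 1 \
--desired-capacity 2 \
--max-size 4 \
--vpc-zone-identifier "subnet-1a2b3c4d,subnet-5e6f7a8b"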
Q) How will you Terminate EC2 instance?

Step 1: Log in to the AWS console page and create one EC2 instance with Termination protection set to True.

Step 2: Navigate to Actions->Instance State -> Terminate

We have Stop, Reboot and Terminate options. Click on the Terminate option.

After clicking on Terminate option, we get Warning for Terminate Instances.

Step 3: First we have to disable Termination Protection. Then we can terminate EC2 Instance.

Navigate to  Actions -> Instance Settings -> Change Termination Protection

Step 4: Click on the Yes, Disable button.

Step 5: Navigate to Actions->Instance State -> Terminate.

Click on the Yes, Terminate button.

We can see status of EC2 Instance:

What is the difference between terminating and stopping an EC2 instance?
  • Terminate Instance-
    • When you terminate an EC2 instance, the instance will be shutdown and the virtual machine that was provisioned for you will be permanently taken away and you will no longer be charged for instance usage.
    • Any data that was stored locally on the instance will be lost.
    • By default, the root EBS volume created at launch has DeleteOnTermination set to true and is deleted when the instance terminates.
    • Additional attached EBS volumes persist unless DeleteOnTermination is enabled for them.
  • Stop Instance-
    • When you stop an EC2 instance, the instance will be shut down, but the underlying EBS-backed virtual machine is retained and can be started again later; you are no longer charged for instance usage while it is stopped.
    • The key difference between stopping and terminating an instance is that the attached bootable EBS volume will not be deleted.
    • The data on your EBS volume will remain after stopping while all information on the local (ephemeral) hard drive will be lost as usual.
    • The volume will continue to persist in its availability zone. Standard charges for EBS volumes will apply.
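
The corresponding CLI calls (the instance ID is a placeholder):

aws ec2 stop-instances \
--instance-ids i-0123456789abcdef0

aws ec2 terminate-instances \
--instance-ids i-0123456789abcdef0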

Question: Which port is used when managing Windows EC2 instances?

3389

389

Ans – 3389