Describe Reserved Instances.
Reserved Instances:
- Purchase (or agree to purchase) usage of EC2 instances in advance for significant discounts over On-Demand pricing
- Provides a capacity reservation when used in a specific AZ
- AWS Billing automatically applies discounted rates when you launch an instance that matches your purchased RI
- Capacity is reserved for a term of 1 or 3 years
- EC2 has three RI types: Standard, Convertible, and Scheduled
- Standard = commitment of 1 or 3 years, charged whether it's on or off
- Scheduled = reserved for specific periods of time, accrues charges hourly, billed in monthly increments over the term (1 year)
- RIs are used for steady-state workloads and predictable usage
- Ideal for applications that need reserved capacity
- You can change the instance size within the same instance type
- Instance size modifications are supported for Linux only
- You cannot change the instance size of Windows RIs
- Billed whether running or not
- Reservations can be sold on the AWS Marketplace
- Can be used in Auto Scaling groups
- Can be used in placement groups
- Can be shared across multiple accounts within Consolidated Billing
- If you don't need your RIs, you can try to sell them on the Reserved Instance Marketplace (see the listing sketch below)
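For reference, here is a minimal boto3 sketch (not part of the original notes) that lists the Reserved Instances in an account so you can check their type, term, and state. It assumes credentials and a default region are already configured.

import boto3

ec2 = boto3.client('ec2')  # assumes credentials and region are configured

response = ec2.describe_reserved_instances()
for ri in response['ReservedInstances']:
    # Each entry describes one reservation: type, term, and current state
    print(ri['ReservedInstancesId'], ri['InstanceType'], ri['State'])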
This chapter is about AWS Security Groups.
What is a Security Group in AWS?
Amazon Web Services provides a broad range of IT infrastructure and cloud computing services.
Every customer needs some level of security so that network traffic can be filtered properly. That is what AWS security groups provide.
Security groups give us a level of control over the network traffic associated with EC2 instances. In short,
- A security group acts as a virtual firewall that controls the traffic for EC2 instances.
- When we launch an instance, we can specify one or more security groups; otherwise, the default security group is used.
- We can add rules to each security group that allow traffic to or from its associated instances.
- If required, we can modify the rules (inbound/outbound) at any time, and the changes take effect immediately.
- If the defined security groups do not meet our requirements, we can run our own firewall on the EC2 instance in addition to using security groups.
- Security groups act as a firewall for associated instances, controlling both inbound and outbound traffic at the instance level.
- We can add rules to a security group that enable us to connect to our instance from our IP address using SSH.
- We can also add rules that allow inbound and outbound HTTP and HTTPS access from anywhere.
If your requirements are not met by the defined security groups, you can run your own firewall on any of your instances in addition to using security groups.
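As a rough illustration before the console walkthrough, the following boto3 sketch creates a security group and adds the SSH and HTTP inbound rules described above. The group name, VPC ID, and IP address are placeholders, not values from this chapter.

import boto3

ec2 = boto3.client('ec2')

# Create a security group in a VPC (placeholder VPC ID)
sg = ec2.create_security_group(
    GroupName='web-demo-sg',
    Description='Demo security group',
    VpcId='vpc-0123456789abcdef0'
)
sg_id = sg['GroupId']

# Allow SSH from a single IP address and HTTP from anywhere
ec2.authorize_security_group_ingress(
    GroupId=sg_id,
    IpPermissions=[
        {'IpProtocol': 'tcp', 'FromPort': 22, 'ToPort': 22,
         'IpRanges': [{'CidrIp': '203.0.113.10/32'}]},
        {'IpProtocol': 'tcp', 'FromPort': 80, 'ToPort': 80,
         'IpRanges': [{'CidrIp': '0.0.0.0/0'}]},
    ]
)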
Now we will learn about security groups and their uses step by step:
Security Group Inbound Rule – ADD/EDIT/DELETE
Security Group Outbound Rule – ADD/EDIT/DELETE
Additional Security Group – ADD/EDIT/DELETE
Security Group Inbound Rule – ADD/EDIT/DELETE
Step 1: Create EC2 Instance (Already discussed in Chapter 1).
Step 2: You can see the Security Group details under the Description tab:

Click on VIEW INBOUND RULES:

Step 3:
You can see Security Groups on the left side of the page.

Click on Inbound tab to see Inbound rules:

Let's EDIT the rules, delete one, and observe the impact.

Delete this:

Save it.
Navigate to the EC2 instance and open the URL (IP address).

The URL won't work because you have removed its inbound rule.

Now add the rule back under the inbound rules.

Now check the URL again in the browser:

The change takes effect immediately.
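The same delete/add cycle can also be done outside the console. A hedged boto3 sketch, assuming an existing security group ID (placeholder below) and an HTTP rule like the one removed above:

import boto3

ec2 = boto3.client('ec2')
sg_id = 'sg-0123456789abcdef0'  # placeholder security group ID

http_rule = [{'IpProtocol': 'tcp', 'FromPort': 80, 'ToPort': 80,
              'IpRanges': [{'CidrIp': '0.0.0.0/0'}]}]

# Remove the HTTP inbound rule: the URL stops responding almost immediately
ec2.revoke_security_group_ingress(GroupId=sg_id, IpPermissions=http_rule)

# Add it back: the URL becomes reachable again without restarting the instance
ec2.authorize_security_group_ingress(GroupId=sg_id, IpPermissions=http_rule)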

Security Group Outbound Rule – ADD/EDIT/DELETE
Step 1:
Outbound Rules:
By default, a security group includes an outbound rule that allows all outbound traffic. We can remove that rule and add outbound rules that allow specific outbound traffic only.
If your security group has no outbound rules, no outbound traffic originating from your instance will be permitted.
Let us practice this and understand the basics.
Navigate to the Outbound tab and EDIT it.
Remove the Outbound rule.


Click on Save.

Let us check the URL:
It still works: security groups are stateful, so responses to traffic allowed by the inbound rule are permitted even though the outbound rule was removed.

Additional Security Group – ADD/EDIT/DELETE
Step 1: You can add more than one security group to an EC2 instance.
Navigate to Actions -> Networking -> Change Security Groups


Click on Assign Security Groups.

You can observe that the two newly assigned groups appear here. In this way, we can attach multiple security groups to an EC2 instance.
In View Inbound Rules, you can see the full details:

And you can remove security groups in the same way.
Navigate to Actions -> Networking -> Change Security Groups


After clicking Assign Security Groups, you can see the change reflected on the instance page.
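For completeness, the API equivalent of Actions -> Networking -> Change Security Groups is a single call that replaces the instance's group list. This is only a sketch; the instance and group IDs below are placeholders.

import boto3

ec2 = boto3.client('ec2')

# The Groups list fully replaces the security groups attached to the instance
ec2.modify_instance_attribute(
    InstanceId='i-0123456789abcdef0',
    Groups=['sg-0aaaaaaaaaaaaaaaa', 'sg-0bbbbbbbbbbbbbbbb']
)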

When do I use a Glue classifier in a project?
- It reads the data in a data store.
- If it identifies the format of the data, it generates a schema.
- It provides classifiers for common file types, such as CSV, JSON, AVRO, XML, and others.
- AWS Glue provides a set of built-in classifiers, but you can also create custom classifiers.
- You can set up your crawler with an ordered set of classifiers.
- When the crawler invokes a classifier, the classifier determines whether the data is recognized or not.
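If the built-in classifiers are not enough, a custom classifier can be registered for crawlers to try. A minimal sketch, assuming a grok-based classifier; the name, classification, and pattern below are illustrative only.

import boto3

glue = boto3.client('glue')

# Register a custom grok classifier that crawlers can use before the built-in ones
glue.create_classifier(
    GrokClassifier={
        'Name': 'apache-access-log',
        'Classification': 'apache-logs',
        'GrokPattern': '%{COMMONAPACHELOG}'
    }
)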
What is a Trigger in AWS Glue?
A trigger starts one or more ETL jobs; triggers can be defined to fire based on a scheduled time or an event.
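A minimal sketch of a scheduled trigger, assuming a Glue job named my-etl-job (a placeholder): it starts the job every day at 02:00 UTC.

import boto3

glue = boto3.client('glue')

glue.create_trigger(
    Name='nightly-etl-trigger',           # hypothetical trigger name
    Type='SCHEDULED',
    Schedule='cron(0 2 * * ? *)',         # every day at 02:00 UTC
    Actions=[{'JobName': 'my-etl-job'}],  # placeholder job name
    StartOnCreation=True
)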
John joined a new company where he is working on a migration project. His project is moving its ETL workloads to a serverless Apache Spark-based platform.
Which service is recommended for streaming?
AWS Glue is recommended for Streaming when your use cases are primarily ETL and when you want to run jobs on a serverless Apache Spark-based platform.
How will you import data from Hive Metastore to the AWS Glue Data Catalog?
Migration through Amazon S3:
Step 1: Run an ETL job to read metadata from your Hive metastore
and export it (database, table, and partition objects) to an intermediate format in Amazon S3.
Step 2: Import that data from S3 into the AWS Glue Data Catalog through an AWS Glue ETL job.
Direct Migration:
You can set up an AWS Glue ETL job which extracts metadata from your Hive metastore and loads it into your AWS Glue Data Catalog through an AWS Glue connection.
How Does IAM Work?
IAM works through the following process:
- An entity (a user, a role, or an application) can perform actions on an AWS resource.
- Before an action is performed on a resource, the entity must be authenticated.
- The entity provides its credentials or keys for authentication.
- Then a request is sent to AWS specifying the action and the resource it should be performed on.
- Authorization: By default, all resources are denied. IAM authorizes a request only if all parts of the request are allowed by a matching policy. After authenticating and authorizing the request, AWS approves the action.
- Actions are used to view, create, edit, or delete a resource.
- Resources: A set of actions can be performed on a resource related to your AWS account.
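To make the authorization step concrete, here is a hedged sketch of creating a customer managed policy that allows only read actions on a single bucket; the policy name and bucket ARN are placeholders. Anything not explicitly allowed stays denied.

import json
import boto3

iam = boto3.client('iam')

# Allow read-only access to one (hypothetical) S3 bucket
policy_document = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:ListBucket"],
        "Resource": [
            "arn:aws:s3:::example-bucket",
            "arn:aws:s3:::example-bucket/*"
        ]
    }]
}

iam.create_policy(
    PolicyName='ExampleS3ReadOnly',
    PolicyDocument=json.dumps(policy_document)
)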
What is an AWS IAM role?
An IAM role is an identity with a set of permissions that define which actions are allowed and denied. When an entity assumes a role, it receives temporary security credentials rather than long-term keys.
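A small sketch of what "temporary credentials" means in practice: assuming a role through STS returns short-lived keys that are then used for subsequent calls. The role ARN and session name are placeholders.

import boto3

sts = boto3.client('sts')

resp = sts.assume_role(
    RoleArn='arn:aws:iam::123456789012:role/ExampleRole',  # placeholder ARN
    RoleSessionName='demo-session'
)
creds = resp['Credentials']  # AccessKeyId, SecretAccessKey, SessionToken, Expiration

# Use the temporary credentials for subsequent requests
s3 = boto3.client(
    's3',
    aws_access_key_id=creds['AccessKeyId'],
    aws_secret_access_key=creds['SecretAccessKey'],
    aws_session_token=creds['SessionToken']
)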
Q: How do I get started with IAM?
We must subscribe to at least one of the AWS services that is integrated with IAM. Then we can create and manage users, groups, and permissions via the IAM APIs or the AWS CLI. We can also use the visual editor to create policies.
Q: What problems does IAM solve?
IAM makes it easy to provide multiple users secure access to your AWS resources. IAM enables you to:
- Manage IAM users and their access: You can create users in AWS’s identity management system, assign users individual security credentials (such as access keys, passwords, multi-factor authentication devices), or request temporary security credentials to provide users access to AWS services and resources.
- You can specify permissions to control which operations a user can perform.
Q: Who can use IAM?
- Any AWS customer can use IAM. This service is offered at no additional charge.
- You will be charged only for the use of other AWS services by your users.
Q: What is a user?
- A user is a unique identity recognized by AWS services and applications.
- Like a login user in an operating system like Windows, a user has a unique name and can identify itself using familiar security credentials such as a password or access key.
- A user can be an individual, system, or application requiring access to AWS services.
Explain DynamoDB Items.
Item: A table may contain multiple items. An item is a unique group of attributes. Items are similar to rows or records in a traditional relational database. Items are limited to 400 KB.
Attribute: Fundamental data element. Similar to fields or columns in an RDBMS.
Explain Data Types.
Data Types
Scalar: Exactly one value — number, string, binary, boolean, and null. Applications must encode binary values in base64 format before sending them to DynamoDB.
Document: Complex structure with nested attributes (e.g., JSON) — list and map.
Document Types
List: Ordered collection of values
FavoriteThings: [“Cookies”, “Coffee”, 3.14159]
Map: Unordered collection of name-value pairs (similar to JSON)
{
  Day: "Monday",
  UnreadEmails: 42,
  ItemsOnMyDesk: [
    "Coffee Cup",
    "Telephone",
    {
      Pens: { Quantity: 3 },
      Pencils: { Quantity: 2 },
      Erasers: { Quantity: 1 }
    }
  ]
}
Set: Multiple scalar values of the same type — string set, number set,
binary set.
[“Black”, “Green”, “Red”]
[42.2, -19, 7.5, 3.14]
[“U3Vubnk=”, “UmFpbnk=”, “U25vd3k=”]
Explain DynamoDB Table.
Creating a Table
- Table names must be unique per AWS account and region.
- Between 3 and 255 characters long
- UTF-8 encoded
- Case-sensitive
- Contain a-z, A-Z, 0-9, _ (underscore), - (dash), and . (dot)
- Primary key must consist of a partition key or a partition key and sort key.
- Only string, binary, and number data types are allowed for partition or sort keys
- Provisioned capacity mode is the default (free tier).
- For provisioned capacity mode, read/write throughput settings are required
- Local secondary indexes:
- Must be created at the time of table creation
- Same partition key as the table, but a different sort key
- Provisioned capacity is set at the table level.
- Adjust at any time or enable auto scaling to modify them automatically
- On-demand mode has a default upper limit of 40,000 RCU/WCU — unlike auto scaling, which can be capped manually
Create DynamoDB table
DynamoDB is a schema-less database that only requires a table name and primary key. The table's primary key is made up of one or two attributes that uniquely identify items, partition the data, and sort data within each partition.
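As a sketch, the employees table used in the Boto3 examples later in this section could be created like this, assuming a string partition key emp_id and provisioned throughput:

import boto3

dynamodb = boto3.resource('dynamodb')

table = dynamodb.create_table(
    TableName='employees',
    KeySchema=[{'AttributeName': 'emp_id', 'KeyType': 'HASH'}],  # partition key only
    AttributeDefinitions=[{'AttributeName': 'emp_id', 'AttributeType': 'S'}],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5}
)
table.wait_until_exists()  # block until the table is ACTIVE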
Explain DynamoDB Console Menu Items.
DynamoDB Console Menu Items
- Dashboard
- Tables
Storage size and item count are not real time
- Items: Manage items and perform queries and scans.
- Metrics: Monitor CloudWatch metrics.
- Alarms: Manage CloudWatch alarms.
- Capacity: Modify a table's provisioned capacity.
- Free tier allows 25 RCU, 25 WCU, and 25 GB for 12 months
- Cloud Sandbox within the Cloud Playground
- Indexes: Manage global secondary indexes.
- Global Tables: Multi-region, multi-master replicas
- Backups: On-demand backups and point-in-time recovery
- Triggers: Manage triggers to connect DynamoDB streams to Lambda functions.
- Access control: Set up fine-grained access control with web identity federation.
- Tags: Apply tags to your resources to help organize and identify them.
- Backups
- Reserved capacity
- Preferences
- DynamoDB Accelerator (DAX)
How can you use the AWS CLI with DynamoDB?
Installing the AWS CLI
- Preinstalled on Amazon Linux and Amazon Linux 2
- Cloud Sandbox within the Cloud Playground
Obtaining IAM Credentials
- Option 1 : Create IAM access keys in your own AWS account.
- Option 2: Use Cloud Sandbox credentials.
- Note the access key ID and secret access key.
Configuring the AWS CLI
- aws configure
- aws sts get-caller-identity
- aws dynamodb help
Using DynamoDB with the AWS CLI
- aws dynamodb create-table
- aws dynamodb describe-table
- aws dynamodb put-item
- aws dynamodb scan
Object Persistence Interface
- Do not directly perform data plane operations
- Map complex data types to items in a DynamoDB table
- Create objects that represent tables and indexes
- Define the relationships between objects in your program and the tables that store those objects
- Call simple object methods, such as save, load, or delete
- Available in the AWS SDKs for Java and .NET
How can we use CloudWatch with DynamoDB?
CloudWatch monitors your AWS resources in real time, providing visibility into resource utilization, application performance, and operational health.
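A hedged sketch of reading one DynamoDB metric from CloudWatch: the consumed read capacity of the employees table over the last hour. The table name and period are illustrative.

import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client('cloudwatch')

resp = cloudwatch.get_metric_statistics(
    Namespace='AWS/DynamoDB',
    MetricName='ConsumedReadCapacityUnits',
    Dimensions=[{'Name': 'TableName', 'Value': 'employees'}],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,                # 5-minute data points
    Statistics=['Sum']
)
for point in resp['Datapoints']:
    print(point['Timestamp'], point['Sum'])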
Q: What can a user do?
- A user can send requests to web services such as Amazon S3.
- A user's ability to access web service APIs is under the control and responsibility of the AWS account under which it is defined.
- You can permit a user to access any or all the services that have been integrated with IAM.
- In addition, if the AWS account has access to resources from a different AWS account, its users may be able to access data under those AWS accounts.
Q: How do users call AWS services?
Users can make requests to AWS services using security credentials.
How will you list out all S3 buckets?
import boto3
client = boto3.client('s3')
response = client.list_buckets()
for bucket in response['Buckets']:
    print(bucket['Name'])
Which PowerShell cmdlet changes an S3 object’s storage class?
Copy-S3Object
How will you select specific rows and columns from a JSON file stored in S3?
import boto3

client = boto3.client('s3')
resp = client.select_object_content(
    Bucket='javahome-9090',
    Key='files/employees.json',
    Expression='SELECT s.name, s.email FROM S3Object s',
    ExpressionType='SQL',
    InputSerialization={'JSON': {'Type': 'LINES'}},   # one JSON record per line
    OutputSerialization={'JSON': {}}
)
for event in resp['Payload']:
    if 'Records' in event:
        print(event['Records']['Payload'].decode())
Which Linux command can be used to connect to AWS Elastic File System?
mount
Explain Global Secondary Indexes
• It shares many of the same concepts as a local secondary index, but with a GSI we can have an alternative partition and sort key
• Options for attribute projection
• KEYS_ONLY – New partition and sort keys, plus the table's partition key and, if applicable, its sort key
• INCLUDE – Specify custom projection values
• ALL – Projects all attributes
• Unlike LSIs, where capacity is shared with the table, RCU and WCU are defined on the GSI itself, in the same way as on the table (see the sketch below)
• As with LSIs, changes are written to the GSI asynchronously
• GSIs ONLY support eventually consistent reads
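A minimal sketch of defining a table with a GSI whose partition and sort keys differ from the table's own; the table, attribute, and index names are illustrative. Note that the GSI gets its own RCU/WCU.

import boto3

dynamodb = boto3.client('dynamodb')

dynamodb.create_table(
    TableName='orders',
    KeySchema=[{'AttributeName': 'order_id', 'KeyType': 'HASH'}],
    AttributeDefinitions=[
        {'AttributeName': 'order_id', 'AttributeType': 'S'},
        {'AttributeName': 'customer_id', 'AttributeType': 'S'},
        {'AttributeName': 'order_date', 'AttributeType': 'S'},
    ],
    GlobalSecondaryIndexes=[{
        'IndexName': 'customer-date-index',
        'KeySchema': [
            {'AttributeName': 'customer_id', 'KeyType': 'HASH'},   # alternative partition key
            {'AttributeName': 'order_date', 'KeyType': 'RANGE'},   # alternative sort key
        ],
        'Projection': {'ProjectionType': 'ALL'},
        # RCU/WCU are defined on the GSI itself, separately from the table
        'ProvisionedThroughput': {'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
    }],
    ProvisionedThroughput={'ReadCapacityUnits': 5, 'WriteCapacityUnits': 5},
)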
What is a DynamoDB stream?
• When a stream is enabled on a table, it records changes to a table and stores those values for 24 hours
• A stream can be enabled on a table from the console or API
• But can only be read or processed via the streams endpoint and API requests
• streams.dynamodb.us-west-2.amazonaws.com
• AWS guarantees that each change to a DynamoDB table occurs in the stream once and only once AND…
• that ALL changes to the table occur in the stream in near real time
• Example: a Lambda function triggered when items are added to a DynamoDB stream, performing analytics on the data
• Example: a Lambda function triggered when a new user signup happens on your web app and data is entered into a users table (see the handler sketch below)
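A rough sketch of such a Lambda handler, assuming the function is wired to the table's stream through an event source mapping; it simply inspects each change record.

def lambda_handler(event, context):
    # Each record describes one change captured by the DynamoDB stream
    for record in event['Records']:
        event_name = record['eventName']   # INSERT, MODIFY, or REMOVE
        keys = record['dynamodb']['Keys']
        print(event_name, 'on item with keys', keys)
        if event_name == 'INSERT':
            # NewImage is present when the stream view type includes new images
            new_image = record['dynamodb'].get('NewImage', {})
            print('New item:', new_image)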
How will you Put Item in DynamoDB through Boto3?
import boto3
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('employees')
table.put_item(
    Item={
        'emp_id': '3',
        'name': 'vikas',
        'salary': 2000
    }
)
How will you get and delete item from DynamoDB through Boto3?
import boto3
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('employees')
resp = table.get_item(
    # Key is a dictionary
    Key={
        'emp_id': '3'
    }
)
print(resp['Item'])
table.delete_item(
    Key={
        'emp_id': '3'
    }
)
How will you insert batch records into Dynamodb through Boto3?
import boto3
dynamodb = boto3.resource('dynamodb')
table = dynamodb.Table('employees')
with table.batch_writer() as batch:
    for x in range(100):
        batch.put_item(
            Item={
                'emp_id': str(x),
                'name': 'Name-{}'.format(x)
            }
        )