How will you create an RDS DB subnet group through the AWS CLI?
create-db-subnet-group --db-subnet-group-name <value> --db-subnet-group-description <value> --subnet-ids <value> [--tags <value>] [--cli-input-json <value>] [--generate-cli-skeleton <value>]

Example:
aws rds create-db-subnet-group \
    --db-subnet-group-name cloudvikas \
    --db-subnet-group-description "cloudvikas subnet group" \
    --subnet-ids $Subnet1ID $Subnet2ID
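The optional --cli-input-json flag accepts the same parameters as a JSON document. A minimal sketch of that workflow (the subnet IDs below are placeholders, not real resources; normally you would start from the output of --generate-cli-skeleton):

```shell
# Write a parameter skeleton for create-db-subnet-group by hand
# (subnet IDs here are hypothetical placeholders)
cat > subnet-group.json <<'EOF'
{
    "DBSubnetGroupName": "cloudvikas",
    "DBSubnetGroupDescription": "cloudvikas subnet group",
    "SubnetIds": ["subnet-11111111", "subnet-22222222"]
}
EOF
# Validate the JSON locally before passing it to the CLI with:
#   aws rds create-db-subnet-group --cli-input-json file://subnet-group.json
python3 -m json.tool subnet-group.json
```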
Explain a few points about AWS Availability Zones.
- In AWS, each region has multiple Availability Zones (usually 3; the minimum is 2 and the maximum is 6).
- Each Availability Zone (AZ) is one or more discrete data centers with redundant power, networking, and connectivity.
- AZs are physically separate from each other, so they are isolated from disasters.
- They are connected to each other with high-bandwidth, ultra-low-latency networking.
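You can list the AZs in a region with `aws ec2 describe-availability-zones`. A sketch of inspecting that response offline — the JSON below is a trimmed, hypothetical sample, not live output:

```shell
# Trimmed, hypothetical sample of a describe-availability-zones response
cat > azs.json <<'EOF'
{
    "AvailabilityZones": [
        {"ZoneName": "us-east-1a", "State": "available", "RegionName": "us-east-1"},
        {"ZoneName": "us-east-1b", "State": "available", "RegionName": "us-east-1"},
        {"ZoneName": "us-east-1c", "State": "available", "RegionName": "us-east-1"}
    ]
}
EOF
# Extract just the zone names, as --query 'AvailabilityZones[].ZoneName' would
python3 -c "import json; print(*(z['ZoneName'] for z in json.load(open('azs.json'))['AvailabilityZones']))"
```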
How to create an RDS DB cluster parameter group using the AWS CLI?
aws rds create-db-cluster-parameter-group \
    --db-cluster-parameter-group-name cloudvikas \
    --db-parameter-group-family aurora-postgresql10 \
    --description "cloudvikas DB Cluster parameter group"
How to create a VPC security group for the database?
DBSecurityGroupId=$(aws ec2 create-security-group \
    --group-name AWScloudvikas \
    --description "Aurora Serverless vikas Security Group" \
    --vpc-id $VPCId --output text --query GroupId)
How to create a database cluster using the CLI?
aws rds create-db-cluster \
    --db-cluster-identifier cloudvikasdb \
    --engine aurora-postgresql \
    --engine-mode serverless \
    --engine-version 10.16 \
    --db-cluster-parameter-group-name cloudvikasdbparamgroup \
    --master-username user \
    --master-user-password $MasterPassword \
    --db-subnet-group-name cloudvikasdbsubnetgroup \
    --vpc-security-group-ids $DBSecurityGroupId
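The $MasterPassword variable must be set before running the command above. A sketch for generating one locally (the character set is an assumption based on the RDS rule that master passwords may not contain '/', '@', '"', or spaces):

```shell
# Generate a 20-character master password from /dev/urandom, keeping only
# characters RDS accepts (no '/', '@', '"', or spaces)
MasterPassword=$(LC_ALL=C tr -dc 'A-Za-z0-9!#%^*' < /dev/urandom | head -c 20)
echo "${#MasterPassword}"    # length check: 20
```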
How to delete the RDS database cluster?
aws rds delete-db-cluster \
    --db-cluster-identifier cloudvikas01 \
    --skip-final-snapshot
How to delete the RDS Subnet Group?
aws rds delete-db-subnet-group \
    --db-subnet-group-name cloudvikas01
How to delete the security group for the database?
aws ec2 delete-security-group \
    --group-id $DBSecurityGroupId01
You need a cost-effective solution to store a large collection of video files, along with a fully managed data warehouse service that can keep track of and analyze all your data efficiently using your existing business intelligence tools. How will you fulfill these requirements?
Answer : Store the data in Amazon S3 and reference its location in Amazon Redshift. Amazon Redshift will keep track of metadata about your binary objects, but the large objects themselves will be stored in Amazon S3.
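A sketch of what such a metadata table could look like (the table, columns, and bucket name are hypothetical); the DDL is written to a file here so the statement can be inspected without a live cluster:

```shell
# Hypothetical Redshift DDL: metadata lives in Redshift, the video bytes in S3
cat > video_assets.sql <<'EOF'
CREATE TABLE video_assets (
    asset_id     BIGINT IDENTITY(0,1),
    title        VARCHAR(256),
    duration_sec INTEGER,
    s3_location  VARCHAR(1024)  -- e.g. s3://my-video-bucket/raw/clip001.mp4
);
EOF
cat video_assets.sql
```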
In your project, suppose your EMR cluster uses ten m4.large instances and runs 24 hours per day, but it is only used for processing and reporting during working hours. How will you reduce the costs?
Answer : We can use Spot Instances for task nodes when needed, migrate the data from HDFS to S3 using S3DistCp, and turn off the cluster when it is not in use.
Your application generates a 2 KB JSON payload that needs to be queued and delivered to EC2 instances for applications. At the end of the day, the application needs to replay the data for the past 24 hours. Which service would you use for this requirement?
Answer : Amazon Kinesis Data Streams. Records are retained for 24 hours by default, so the stream can be replayed at the end of the day.
Consider you are working in a commercial delivery IoT company where you have to track coordinates from GPS-enabled devices. You receive coordinates transmitted from each device once every 8 seconds, and you need to process these coordinates in real time from multiple sources. Which tool should you use to ingest the data?
Answer : Amazon Kinesis. Amazon Kinesis Data Streams is a scalable and durable real-time data streaming service that can continuously capture gigabytes of data per second from hundreds of thousands of sources.
Which command can be used to transfer the results of a query in Redshift to Amazon S3?
Answer : UNLOAD. To unload data from database tables to a set of files in an Amazon S3 bucket, we can use the UNLOAD command with a SELECT statement. UNLOAD connects to Amazon S3 using an HTTPS connection, and Redshift splits the results of the SELECT statement across a set of files, with one or more files per node slice.
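An illustrative UNLOAD statement (the table name, bucket, and IAM role ARN are placeholders); it is written to a file here so the syntax can be checked without a cluster:

```shell
# Hypothetical UNLOAD: export query results to gzipped files in S3
cat > unload.sql <<'EOF'
UNLOAD ('SELECT * FROM sales WHERE sale_date >= ''2021-01-01''')
TO 's3://my-bucket/unload/sales_'
IAM_ROLE 'arn:aws:iam::123456789012:role/RedshiftUnloadRole'
GZIP
PARALLEL ON;
EOF
cat unload.sql
```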
We have a set of web servers hosted on EC2 instances and have to push the logs from these web servers onto a suitable storage device for subsequent analysis. How will you implement this?
Answer : First we have to install and configure the Kinesis agent on the web servers. Then we have to ensure that Kinesis Data Firehose is set up to take the data and send it across to Redshift for further processing.
When estimating the cost of using EMR, which parameters should you consider?
Answer : The price of the underlying EC2 instances, the price of the EMR service, and the price of EBS storage if used.
Which services can be used for auditing S3 buckets?
Answer : CloudTrail and AWS Config. AWS CloudTrail is a service that enables governance, compliance, operational auditing, and risk auditing of your AWS account.
AWS Config is a service that enables us to assess, audit, and evaluate the configurations of AWS resources. Config continuously monitors and records AWS resource configurations and allows us to automate the evaluation of recorded configurations against desired configurations.
Which managed service can be used to deliver real-time streaming data to S3?
Answer : Kinesis Data Firehose. Amazon Kinesis Data Firehose is a fully managed service for delivering real-time streaming data to destinations such as Amazon Simple Storage Service (Amazon S3), Amazon Redshift, Amazon Elasticsearch Service (Amazon ES), Splunk, and any custom HTTP endpoint or HTTP endpoints owned by supported third-party service providers.
In your client project, there is a requirement for a vendor to have access to an S3 bucket in your account. The vendor already has an AWS (Amazon Web Services) account. How can you provide the vendor access to this bucket?
Answer : Create an S3 bucket policy that allows the vendor to read from the bucket from their AWS account. A bucket policy is a resource-based AWS Identity and Access Management policy. We can add a bucket policy to a bucket to grant other AWS accounts or IAM users access permissions for the bucket and the objects in it.
Which file format is supported in Athena by default?
Answer : Apache Parquet. Amazon Athena supports a wide variety of data formats like CSV, TSV, JSON, or text files, and also supports open-source columnar formats such as Apache ORC and Apache Parquet. Athena also supports compressed data in Snappy, Zlib, LZO, and GZIP formats.
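An illustrative Athena DDL for querying Parquet data (the table name, columns, and S3 location are placeholders), written to a file so the statement can be inspected without an Athena workgroup:

```shell
# Hypothetical Athena external table over Parquet files in S3
cat > athena_table.sql <<'EOF'
CREATE EXTERNAL TABLE logs (
    request_id  string,
    status_code int,
    ts          timestamp
)
STORED AS PARQUET
LOCATION 's3://my-athena-data/logs/';
EOF
cat athena_table.sql
```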