AWS Certified Machine Learning – Specialty Set 8
Author: CloudVikas
Published Date: 19 March 2020

Welcome to AWS Certified Machine Learning - Specialty Set 8.

1. You need to chain together three different algorithms for a model you are creating. You need to run PCA, RCF, and LDA in succession. What is the recommended way to do this?
   A. Use an Inference Pipeline to link together these algorithms.
   B. Use Lambda Step Functions to link together the separate training jobs.
   C. Use AWS Batch to create a script that will trigger each algorithm in sequence.

2. Application Load Balancers support content-based routing and support applications that run in containers. They support a pair of industry-standard protocols (WebSocket and HTTP/2) and also provide additional visibility into the health of the target instances and containers. You have a number of application servers behind an Elastic Load Balancer, and your security team is requesting that you provide detailed information for every request received by the load balancer, including the client IP address and server responses. They would like these logs to be stored in an S3 bucket for future use. Which of the following can you use to provide this?
   A. CloudTrail Logs
   B. CloudWatch metrics
   C. Access Logs

3. You have been asked to build an automated chatbot for customer service. If the initial interaction with the customer seems negative, or the customer is upset or unhappy, you want to immediately transfer that chat session over to a live human. What is the simplest way to implement this feature?
   A. Use Amazon Comprehend to take in the customer's initial comments, then process them through Amazon Personalize to determine sentiment. If sentiment is negative, hand the chat session over to a live customer support person.
   B. Use LDA to create an NLP model that can understand the sentiment of the customer's comments. Create a Lambda function to redirect the chat session over to a live customer support person.
   C. Use Amazon Lex to take in the customer's initial comments, then process them through Amazon Comprehend to determine sentiment. If sentiment is negative, hand the chat session over to a live customer support person.

4. Which of the following services can you use to monitor API calls in AWS?
   A. Systems Manager
   B. CloudTrail
   C. CloudWatch

5. You are helping a client design a landscape for their mission-critical ML model based on DeepAR, deployed using SageMaker Hosting Services. Which of the following would you recommend they do to ensure high availability?
   A. Ensure that InitialInstanceCount is at least 2 in the endpoint production variant.
   B. Create a duplicate endpoint in another region using Amazon Forecast.
   C. Recommend that they deploy using EKS in addition to the SageMaker Hosting deployment.

6. Your company has just discovered a security breach occurred in a division separate from yours but has ordered a full review of all access logs. You have been asked to provide the last 180 days of access to the three SageMaker Hosted Services models that you manage. When you set up these deployments, you left everything default. How will you be able to respond?
   A. Use SageMaker Detailed Logging to produce a CSV file of access from the past 180 days.
   B. Use CloudWatch along with IP Insights to analyze the logs for suspicious activity from the past 180 days, then download these records.
   C. Use CloudTrail to pull a list of all access to the models for the last 90 days. Any data beyond 90 days is unavailable.
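For the CloudTrail scenario in question 6, a minimal boto3 sketch of pulling recent SageMaker-related management events from the 90-day CloudTrail event history might look like the following. The region and the time window are assumptions for illustration only.

```python
import boto3
from datetime import datetime, timedelta, timezone

# Assumed region; CloudTrail event history retains 90 days of management events.
cloudtrail = boto3.client("cloudtrail", region_name="us-east-1")

end = datetime.now(timezone.utc)
start = end - timedelta(days=90)

# Page through events emitted by the SageMaker service over the last 90 days.
paginator = cloudtrail.get_paginator("lookup_events")
for page in paginator.paginate(
    LookupAttributes=[
        {"AttributeKey": "EventSource", "AttributeValue": "sagemaker.amazonaws.com"}
    ],
    StartTime=start,
    EndTime=end,
):
    for event in page["Events"]:
        print(event["EventTime"], event["EventName"], event.get("Username", ""))
```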
7. Using the AWS CLI, you try to fetch metadata for an object in an S3 bucket but get back a 404 Not Found. You then realize the mistake and upload the file. Immediately after the upload, you try again to fetch the metadata. What are potential outcomes we might expect, and why?
   A. Because we did not use multi-part upload, we will not receive back metadata.
   B. Because the initial upload command creates an ETag header only, we will not receive back any metadata.
   C. We will receive back the requested metadata.
   D. Because the upload is not propagated fully, we will receive a 404 Not Found.
   E. Because the upload is not yet complete, we will receive a 404 Not Found.

8. Which of the following services can be used to automate technical tasks, avoid mistakes caused by human error, and ensure that processes in your organization are repeatable? (Choose 2)
   A. CodeDeploy
   B. API Gateway
   C. Elastic Beanstalk
   D. OpsWorks

9. Your newly deployed model gets heavy usage on Monday, then no usage the rest of the week. To accommodate this heavy usage, you make use of auto-scaling to adjust to the inbound request load. After several weeks in production, you notice a large number of scaled resources going unused and thus consuming money for no good reason. What might you do to resolve this?
   A. Change the cooldown period for scale-out to a lower value.
   B. Change the cooldown period for scale-in to a higher value.
   C. Manually adjust the maximum autoscale instances down to force a scale-in.

10. Your company has just established a policy that says all data must be encrypted at rest. You are currently using SageMaker to host Jupyter Notebook instances for your data scientists. What is the most direct path for you to ensure you are compliant?
   A. Migrate the Notebooks into CodeCommit and redeploy the Notebook instances on-prem using encrypted storage.
   B. Create an EC2 instance using local volume encryption, then migrate over the existing Jupyter Notebooks.
   C. Recreate the Notebook instances and select an encryption key from KMS.

11. Which feature allows you to organize your AWS resources according to user-defined tags?
   A. Resource Groups
   B. AWS Organizations
   C. IAM Groups

12. You are preparing to release an updated version of your latest machine learning model. It is provided to about 5,000 customers who use it in a SaaS capacity. You want to minimize customer disruption, minimize risk, and be sure the new model is stable before full deployment. What is the best course of action?
   A. Perform offline validation, then cut over all at once to the new version to minimize risk.
   B. Use a continuous integration process to preserve the stability of the new model and deploy in a "Big Bang" manner.
   C. Conduct an A/B test first, then use a phased rollout.

13. You need to increase the performance of your Image Classification inference endpoint and want to do so in the most cost-effective manner. What should you choose?
   A. Redeploy the endpoint using Elastic Inference added to the production variant.
   B. Offload some traffic to a less costly AWS region.
   C. Create a new endpoint deployment that uses a single-CPU instance, given the algorithm being used.
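For question 13, an Elastic Inference accelerator is attached through the endpoint configuration's production variant. A minimal boto3 sketch follows; the model name, endpoint names, instance type, and accelerator size are assumptions.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names; instance and accelerator types are assumptions.
sagemaker.create_endpoint_config(
    EndpointConfigName="image-classifier-ei-config",
    ProductionVariants=[
        {
            "VariantName": "AllTraffic",
            "ModelName": "image-classifier-model",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 2,            # two or more instances also supports the availability goal in question 5
            "AcceleratorType": "ml.eia2.medium",  # Elastic Inference accelerator attached to the variant
        }
    ],
)

# A new endpoint (or an update of an existing one) then references this configuration.
sagemaker.create_endpoint(
    EndpointName="image-classifier-ei",
    EndpointConfigName="image-classifier-ei-config",
)
```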
14. Your manager has asked you to investigate an EC2 web server hosting videos that is constantly running at over 80% CPU utilization. Which of the approaches below would you recommend to fix the issue?
   A. Create a CloudFront distribution and configure the Amazon EC2 instance as the origin.
   B. Create an Auto Scaling group from the instance using the CreateAutoScalingGroup action.
   C. Create an Elastic Load Balancer and register the EC2 instance to it.

15. You are helping a digital asset media company create a system which can automatically extract metadata from photographs submitted by freelance photographers. They want a solution that is robust, cost-effective, and flexible, but they don't want to manage lots of infrastructure. What would you recommend?
   A. Build a model using Object Detection to extract metadata from images and host it using EC2.
   B. Build a model using Image Analysis to extract metadata from images and host it using Lambda and API Gateway.
   C. Make use of Amazon Rekognition for metadata extraction.

16. Amazon ElastiCache allows you to seamlessly set up, run, and scale popular open-source compatible in-memory data stores in the cloud. Build data-intensive apps or boost the performance of your existing databases by retrieving data from high-throughput, low-latency in-memory data stores. Amazon ElastiCache can fulfil a number of roles. Choose the operations from the following list which can be implemented using ElastiCache for Redis.
   A. In-Memory Data Store
   B. Relational Data Store
   C. Sorted Sets
   D. Pub/Sub

17. AWS offers the broadest range of 15 purpose-built database services. These databases are optimized to give you the performance, scale, and availability that you need to support your most demanding workloads. Which of these database approaches would be best for storing and analyzing the complex interpersonal relationships of people involved in organized crime?
   A. ElastiCache
   B. Database on EC2
   C. Redshift
   D. DynamoDB
   E. S3
   F. Neptune

18. As part of your disaster recovery preparation, you have decided to maintain a replica of your on-site data on AWS S3 using Storage Gateway. AWS Storage Gateway is a hybrid cloud storage service that gives you on-premises access to virtually unlimited cloud storage. Customers use Storage Gateway to simplify storage management and reduce costs for key hybrid cloud storage use cases. Which mode should you use?
   A. Gateway Transfer Mode
   B. Gateway Cached Volume Mode
   C. Volume Gateway Cached Mode
   D. Gateway Stored Volume Mode
   E. Tape Gateway

19. John is working on EC2 instance creation for a cloud project and provides EC2 instance details to his team members. He is deploying an application on Amazon EC2 that must call AWS APIs. Which method of securely passing credentials to the application should he use?
   A. Assign IAM roles to the EC2 instances.
   B. Store the API credentials on the instance using instance metadata.
   C. Store API credentials as an object in Amazon S3.

20. A cloud-based company, cloudvikas.com, is generating large datasets with millions of rows that must be summarized by column. Which storage service meets this requirement?
   A. Amazon DynamoDB
   B. Amazon ElastiCache
   C. Amazon Redshift

21. To make use of your published model in a custom application, what must you do?
   A. Create an entry in Route 53 to point your desired DNS name to the endpoint.
   B. Use the CloudTrail API to monitor for inference requests and trigger the SageMaker model endpoint.
   C. Use the SageMaker API InvokeEndpoint() method via SDK.

22. For custom CloudWatch metrics, what is the minimum granularity in terms of time that CloudWatch can monitor?
   A. 1 minute
   B. 5 minutes
   C. 2 minutes
   D. 3 minutes
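Question 21 refers to the SageMaker runtime InvokeEndpoint API. A minimal boto3 sketch of calling a hosted endpoint is shown below; the endpoint name, content type, and payload are assumptions and should match how your model was deployed.

```python
import boto3

# Hosted SageMaker endpoints are invoked through the separate "sagemaker-runtime" client.
runtime = boto3.client("sagemaker-runtime")

# Hypothetical endpoint name and CSV payload; adjust ContentType to match the model's expected input format.
response = runtime.invoke_endpoint(
    EndpointName="my-xgboost-endpoint",
    ContentType="text/csv",
    Body="3.5,1.2,0.7,42",
)

# The prediction is returned as a streaming body.
prediction = response["Body"].read().decode("utf-8")
print(prediction)
```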
23. You want to deploy an XGBoost-backed model to a fleet of traffic sensors using Raspberry Pis as the local compute component. Will this work?
   A. No, best practice says that you should not deploy ML models into the field but rather use a centralized inference landscape.
   B. No, XGBoost cannot be compiled to run on an ARM processor. It can only run on x86 architectures.
   C. Yes, you can use SageMaker Neo to compile the model into a format that is optimized for the ARM processor on the Raspberry Pi.

24. You have decided to use SageMaker Hosting Services to deploy your newly created model. What is the next required step after you have created your model?
   A. Create an endpoint configuration.
   B. Nothing additional is required. SageMaker Hosting Services is enabled with every model created on SageMaker.
   C. Turn on CloudWatch logging for your model.

25. Which of the following services would you use to check CPU utilization of your EC2 instances?
   A. CloudFormation
   B. CloudWatch
   C. Config
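Question 23 points to SageMaker Neo for compiling a model for an edge device such as a Raspberry Pi. A hedged boto3 sketch of such a compilation job follows; the job name, IAM role, S3 locations, input shape, and target device are all assumptions, and the exact DataInputConfig depends on the trained model.

```python
import boto3

sagemaker = boto3.client("sagemaker")

# Hypothetical names and S3 locations; the input shape below assumes a single row of 4 features.
sagemaker.create_compilation_job(
    CompilationJobName="xgboost-rpi-compilation",
    RoleArn="arn:aws:iam::123456789012:role/SageMakerNeoRole",
    InputConfig={
        "S3Uri": "s3://my-bucket/models/xgboost/model.tar.gz",
        "DataInputConfig": '{"data": [1, 4]}',
        "Framework": "XGBOOST",
    },
    OutputConfig={
        "S3OutputLocation": "s3://my-bucket/models/xgboost/compiled/",
        "TargetDevice": "rasp3b",  # Raspberry Pi 3 (ARM) target
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
```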