AWS Certified Machine Learning – Specialty Set 8

1. You need to chain together three different algorithms for a model you are creating. You need to run PCA, RCF, and LDA in succession. What is the recommended way to do this?
- Use an Inference Pipeline to link together these algorithms.
- Use AWS Batch to create a script that will trigger each algorithm in sequence.
- Use Lambda Step Functions to link together the separate training jobs.

2. You are helping a digital asset media company create a system that can automatically extract metadata from photographs submitted by freelance photographers. They want a solution that is robust, cost-effective, and flexible, but they don't want to manage much infrastructure. What would you recommend?
- Build a model using Image Analysis to extract metadata from images and host it using Lambda and the API Gateway.
- Build a model using Object Detection to extract metadata from images and host it using EC2.
- Make use of Amazon Rekognition for metadata extraction.

3. You are helping a client design a landscape for their mission-critical ML model based on DeepAR, deployed using SageMaker Hosting Services. Which of the following would you recommend they do to ensure high availability?
- Recommend that they deploy using EKS in addition to the SageMaker Hosting deployment.
- Ensure that InitialInstanceCount is at least 2 in the endpoint production variant.
- Create a duplicate endpoint in another region using Amazon Forecast.

4. You need to increase the performance of your Image Classification inference endpoint and want to do so in the most cost-effective manner. What should you choose?
- Create a new endpoint deployment that uses a single-CPU instance, given the algorithm being used.
- Redeploy the endpoint using Elastic Inference added to the production variant.
- Offload some traffic to a less costly AWS region.

5. Your newly deployed model gets heavy usage on Monday, then no usage the rest of the week. To accommodate this heavy usage, you make use of auto-scaling to adjust to the inbound request load. After several weeks in production, you notice a large number of scaled resources going unused and thus consuming money for no good reason. What might you do to resolve this?
- Change the cooldown period for scale-out to a lower value.
- Change the cooldown period for scale-in to a higher value.
- Manually adjust the maximum autoscale instances down to force a scale-in.

6. You are preparing to release an updated version of your latest machine learning model. It is provided to about 3,000 customers who use it in a SaaS capacity. You want to minimize customer disruption, minimize risk, and be sure the new model is stable before full deployment. What is the best course of action?
- Use a continuous integration process to preserve the stability of the new model and deploy in a "Big Bang" manner.
- Perform offline validation, then cut over all at once to the new version to minimize risk.
- Conduct an A/B test first, then use a phased rollout.

7. You have decided to use SageMaker Hosting Services to deploy your newly created model. What is the next required step after you have created your model?
- Create an endpoint configuration.
- Nothing additional is required. SageMaker Hosting Services is enabled with every model created on SageMaker.
- Turn on CloudWatch logging for your model.

8. You have been asked to build an automated chatbot for customer service. If the initial interaction with the customer seems negative or the customer is upset or unhappy, you want to immediately transfer that chat session over to a live human. What is the simplest way to implement this feature?
- Use LDA to create an NLP model that can understand the sentiment of the customer's comments. Create a Lambda function to redirect the chat session over to a live customer support person.
- Use Amazon Comprehend to take in the customer's initial comments, then process them through Amazon Personalize to determine sentiment. If sentiment is negative, hand the chat session over to a live customer support person.
- Use Amazon Lex to take in the customer's initial comments, then process them through Amazon Comprehend to determine sentiment. If sentiment is negative, hand the chat session over to a live customer support person.

9. You want to deploy an XGBoost-backed model to a fleet of traffic sensors using Raspberry Pis as the local compute component. Will this work?
- No, XGBoost cannot be compiled to run on an ARM processor. It can only run on x86 architectures.
- No, best practice says that you should not deploy ML models into the field but rather use a centralized inference landscape.
- Yes, you can use SageMaker Neo to compile the model into a format that is optimized for the ARM processor on the Raspberry Pi.

10. To make use of your published model in a custom application, what must you do?
- Create an entry in Route 53 to point your desired DNS name to the endpoint.
- Use the SageMaker API InvokeEndpoint() method via SDK.
- Use the CloudTrail API to monitor for inference requests and trigger the SageMaker model endpoint.

11. Your company has just established a policy that says all data must be encrypted at rest. You are currently using SageMaker to host Jupyter Notebook instances for your data scientists. What is the most direct path for you to ensure you are compliant?
- Migrate the Notebooks into CodeCommit and redeploy the Notebook instances on-prem using encrypted storage.
- Create an EC2 instance using local volume encryption, then migrate over the existing Jupyter Notebooks.
- Recreate the Notebook Instances and select an encryption key from KMS.

12. Your company has just discovered a security breach occurred in a division separate from yours but has ordered a full review of all access logs.
You have been asked to provide the last 180 days of access to the three SageMaker Hosting Services models that you manage. When you set up these deployments, you left everything at the defaults. How will you be able to respond?
- Use CloudTrail to pull a list of all access to the models for the last 90 days. Any data beyond 90 days is unavailable.
- Use CloudWatch along with IP Insights to analyze the logs for suspicious activity from the past 180 days, then download these records.
- Use SageMaker Detailed Logging to produce a CSV file of access from the past 180 days.
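Several of the questions above (3, 6, and 7) revolve around the SageMaker endpoint configuration that must be created before an endpoint can be deployed. A minimal sketch of that request shape follows; the model and config names are hypothetical placeholders, and in a real deployment the dict would be passed to boto3's `sagemaker` client via `create_endpoint_config(**config)`:

```python
def make_endpoint_config(current_model, candidate_model):
    """Sketch of a CreateEndpointConfig request with two production variants:
    roughly 90% of traffic stays on the current model and 10% goes to the
    candidate (a simple A/B split), and each variant runs at least two
    instances so the loss of one instance does not take the endpoint down."""
    return {
        "EndpointConfigName": "my-model-ab-config",  # hypothetical name
        "ProductionVariants": [
            {
                "VariantName": "current",
                "ModelName": current_model,
                "InitialInstanceCount": 2,   # >= 2 for high availability
                "InstanceType": "ml.m5.large",
                "InitialVariantWeight": 0.9,
            },
            {
                "VariantName": "candidate",
                "ModelName": candidate_model,
                "InitialInstanceCount": 2,
                "InstanceType": "ml.m5.large",
                "InitialVariantWeight": 0.1,
            },
        ],
    }

config = make_endpoint_config("prod-model-v1", "prod-model-v2")
```

Shifting the variant weights over time is one way to run the phased rollout from question 6 without redeploying.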
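The chatbot handoff in question 8 (Lex captures the opening message, Comprehend scores it, negative sentiment routes to a human) can be sketched as a small routing function. The AWS call is shown only as a comment; treating MIXED sentiment as a handoff trigger is an assumption of this sketch, not part of the question:

```python
# In a real flow, the Lex fulfillment Lambda would call Comprehend, e.g.:
#   resp = boto3.client("comprehend").detect_sentiment(
#       Text=opening_message, LanguageCode="en")
# and read resp["Sentiment"], which is one of
# POSITIVE, NEGATIVE, NEUTRAL, or MIXED.

def route_session(sentiment: str) -> str:
    """Return 'human' when the customer seems unhappy, else stay with the bot.
    Routing MIXED to a human is a conservative choice made for this sketch."""
    if sentiment in ("NEGATIVE", "MIXED"):
        return "human"
    return "bot"
```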
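Question 10's answer, calling InvokeEndpoint through an SDK, looks roughly like the comment below in boto3. The endpoint name and feature layout are hypothetical; the helper just serializes one observation into the CSV body that many SageMaker built-in algorithms accept:

```python
# A custom application would call the sagemaker-runtime service, e.g.:
#   runtime = boto3.client("sagemaker-runtime")
#   resp = runtime.invoke_endpoint(EndpointName="my-endpoint",
#                                  ContentType="text/csv",
#                                  Body=body)
#   prediction = resp["Body"].read()

def to_csv_body(features):
    """Serialize one row of numeric features as a text/csv request body."""
    return ",".join(str(f) for f in features)

body = to_csv_body([0.5, 1.2, 3.0])
```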