AWS Certified Machine Learning – Specialty Set 7

Welcome to AWS Certified Machine Learning - Specialty Set 7.


1. In a regression problem, if we plot the residuals in a histogram and observe a distribution heavily skewed to the right of zero, indicating mostly positive residuals, what does this mean?
2. We want to perform automatic model tuning on our linear learner model. We have chosen the tunable hyperparameter we want to use. What is our next step?
3. In your first training job of a regression problem, you observe an RMSE of 3.4. You make some adjustments and run the training job again, which results in an RMSE of 2.2. What can you conclude from this?
4. You are designing a testing plan for an update release of your company's mission-critical loan approval model. Due to regulatory compliance, it is critical that the updates are not used in production until regression testing has shown that the updates perform at least as well as the existing model. Which validation strategy would you choose? (Choose 2)
5. A colleague is preparing for their very first training job using the XGBoost algorithm. They ask you how they can ensure that training metrics are captured during the training job. How do you direct them?
6. Which of the following metrics are recommended for tuning a Linear Learner model so that we can help avoid overfitting? (Choose 3)
7. We have just completed a validation job for a multi-class classification model that attempts to classify books into one of five genres. In reviewing the validation metrics, we observe a Macro Average F1 score of 0.28 with one genre, historic fiction, having an F1 score of 0.9. What can we conclude from this?
8. After training and validation sessions, we notice that the error rate is higher than we want for both sessions. Visualization of the data indicates that we don't seem to have any outliers. What else might we do? (Choose 3)
9. After training and validation sessions, we notice that the accuracy rate for training is acceptable but the accuracy rate for validation is very poor. What might we do? (Choose 3)
10. In your first training job of a binary classification problem, you observe an F1 score of 0.996. You make some adjustments and rerun the training job, which results in an F1 score of 0.034. What can you conclude from this? (Choose 2)
11. After multiple training runs, you notice that the loss function settles on different but similar values. You believe that there is potential to improve the model through adjusting hyperparameters. What might you try next?
12. In a binary classification problem, you observe that precision is poor. Which of the following most contribute to poor precision?
13. You are preparing for a first training run using a custom algorithm that you have prepared in a Docker container. What should you do to ensure that the training metrics are visible to CloudWatch?
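For question 1, it helps to see the residual convention in code. A minimal sketch, using hypothetical numbers, with residuals defined as actual minus predicted (the usual convention in regression diagnostics): when the residual histogram sits mostly to the right of zero, the residuals are mostly positive, which means predictions are systematically below the actual values, i.e. the model underestimates.

```python
import numpy as np

# Hypothetical targets and predictions, for illustration only.
y_true = np.array([10.0, 12.0, 9.0, 15.0, 11.0])
y_pred = np.array([8.5, 10.0, 7.5, 13.0, 9.0])

# Residual = actual - predicted.
residuals = y_true - y_pred

# All residuals positive and the mean well above zero:
# a right-skewed residual histogram, so the model under-predicts.
print(residuals)         # every value > 0
print(residuals.mean())  # positive mean
```

The same data plotted with `plt.hist(residuals)` would show the mass of the distribution to the right of zero.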
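Questions 3, 10, and 12 turn on which direction each metric moves as a model improves. A small sketch (hand-rolled formulas, no library dependencies): RMSE is an error measure, so lower is better, meaning a drop from 3.4 to 2.2 is an improvement; F1 combines precision and recall and is better closer to 1, so a fall from 0.996 to 0.034 signals something went badly wrong; and because precision = TP / (TP + FP), false positives are what drag precision down.

```python
import math

def rmse(y_true, y_pred):
    # Root mean squared error: lower is better for regression.
    n = len(y_true)
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / n)

def f1(tp, fp, fn):
    # Precision is hurt by false positives, recall by false negatives;
    # F1 is their harmonic mean, best at 1.0 and worst at 0.0.
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# Perfect predictions give RMSE of 0.
print(rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # 0.0

# Adding false positives lowers precision and therefore F1.
print(f1(tp=90, fp=5, fn=5))   # high F1
print(f1(tp=90, fp=60, fn=5))  # lower F1, dragged down by false positives
```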
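Questions 5 and 13 both concern how SageMaker captures training metrics: the training job writes metrics to stdout/stderr, those logs flow to CloudWatch, and SageMaker extracts metric values by applying regular expressions (metric definitions, each a name/regex pair) to the log stream. A sketch of the extraction mechanism alone, run locally with Python's `re`; the metric name and log line format here are made up for illustration:

```python
import re

# Metric definitions in the name/regex style SageMaker estimators accept.
# Each regex is applied to the job's log output; the first capture group
# becomes the metric value plotted in CloudWatch.
metric_definitions = [
    {"Name": "train:error", "Regex": r"train_error=([0-9\.]+)"},
]

# A log line a custom container might print to stdout during training.
log_line = "epoch=3 train_error=0.142 elapsed=12s"

match = re.search(metric_definitions[0]["Regex"], log_line)
print(float(match.group(1)))  # 0.142
```

For a custom container the key point is that the algorithm must actually print its metrics to stdout/stderr in a stable format, and the matching regexes must be supplied when the training job is defined; the built-in algorithms ship with their metric definitions already in place.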

