STUDENT

EVALUATING MODELS
Total Q: 62
Time: 70 Mins

Q 1.

Differentiate between Prediction and Reality.

Q 2.

What does the F1 Score represent in model evaluation?

Q 3.

What does a high precision value indicate in a classification model?

Q 4.

When the prediction matches the reality, the condition is termed as ______.

Q 5.

Which dataset is used to train a machine learning model?

Q 6.

You built a model to detect COVID-19 cases. Which metric is more critical to reduce missed positive cases?

Q 7.

If a model has high accuracy but poor performance on minority classes, what does this indicate?

Q 8.

What is the primary need for evaluating an AI model's performance in the AI Model Development process?

Q 9.

______ is one of the parameters for evaluating a model's performance and is defined as the fraction of positive cases that are correctly identified.

Q 10.

What is the formula to calculate classification accuracy from the confusion matrix?
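
Note: as a quick reference, here is a minimal sketch (with hypothetical counts) of how classification accuracy is computed from the four cells of a confusion matrix:

    # Hypothetical confusion-matrix counts.
    tp, tn, fp, fn = 40, 50, 5, 5

    # Accuracy = correct predictions / all predictions.
    accuracy = (tp + tn) / (tp + tn + fp + fn)
    print(accuracy)  # 0.9, i.e. 90%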

Q 11.

Which of the following statements is true for the term Evaluation?

Q 12.

What is the primary purpose of model evaluation in AI?

Q 13.

Two conditions in which the prediction matches the reality are true positive and _______

Q 14.

In a medical diagnosis model, which metric is more important to reduce incorrect identification of a healthy person as sick?

Q 15.

When would you use the F1 Score over other metrics?

Q 16.

Which one of the following scenarios results in a high false positive cost?

Q 17.

Priya was confused by the terms used in the evaluation stage. Suggest to her the term used for the percentage of correct predictions out of all the observations.

Q 18.

Raunak was learning the conditions that make up the confusion matrix. He came across a scenario in which a machine that was supposed to predict an animal always predicted "not an animal". What is this condition called?

Q 19.

Which two evaluation methods are used to calculate F1 Score?
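
Note: a minimal sketch, assuming illustrative precision and recall values, showing how the F1 Score combines the two:

    # Assumed precision and recall values, for illustration only.
    precision, recall = 0.8, 0.6

    # F1 Score is the harmonic mean of precision and recall.
    f1 = 2 * precision * recall / (precision + recall)
    print(round(f1, 3))  # 0.686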

Q 20.

Statement 1: The output given by the AI model is known as reality.
Statement 2: The real scenario is known as Prediction.

Q 21.

If a model shows 90% accuracy in an unbalanced dataset, what should you do next?

Q 22.

Prediction and Reality can be easily mapped together with the help of:

Q 23.

Which of the following is NOT a classification metric?

Q 24.

Which one of the following scenarios results in a high false negative cost?

Q 25.

While evaluating a model's performance, the recall parameter considers:
(i) False positive
(ii) True positive
(iii) False negative
(iv) True negative
Choose the correct option:
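
Note: for reference, a small sketch (hypothetical counts) showing which confusion-matrix cells the recall calculation actually uses:

    # Hypothetical counts; recall looks only at the actual-positive cases.
    tp, fn = 30, 10

    # Recall = TP / (TP + FN): the fraction of positive cases correctly identified.
    recall = tp / (tp + fn)
    print(recall)  # 0.75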

Q 26.

Statement 1: Confusion Matrix is an evaluation metric.
Statement 2: Confusion Matrix is a record which helps in evaluation.

Q 27.

What is the ideal metric to use when the dataset is unbalanced and you want to consider both FP and FN?

Q 28.

Which scenario best illustrates a False Negative (FN)?

Q 29.

F1 Score is the measure of the balance between

Q 30.

Which of the following talks about how true the predictions made by any model are?

Q 31.

What is the main purpose of a confusion matrix?

Q 32.

Rajat has made a model which predicts the performance of Indian Cricket players in upcoming matches. He collected the data of players' performance with respect to stadium, bowlers, opponent team and health. His model works with good accuracy and precision values. Which of the statements given below is incorrect?

Q 33.

What is the goal of model evaluation?

Q 34.

The Recall evaluation method is

Q 35.

In the context of model evaluation, what does a high recall indicate?

Q 36.

When evaluating a model with a highly imbalanced dataset, which metric is generally more informative than accuracy?

Q 37.

________ helps to find the best model that represents our data and how well the chosen model will work in the future.

Q 38.

For a disease detection model, which metric is more crucial to ensure that actual cases are not missed?

Q 39.

Why is the train-test split important in model evaluation?
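
Note: a minimal sketch of a train-test split, assuming scikit-learn is available; the held-out test set keeps evaluation separate from training:

    from sklearn.model_selection import train_test_split

    X = [[i] for i in range(10)]  # toy feature rows
    y = [0, 1] * 5                # toy labels

    # Hold out 20% of the data for evaluation.
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=42)
    print(len(X_train), len(X_test))  # 8 2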

Q 40.

Which of these scenarios best describes a False Negative (FN)?

Q 41.

Which of the following is NOT a correct pair in confusion matrix terminology?

Q 42.

Why is it not ideal to use the training dataset to evaluate a model?

Q 43.

Which of the following statements is not true about overfitting models?

Q 44.

Which of the following best describes model evaluation in AI?

Q 45.

____ value is known as the perfect value for F1 Score.

Q 46.

In a spam detection system, which metric is more critical to minimize the chances of important emails being marked as spam?

Q 47.

Which of the following best describes a True Positive (TP) in a confusion matrix?

Q 48.

Statement 1: To evaluate a model's performance, we need either precision or recall.
Statement 2: When the value of both Precision and Recall is 1, the F1 score is 0.

Q 49.

____________ is used to record the result of comparison between the prediction and reality. It is not an evaluation metric but a record which can help in evaluation.
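
Note: a minimal sketch, in plain Python with toy lists, of how such a record captures the comparison between prediction and reality:

    # Toy reality (actual) and prediction lists for a yes/no classifier.
    reality    = ["yes", "yes", "no", "no", "yes"]
    prediction = ["yes", "no", "no", "yes", "yes"]

    # Tally the four outcomes of comparing each prediction with reality.
    pairs = list(zip(prediction, reality))
    tp = sum(p == "yes" and r == "yes" for p, r in pairs)
    tn = sum(p == "no" and r == "no" for p, r in pairs)
    fp = sum(p == "yes" and r == "no" for p, r in pairs)
    fn = sum(p == "no" and r == "yes" for p, r in pairs)
    print(tp, tn, fp, fn)  # 2 1 1 1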

Q 50.

Which evaluation parameter takes into consideration all the correct predictions?

Q 51.

Which evaluation parameter takes into account the True Positives and False Positives?
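
Note: for reference, a small sketch (hypothetical counts) showing that precision looks only at the predicted-positive cases:

    # Hypothetical counts; precision looks only at the predicted-positive cases.
    tp, fp = 30, 10

    # Precision = TP / (TP + FP): the fraction of positive predictions that are correct.
    precision = tp / (tp + fp)
    print(precision)  # 0.75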

Q 52.

You are building a model to detect spam emails. Which metric is more important to avoid marking important emails as spam?

Q 53.

Sarthak made a face mask detector system for which he had collected a dataset and used the entire dataset to train the model. Then, he used the same data to evaluate the model, which gave the correct answer all the time but was not able to perform with an unknown dataset. Name the concept.

Q 54.

In spam email detection, which of the following will be considered as a "False Negative"?

Q 55.

Which of the following is defined as the measure of balance between precision and recall?

Q 56.

What does overfitting in a machine learning model mean?

Q 57.

The output given by the AI machine is known as ________

Q 58.

In a face recognition system used for school attendance, if the system misses some actual students, which metric should be improved?

Q 59.

What will be the outcome if the Prediction is "Yes" and it matches the Reality?
What will be the outcome if the Prediction is "Yes" and it does not match the Reality?

Q 60.

Sarthak made a face mask detector system for which he had collected a dataset and used the entire dataset to train the model. Then, he used a different dataset to evaluate the model, which gave the correct answer all the time. Name the concept.

Q 61.

Which ethical concern is most relevant during model evaluation?

Q 62.

Why is it important to consider both precision and recall in model evaluation?