STUDENT

EVALUATING MODELS
Total Q: 62
Time: 70 Mins

Q 1.

Which ethical concern is most relevant during model evaluation?

Q 2.

Which of the following statements is NOT true about overfitting models?

Q 3.

____ is known as the perfect value for the F1 Score.

Q 4.

Which dataset is used to train a machine learning model?

Q 5.

If a model has high accuracy but poor performance on minority classes, what does this indicate?

Q 6.

Which of the following best describes a True Positive (TP) in a confusion matrix?

Q 7.

In the context of model evaluation, what does a high recall indicate?

Q 8.

What does overfitting in a machine learning model mean?

Q 9.

Which of the following statements is true about the term Evaluation?

Q 10.

What is the primary need for evaluating an AI model's performance in the AI Model Development process?

Q 11.

What is the ideal metric to use when the dataset is unbalanced and you want to consider both FP and FN?

Q 12.

What will be the outcome, if the Prediction is "Yes" and it matches with the Reality?
What will be the outcome, if the Prediction is "Yes" and it does not match the Reality?

Q 13.

What is the formula to calculate classification accuracy from the confusion matrix?

Q 14.

In a spam detection system, which metric is more critical to minimize the chances of important emails being marked as spam?

Q 15.

Statement 1: The output given by the AI model is known as reality.
Statement 2: The real scenario is known as Prediction.

Q 16.

Sarthak made a face mask detector system for which he collected a dataset and used all of it to train the model. Then he used a different dataset to evaluate the model, which produced the correct answer every time. Name the concept.

Q 17.

Which of the following is defined as the measure of balance between precision and recall?

Q 18.

What is the goal of model evaluation?

Q 19.

Which of the following tells how true the predictions made by a model are?

Q 20.

Statement 1: Confusion Matrix is an evaluation metric.
Statement 2: Confusion Matrix is a record which helps in evaluation.

Q 21.

You are building a model to detect spam emails. Which metric is more important to avoid marking important emails as spam?

Q 22.

Which two evaluation methods are used to calculate the F1 Score?

Q 23.

The output given by the AI model is known as ________

Q 24.

In a medical diagnosis model, which metric is more important to reduce incorrect identification of a healthy person as sick?

Q 25.

What is the main purpose of a confusion matrix?

Q 26.

Two conditions when prediction matches with the reality are true positive and _______

Q 27.

Differentiate between Prediction and Reality.

Q 28.

Why is the train-test split important in model evaluation?

Q 29.

Which one of the following scenarios results in a high false negative cost?

Q 30.

You built a model to detect COVID-19 cases. Which metric is more critical to reduce missed positive cases?

Q 31.

______ is one of the parameters for evaluating a model's performance and is defined as the fraction of positive cases that are correctly identified.

Q 32.

Prediction and Reality can be easily mapped together with the help of:

Q 33.

____________ is used to record the result of comparison between the prediction and reality. It is not an evaluation metric but a record which can help in evaluation.

Q 34.

Which of these scenarios best describes a False Negative (FN)?

Q 35.

In a face recognition system used for school attendance, if the system misses some actual students, which metric should be improved?

Q 36.

While evaluating a model's performance, recall parameter considers
(i) False positive
(ii) True positive
(iii) False negative
(iv) True negative
Choose the correct option:

Q 37.

Which of the following is NOT a classification metric?

Q 38.

________ helps to find the best model that represents our data and how well the chosen model will work in the future.

Q 39.

If a model shows 90% accuracy in an unbalanced dataset, what should you do next?

Q 40.

Which of the following is NOT a correct pair in confusion matrix terminology?

Q 41.

Recall as an evaluation method is

Q 42.

Which evaluation parameter takes into account the True Positives and False Positives?

Q 43.

Which one of the following scenarios results in a high false positive cost?

Q 44.

Why is it not ideal to use the training dataset to evaluate a model?

Q 45.

When would you use the F1 Score over other metrics?

Q 46.

Why is it important to consider both precision and recall in model evaluation?

Q 47.

When evaluating a model with a highly imbalanced dataset, which metric is generally more informative than accuracy?

Q 48.

F1 Score is the measure of the balance between

Q 49.

What does the F1 Score represent in model evaluation?

Q 50.

Statement 1: To evaluate a model's performance, we need either precision or recall.
Statement 2: When the value of both Precision and Recall is 1, the F1 Score is 0.

Q 51.

Sarthak made a face mask detector system for which he collected a dataset and used all of it to train the model. Then he used the same data to evaluate the model, which produced the correct answer every time but failed to perform on an unknown dataset. Name the concept.

Q 52.

Rajat has made a model which predicts the performance of Indian Cricket players in upcoming matches. He collected data on the players' performance with respect to stadium, bowlers, opponent team, and health. His model works with good accuracy and precision values. Which of the statements given below is incorrect?

Q 53.

Which scenario best illustrates a False Negative (FN)?

Q 54.

When the prediction matches the reality, the condition is termed as ______.

Q 55.

Which evaluation parameter takes into consideration all the correct predictions?

Q 56.

For a disease detection model, which metric is more crucial to ensure that actual cases are not missed?

Q 57.

Which of the following best describes model evaluation in AI?

Q 58.

Raunak was learning the conditions that make up the confusion matrix. He came across a scenario in which a machine that was supposed to predict an animal always predicted "not an animal". What is this condition called?

Q 59.

What does a high precision value indicate in a classification model?

Q 60.

What is the primary purpose of model evaluation in AI?

Q 61.

In spam email detection, which of the following will be considered a "False Negative"?

Q 62.

Priya was confused by the terms used in the evaluation stage. Suggest to her the term used for the percentage of correct predictions out of all the observations.