STUDENT
EVALUATING MODELS
Total Q: 62
Time: 70 Mins
Q 1.
Differentiate between Prediction and Reality.
Prediction is the input given to the machine to receive the expected result of reality.
Prediction is the output given to match reality.
Prediction is the output given by the machine, and reality is the real scenario in which the prediction has been made.
Prediction and reality can be used interchangeably.
Q 2.
What does the F1 Score represent in model evaluation?
The average of accuracy and error
The difference between precision and recall
The harmonic mean of precision and recall
The sum of true positives and true negatives
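For reference: the F1 Score is the harmonic mean of precision and recall, not their simple average. A minimal Python sketch of the calculation, with hypothetical precision and recall values:

```python
# Hypothetical precision and recall values for illustration.
precision = 0.8
recall = 0.6

# F1 Score = 2 * (Precision * Recall) / (Precision + Recall)
f1 = 2 * precision * recall / (precision + recall)
print(round(f1, 3))  # 0.686
```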
Q 3.
What does a high precision value indicate in a classification model?
The model rarely misses positive cases
The model correctly identifies negative cases
The model makes very few false positive errors
The model is overfitted
Q 4.
When the prediction matches the reality, the condition is termed as ______.
True Positive or True Negative
True Positive or False Negative
True Positive or False Positive
False Positive and False Negative
Q 5.
Which dataset is used to train a machine learning model?
Testing dataset
Evaluation dataset
Validation dataset
Training dataset
Q 6.
You built a model to detect COVID-19 cases. Which metric is more critical to reduce missed positive cases?
Accuracy
Precision
Recall
Error
Q 7.
If a model has high accuracy but poor performance on minority classes, what does this indicate?
The model is well-balanced
The model performs equally across all classes
The model may be biased towards majority classes
The model has high recall
Q 8.
What is the primary need for evaluating an AI model's performance in the AI Model Development process?
To increase the complexity of the model.
To visualize the data.
To assess how well the chosen model will work in future.
To reduce the amount of data used for training.
Q 9.
______ is one of the parameters for evaluating a model's performance and is defined as the fraction of positive cases that are correctly identified.
Precision
Recall
Accuracy
F1 Score
Q 10.
What is the formula to calculate classification accuracy from the confusion matrix?
(TP + FN) / (TP + FP + TN + FN)
(TP + TN) / (TP + TN + FP + FN)
(FP + TN) / (TP + TN + FP + FN)
(TP + FP) / (TN + FN)
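For reference, a minimal sketch of the accuracy calculation, using hypothetical confusion-matrix counts:

```python
# Hypothetical confusion-matrix counts for a binary classifier.
TP, TN, FP, FN = 40, 45, 5, 10

# Accuracy = (TP + TN) / (TP + TN + FP + FN)
accuracy = (TP + TN) / (TP + TN + FP + FN)
print(accuracy)  # 0.85
```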
Q 11.
Which of the following statements is true for the term Evaluation?
Helps in classifying the type and genre of a document.
Helps in predicting the topic for a corpus.
Helps in understanding the reliability of any AI model.
Helps in extracting the important information out of a corpus.
Q 12.
What is the primary purpose of model evaluation in AI?
To train the model with more data
To understand the model's performance and make improvements
To visualize the data
To deploy the model into production
Q 13.
Two conditions when the prediction matches the reality are True Positive and _______
True Negative
False Positive
False Negative
Negative False
Q 14.
In a medical diagnosis model, which metric is more important to reduce incorrect identification of a healthy person as sick?
Recall
Accuracy
F1-Score
Precision
Q 15.
When would you use the F1 Score over other metrics?
When data is balanced and all predictions matter equally
When overfitting is a major issue
When you need a single measure that balances precision and recall
When only true negatives are important
Q 16.
Which one of the following scenarios results in a high false positive cost?
Viral outbreak
Forest fire
Flood
Spam filter
Q 17.
Priya was confused by the terms used in the evaluation stage. Suggest the term used for the percentage of correct predictions out of all the observations.
Accuracy
Precision
Recall
F1 Score
Q 18.
Raunak was learning the conditions that make up the confusion matrix. He came across a scenario in which the machine that was supposed to predict an animal was always predicting "not an animal". What is this condition called?
False Positive
True Positive
False Negative
True Negative
Q 19.
Which two evaluation methods are used to calculate F1 Score?
Precision and Accuracy
Precision and Recall
Accuracy and Recall
Precision and F1 Score
Q 20.
Statement 1: The output given by the AI model is known as reality.
Statement 2: The real scenario is known as Prediction.
Both Statement 1 and Statement 2 are correct
Both Statement 1 and Statement 2 are incorrect
Statement 1 is correct but Statement 2 is incorrect
Statement 2 is correct but Statement 1 is incorrect
Q 21.
If a model shows 90% accuracy in an unbalanced dataset, what should you do next?
Accept it as a good model
Use a different training algorithm
Evaluate it with precision and recall
Remove false negatives
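For reference, a minimal sketch of why high accuracy can mislead on an unbalanced dataset. The 90/10 class split and the always-negative model are hypothetical:

```python
# Hypothetical unbalanced dataset: 90 negative cases, 10 positive cases.
reality = [0] * 90 + [1] * 10
# A model that always predicts the majority (negative) class.
prediction = [0] * 100

accuracy = sum(p == r for p, r in zip(prediction, reality)) / len(reality)
tp = sum(p == 1 and r == 1 for p, r in zip(prediction, reality))
fn = sum(p == 0 and r == 1 for p, r in zip(prediction, reality))

print(accuracy)        # 0.9  -- looks impressive
print(tp / (tp + fn))  # 0.0  -- recall shows it misses every positive case
```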
Q 22.
Prediction and Reality can be easily mapped together with the help of:
Prediction
Reality
Accuracy
Confusion Matrix
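For reference, a minimal sketch of how a confusion matrix records the comparison between prediction and reality. The "Yes"/"No" labels and the sample lists are hypothetical:

```python
# Tally each (prediction, reality) pair into the four confusion-matrix cells.
prediction = ["Yes", "Yes", "No", "No", "Yes"]
reality    = ["Yes", "No",  "No", "Yes", "Yes"]

cells = {"TP": 0, "TN": 0, "FP": 0, "FN": 0}
for p, r in zip(prediction, reality):
    if p == "Yes" and r == "Yes":
        cells["TP"] += 1  # prediction matches reality (positive)
    elif p == "No" and r == "No":
        cells["TN"] += 1  # prediction matches reality (negative)
    elif p == "Yes" and r == "No":
        cells["FP"] += 1  # predicted positive, reality is negative
    else:
        cells["FN"] += 1  # predicted negative, reality is positive

print(cells)  # {'TP': 2, 'TN': 1, 'FP': 1, 'FN': 1}
```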
Q 23.
Which of the following is NOT a classification metric?
Accuracy
Recall
Mean Squared Error
Precision
Q 24.
Which one of the following scenarios results in a high false negative cost?
Viral outbreak
Mining
Copyright detection
Spam filter
Q 25.
While evaluating a model's performance, the recall parameter considers:
(i) False positive
(ii) True positive
(iii) False negative
(iv) True negative
Choose the correct option:
only (i)
(ii) and (iii)
(iii) and (iv)
(i) and (iv)
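For reference: recall uses True Positives and False Negatives, while precision uses True Positives and False Positives. A minimal sketch with hypothetical counts:

```python
# Hypothetical counts from a confusion matrix.
TP, FP, FN = 30, 10, 20

recall = TP / (TP + FN)     # fraction of actual positive cases identified
precision = TP / (TP + FP)  # fraction of positive predictions that are correct

print(recall)     # 0.6
print(precision)  # 0.75
```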
Q 26.
Statement 1 : Confusion matrix is an evaluation metric.
Statement 2 : Confusion Matrix is a record which helps in evaluation.
Both Statement 1 and Statement 2 are correct.
Both Statement 1 and Statement 2 are incorrect.
Statement 1 is correct and Statement 2 is incorrect.
Statement 2 is correct and Statement 1 is incorrect.
Q 27.
What is the ideal metric to use when the dataset is unbalanced and you want to consider both FP and FN?
Accuracy
Precision
Recall
F1 Score
Q 28.
Which scenario best illustrates a False Negative (FN)?
Predicting a non-diseased person as diseased
Predicting a diseased person as non-diseased
Correctly predicting a diseased person
Correctly predicting a non-diseased person
Q 29.
F1 Score is the measure of the balance between
Accuracy and Precision
Precision and Recall
Recall and Accuracy
Recall and Reality
Q 30.
Which of the following talks about how true the predictions of any model are?
Accuracy
Reliability
Recall
F1 score
Q 31.
What is the main purpose of a confusion matrix?
To reduce training time
To visualize data features
To evaluate classification model performance
To split data into training and testing
Q 32.
Rajat has made a model which predicts the performance of Indian cricket players in upcoming matches. He collected data on players' performance with respect to stadium, bowlers, opponent team and health. His model works with good accuracy and precision values. Which of the statements given below is incorrect?
Data gathered with respect to stadium, bowlers, opponent team and health is known as Testing Data.
Data given to an AI model to check accuracy and precision is Testing Data.
Training data and testing data are acquired in the Data Acquisition stage.
Training data is always larger as compared to testing data.
Q 33.
What is the goal of model evaluation?
To reduce the dataset size
To improve user interface design
To minimize error and maximize accuracy
To visualize training data
Q 34.
The Recall evaluation method is
defined as the fraction of positive cases that are correctly identified.
defined as the percentage of true positive cases versus all the cases where the prediction is true.
defined as the percentage of correct predictions out of all the observations.
comparison between the prediction and reality
Q 35.
In the context of model evaluation, what does a high recall indicate?
The model has a low number of false positives
The model correctly identifies most of the actual positive cases
The model has a high number of false negatives
The model's predictions are random
Q 36.
When evaluating a model with a highly imbalanced dataset, which metric is generally more informative than accuracy?
Precision
Recall
F1 Score
All of the above
Q 37.
________ helps to find the best model that represents our data and how well the chosen model will work in the future.
Problem Scoping
Data Acquisition
Data Exploration
Evaluation
Q 38.
For a disease detection model, which metric is more crucial to ensure that actual cases are not missed?
Precision
Recall
Accuracy
Specificity
Q 39.
Why is the train-test split important in model evaluation?
It allows the model to memorize data
It speeds up the training process
It helps assess model performance on unseen data
It reduces data storage size
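For reference, a minimal sketch of a train-test split using only the standard library. The 80/20 ratio is a common convention, not a fixed rule:

```python
import random

data = list(range(100))   # stand-in for a real dataset
random.shuffle(data)      # shuffle so the split is not ordered

split = int(0.8 * len(data))
train, test = data[:split], data[split:]

print(len(train), len(test))  # 80 20
```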
Q 40.
Which of these scenarios best describes a False Negative (FN)?
Model predicts cancer when person doesn't have it
Model predicts no cancer when person actually has it
Model correctly predicts no cancer
Model correctly predicts cancer
Q 41.
Which of the following is NOT a correct pair in confusion matrix terminology?
TP – Model correctly predicts positive
TN – Model incorrectly predicts negative
FP – Model wrongly predicts positive
FN – Model wrongly predicts negative
Q 42.
Why is it not ideal to use the training dataset to evaluate a model?
Because it reduces the model's performance
Because it introduces randomness
Because the model might overfit and remember the training data
Because it makes the model too complex
Q 43.
Which of the following statements is not true about overfitting models?
This model learns the pattern and noise in the data to such an extent that it harms the performance of the model on a new dataset
Training result is very good and the test result is poor
It interprets noise as patterns in the data
The training accuracy and test accuracy both are low
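For reference, a sketch of how overfitting shows up in the numbers: training accuracy stays high while test accuracy drops. This assumes scikit-learn is installed; the dataset here is synthetic:

```python
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic binary-classification data.
X, y = make_classification(n_samples=200, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# An unpruned decision tree can memorize the training data, noise included.
model = DecisionTreeClassifier(random_state=0).fit(X_train, y_train)

print(model.score(X_train, y_train))  # typically 1.0 on the training data
print(model.score(X_test, y_test))    # noticeably lower on unseen data
```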
Q 44.
Which of the following best describes model evaluation in AI?
Feeding data into a model to learn from it
Using evaluation metrics to understand a model's performance
Collecting training data from multiple sources
Applying the model to solve real-world problems
Q 45.
____ value is known as the perfect value for F1 Score.
1
2
0
100%
Q 46.
In a spam detection system, which metric is more critical to minimize the chances of important emails being marked as spam?
Recall
Precision
Accuracy
Error Rate
Q 47.
Which of the following best describes a True Positive (TP) in a confusion matrix?
The model incorrectly predicts the negative class as positive
The model correctly predicts the positive class
The model incorrectly predicts the positive class as negative
The model correctly predicts the negative class
Q 48.
Statement 1: To evaluate a model's performance, we need either precision or recall.
Statement 2: When the value of both Precision and Recall is 1, the F1 Score is 0.
Both statement 1 and statement 2 are correct.
Both statement 1 and statement 2 are incorrect.
Statement 1 is correct, but statement 2 is incorrect.
Statement 1 is incorrect, but statement 2 is correct.
Q 49.
____________ is used to record the result of the comparison between the prediction and reality. It is not an evaluation metric but a record which can help in evaluation.
Confusion Matrix
F1 Score
Precision
Accuracy
Q 50.
Which evaluation parameter takes into consideration all the correct predictions?
Accuracy
Precision
Recall
F1 Score
Q 51.
Which evaluation parameter takes into account the True Positives and False Positives?
Precision
Recall
F1 Score
Accuracy
Q 52.
You are building a model to detect spam emails. Which metric is more important to avoid marking important emails as spam?
Recall
Precision
Accuracy
Error
Q 53.
Sarthak made a face mask detector system. He collected a dataset and used all of it to train the model. Then he used the same data to evaluate the model, which gave the correct answer every time but could not perform on an unknown dataset. Name the concept.
Underfitting
Perfect Fit
Overfitting
True Positive
Q 54.
In spam email detection, which of the following will be considered a "False Negative"?
When a legitimate email is accurately identified as not spam.
When a spam email is mistakenly identified as legitimate.
When an email is accurately recognised as spam.
When an email is inaccurately labelled as important.
Q 55.
Which of the following is defined as the measure of balance between precision and recall?
Accuracy
F1 Score
Reliability
Punctuality
Q 56.
What does overfitting in a machine learning model mean?
The model performs well on new data
The model generalizes data patterns
The model memorizes the training data and performs poorly on new data
The model ignores training data
Q 57.
The output given by the AI machine is known as ________
Prediction
Reality
True
False
Q 58.
In a face recognition system used for school attendance, if the system misses some actual students, which metric should be improved?
Recall
Precision
Accuracy
Error
Q 59.
(i) What will be the outcome if the Prediction is "Yes" and it matches the Reality?
(ii) What will be the outcome if the Prediction is "Yes" and it does not match the Reality?
True Positive, True Negative
True Negative, False Negative
True Negative, False Positive
True Positive, False Positive
Q 60.
Sarthak made a face mask detector system. He collected a dataset and used all of it to train the model. Then he used a different dataset to evaluate the model, which gave the correct answer every time. Name the concept.
Perfect Fit
Underfitting
Overfitting
Correct Fit
Q 61.
Which ethical concern is most relevant during model evaluation?
Using open-source tools
Ignoring user interface design
Ensuring fairness and reducing bias in model results
Increasing model complexity
Q 62.
Why is it important to consider both precision and recall in model evaluation?
Because they are the only metrics available
Because they provide a complete picture of the model's performance, especially in imbalanced datasets
Because they are easier to calculate than accuracy
Because they are not influenced by the dataset size