DistillPrep
Select Topic
ML Fundamentals (15)
Linear Regression (15)
Logistic Regression (15)
Decision Trees (15)
Random Forest (15)
Gradient Boosting (15)
Support Vector Machines (15)
K-Nearest Neighbors (15)
Naive Bayes (14)
PCA / Dimensionality Reduction (13)
Clustering (13)
Anomaly Detection (12)
Ensemble Methods (12)
Model Evaluation and Metrics (12)
Bias-Variance Tradeoff (11)
Regularization (10)
Feature Selection and Engineering (12)
Difficulty: easy
Topic: K-Nearest Neighbors
A KNN classifier with k=1 achieves 100% training accuracy. A colleague immediately concludes the model is excellent. What is wrong with this reasoning?
A. k=1 is always the optimal value; 100% training accuracy confirms this.
B. With k=1, every training point is its own nearest neighbor, so the model always predicts the correct class for any training point. Training accuracy is therefore trivially 100% regardless of whether there is any real signal, and it reveals nothing about generalization.
C. k=1 KNN cannot achieve 100% accuracy on training data due to tie-breaking rules.
D. 100% training accuracy means the model has no variance, which is always desirable.
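To see the reasoning behind option B in action, here is a minimal sketch (assuming scikit-learn and NumPy are available) that fits a 1-NN classifier on pure-noise labels: training accuracy comes out at 100% because each training point is its own nearest neighbor, while held-out accuracy stays near chance.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(0)

# Features with essentially no signal: labels are random coin flips.
X = rng.normal(size=(500, 5))
y = rng.integers(0, 2, size=500)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

knn = KNeighborsClassifier(n_neighbors=1)
knn.fit(X_train, y_train)

# Every training point is its own nearest neighbor (distance 0),
# so the training score is 1.0 no matter what the labels are.
print("train accuracy:", knn.score(X_train, y_train))

# Held-out accuracy stays near 0.5 because there is nothing to learn.
print("test accuracy: ", knn.score(X_test, y_test))

The numbers printed by this sketch illustrate why training accuracy with k=1 is not evidence of a good model; only a held-out or cross-validated score says anything about generalization.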