DistillPrep

Skill Assessments

Validate your expertise with timed, industry-standard tests.

NumPy Core
mixed · 15 mins · 12 Questions

Arrays, dtypes, views vs copies, axis operations, and broadcasting — the foundational NumPy topics. Tests both conceptual clarity and the edge cases that trip up experienced engineers.
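The view-vs-copy distinction this card tests can be sketched in a few lines (array values are illustrative):

```python
import numpy as np

a = np.arange(6)
v = a[1:4]        # basic slicing returns a view sharing a's memory
v[0] = 99         # writing through the view mutates the original array
f = a[[1, 2, 3]]  # fancy indexing returns an independent copy
f[0] = -1         # this write does NOT touch a
```

After this runs, `a` is `[0, 99, 2, 3, 4, 5]` — the view write stuck, the copy write did not.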
Pandas Essentials
mixed · 15 mins · 12 Questions

Series, DataFrame indexing, dtypes, groupby, merge, and the traps that cause production bugs. Covers the distinction between views and copies, and every variant of SettingWithCopyWarning.
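The classic SettingWithCopyWarning trap mentioned above boils down to chained indexing versus a single `.loc` call (the DataFrame here is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"grp": ["a", "a", "b"], "val": [1, 2, 3]})

# Chained indexing like df[df["grp"] == "a"]["val"] = 0 may write to a
# temporary copy and trigger SettingWithCopyWarning. One .loc call with
# both the row mask and the column writes to the frame itself:
df.loc[df["grp"] == "a", "val"] = 0
```

The single-indexer form is reliable regardless of whether the intermediate would have been a view or a copy.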
Pandas for ML + Visualization
mixed · 15 mins · 12 Questions

Feature engineering, missing values, categorical encoding, time-series splits — then visualization: figure anatomy, confusion matrix heatmaps, learning curves, and publication-quality formatting.
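A minimal sketch of the time-series split idea this card covers — order by time and cut, never shuffle, so the future cannot leak into training (column names and the 80/20 cut are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"t": pd.date_range("2024-01-01", periods=10, freq="D"),
                   "y": range(10)})
df = df.sort_values("t")                     # order by time before splitting
cut = int(len(df) * 0.8)
train, test = df.iloc[:cut], df.iloc[cut:]   # no shuffling: test is strictly later
```

Every timestamp in `train` precedes every timestamp in `test`, which a shuffled split would violate.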
SciPy + Scikit-learn
mixed · 15 mins · 12 Questions

Statistical testing, sparse matrices, distance calculations, then the full sklearn Estimator API — Pipeline, ColumnTransformer, cross-validation, and hyperparameter search. Essential for every ML interview.
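The Pipeline-plus-cross-validation pattern at the heart of the sklearn Estimator API can be sketched as follows (synthetic data; step names are illustrative):

```python
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = make_classification(n_samples=200, n_features=10, random_state=0)
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression(max_iter=1000))])
# Because the scaler lives inside the pipeline, it is re-fit on each
# training fold -- the held-out fold never leaks into preprocessing.
scores = cross_val_score(pipe, X, y, cv=5)
```

Fitting the scaler on all of `X` before cross-validating is the leakage anti-pattern the harder mocks below probe.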
PyTorch Core
mixed · 15 mins · 12 Questions

Tensors, autograd, computation graphs, then the full training loop: zero_grad, backward, mixed precision, model.train/eval modes. The fundamentals every ML engineer must master before any PyTorch interview.
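One iteration of the zero_grad/backward/step loop this card tests, in minimal form (model shape and data are illustrative):

```python
import torch

torch.manual_seed(0)
model = torch.nn.Linear(4, 1)
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(8, 4), torch.randn(8, 1)

opt.zero_grad()                                   # clear grads left over from the last step
loss = torch.nn.functional.mse_loss(model(x), y)  # forward pass records the graph
loss.backward()                                   # autograd populates every .grad
opt.step()                                        # apply the parameter update
```

Skipping `zero_grad()` silently accumulates gradients across steps — the basis of intentional gradient accumulation, and of many unintentional bugs.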
TensorFlow & Data Pipelines
mixed · 15 mins · 12 Questions

Keras model lifecycle, tf.function tracing, BatchNorm training modes, then PyTorch DataLoader internals and tf.data pipelines — prefetch, shuffle buffers, DistributedSampler, and bottleneck detection.
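The shuffle-buffer and prefetch ideas this card covers can be sketched as a tiny tf.data pipeline (buffer size and batch size are illustrative):

```python
import tensorflow as tf

ds = (tf.data.Dataset.range(10)
      .shuffle(buffer_size=10, seed=0)  # buffer must cover the data for a full shuffle
      .batch(4)
      .prefetch(tf.data.AUTOTUNE))      # overlap input production with training
batches = [b.numpy().tolist() for b in ds]
```

A `buffer_size` smaller than the dataset only shuffles locally — a common interview trap.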
ML Libraries — Easy Interview Mock 1
easy · 12 mins · 10 Questions

Your first ML libraries interview simulation. Covers fundamentals across NumPy, Pandas, sklearn, PyTorch, and TensorFlow. Tests the concepts interviewers ask in every round — common traps included.
ML Libraries — Easy Interview Mock 2
easy · 12 mins · 10 Questions

Second easy-difficulty interview simulation. A fresh set of fundamental questions across all 12 ML library topics. Complete both Easy mocks before moving to Medium for maximum coverage.
ML Libraries — Medium Interview Mock 1
medium · 18 mins · 12 Questions

First medium-difficulty mock interview. Covers applied reasoning — broadcasting bugs, leakage in CV pipelines, groupby transform vs agg, gradient accumulation, and tf.data prefetch strategy. FAANG-style questions.
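The groupby transform-vs-agg distinction this mock probes fits in four lines (data is illustrative):

```python
import pandas as pd

df = pd.DataFrame({"g": ["a", "a", "b"], "v": [1.0, 2.0, 10.0]})
agg = df.groupby("g")["v"].agg("mean")        # one row per group: a -> 1.5, b -> 10.0
tra = df.groupby("g")["v"].transform("mean")  # group mean broadcast back to every row
```

`transform` preserves the original index and length, which is why it can be assigned straight back as a feature column while `agg` cannot.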
ML Libraries — Medium Interview Mock 2
medium · 18 mins · 12 Questions

Second medium-difficulty mock. A different angle on applied reasoning: view flags, left vs inner merge semantics, the .apply bottleneck, pairplot scaling, Welch vs Student t-test, the DataLoader crash on Windows, and BatchNorm training mode.
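The Welch-vs-Student distinction mentioned above is a single keyword argument in SciPy (the synthetic samples below deliberately have unequal variances):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=50)
b = rng.normal(loc=0.0, scale=5.0, size=50)      # same mean, very different variance

student = stats.ttest_ind(a, b)                  # assumes equal variances
welch = stats.ttest_ind(a, b, equal_var=False)   # Welch's t-test drops that assumption
```

With unequal variances or unequal sample sizes, Welch's version is the safer default.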
ML Libraries — Hard Interview Mock 1
hard · 25 mins · 15 Questions

First hard-difficulty mock. Covers int32 overflow, fancy indexing diagonal trap, temporal leakage with KFold, BFGS finite differences, custom sklearn transformer protocol, double backward error, full reproducibility checklist, GradientTape disconnected graph, and DistributedSampler epoch seeding.
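The int32 overflow trap listed above can be reproduced in three lines — NumPy integer arithmetic wraps silently rather than raising (values chosen to sit just under the int32 limit):

```python
import numpy as np

x = np.array([2_000_000_000], dtype=np.int32)
wrapped = x + x                # exceeds 2**31 - 1, so int32 arithmetic wraps negative
safe = x.astype(np.int64) + x  # upcast first; the sum now fits comfortably
```

A sum of large int32 counts going negative in production is the typical real-world symptom.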
ML Libraries — Hard Interview Mock 2
hard · 25 mins · 15 Questions

Second hard-difficulty mock. Covers reshape copy after transpose, covariance centering error, broadcast_to read-only trap, non-monotonic index slicing, concat O(N²) anti-pattern, ROC below diagonal diagnosis, nested CV selection bias, DataParallel .module access, NaN loss debugging, @tf.function retrace, scaler leakage in cross-validation, and KS test for production drift.
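The broadcast_to read-only trap named above, in minimal form (shapes are illustrative):

```python
import numpy as np

b = np.broadcast_to(np.arange(3), (4, 3))  # zero-copy view: all four rows share one buffer
# b[0, 0] = 99 would raise ValueError -- the broadcast view is read-only
writable = b.copy()                        # materialize a real 4x3 array before writing
writable[0, 0] = 99
```

The read-only flag exists because a write through the stride-0 view would silently change every "row" at once.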
ML Libraries Elite — Memory, Graphs & Production Traps
hard · 35 mins · 18 Questions

Elite assessment for senior engineers. Tests deep internals: NumPy memory layout and overflow, broadcast_to read-only semantics, fancy indexing paired vs grid selection, covariance centering, pandas Copy-on-Write, concat O(N²), sparse matrix memory math, custom sklearn clone protocol, nested CV bias, PyTorch double backward, full reproducibility sources, GradientTape disconnection, @tf.function retrace cost, and einsum contraction. Expect multi-step reasoning on every question.
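The einsum contraction topic listed above can be sketched against its matrix-product equivalent (shapes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 4))
B = rng.normal(size=(4, 5))
C = np.einsum("ik,kj->ij", A, B)  # sum over the shared index k: exactly A @ B
```

Reading the subscript string — repeated indices are contracted, omitted ones summed away — is the skill the elite questions assume.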
ML Libraries Elite — Debugging, Architecture & Edge Cases
hard · 35 mins · 18 Questions

Second elite assessment. A production-failure focused gauntlet: non-monotonic index slicing, ROC curve inversion diagnosis, twinx legend bug, Pipeline double-underscore naming, DataParallel .module access, NaN loss systematic diagnosis, DistributedSampler epoch seeding, padded_batch for variable-length sequences, scaler leakage in cross-validation, seaborn mask upper triangle, KS test for production drift, FeatureUnion vs ColumnTransformer, model.half() dtype mismatch, Keras save/load inference drift, TFRecord cache strategy, Categorical dtype add_categories trap, df.query @variable scope failure, and DataLoader __len__ contract.
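The df.query @variable scope failure listed above reduces to one rule (the DataFrame and threshold are illustrative):

```python
import pandas as pd

df = pd.DataFrame({"v": [1, 5, 9]})
threshold = 4
# Inside a query string, @name reaches into the enclosing Python scope;
# a bare `threshold` would be resolved as a column name and fail.
out = df.query("v > @threshold")
```

Forgetting the `@` raises an UndefinedVariableError rather than silently using the Python variable.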