🤖 ML Algorithm Complexity Explorer

Interactive cheatsheet for understanding time complexity of popular ML algorithms

📈 Linear Regression (OLS)
Complexity: moderate
Training: O(nm² + m³)
Inference: O(m)
Uses ordinary least squares to fit a linear relationship between features and target.
Best for: prediction with linear relationships.
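
To see where the two training terms come from, here is a minimal NumPy sketch of the normal-equations fit (toy data and names are illustrative, not from the original page): building XᵀX costs O(nm²), solving the m×m system costs O(m³), and prediction is one O(m) dot product per sample.

```python
import numpy as np

def ols_fit(X, y):
    gram = X.T @ X                          # (m, m) Gram matrix: O(n m^2)
    return np.linalg.solve(gram, X.T @ y)   # m x m linear solve: O(m^3)

def ols_predict(X, w):
    return X @ w                            # one O(m) dot product per row

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))              # n=1000 samples, m=5 features
w_true = np.array([1.0, -2.0, 0.5, 3.0, 0.0])
y = X @ w_true + rng.normal(scale=0.1, size=1000)
print(ols_fit(X, y))                        # recovers something close to w_true
```
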
📉 Linear Regression (SGD)
Complexity: low
Training: O(n_epoch × nm)
Inference: O(m)
Uses stochastic gradient descent for iterative optimization.
Best for: large datasets where batch methods are too slow.
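
A minimal sketch of the per-epoch cost, assuming plain per-sample updates with a fixed learning rate (hyperparameters are illustrative): each epoch touches all n samples with O(m) work apiece, so n_epoch epochs cost O(n_epoch × nm).

```python
import numpy as np

def sgd_fit(X, y, lr=0.01, n_epoch=100, seed=0):
    rng = np.random.default_rng(seed)
    n, m = X.shape
    w = np.zeros(m)
    for _ in range(n_epoch):            # n_epoch passes over the data
        for i in rng.permutation(n):    # n samples per pass
            err = X[i] @ w - y[i]       # O(m) residual
            w -= lr * err * X[i]        # O(m) gradient step
    return w                            # total: O(n_epoch * n * m)
```
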
📊 Logistic Regression (Binary)
Complexity: low
Training: O(n_epoch × nm)
Inference: O(m)
Binary classification using the logistic function.
Best for: binary classification problems.
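
A sketch using full-batch gradient descent (one of several optimizers that hit the stated O(n_epoch × nm) bound): each epoch evaluates the sigmoid on all n samples and forms one gradient, both O(nm), while inference is a single O(m) dot product.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logreg_fit(X, y, lr=0.1, n_epoch=100):
    n, m = X.shape
    w = np.zeros(m)
    for _ in range(n_epoch):
        p = sigmoid(X @ w)              # O(n m) forward pass
        w -= lr * (X.T @ (p - y)) / n   # O(n m) gradient step
    return w

def logreg_predict(x, w):
    return sigmoid(x @ w) > 0.5         # O(m) per sample
```
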
🎯 Logistic Regression (Multiclass)
Complexity: moderate
Training: O(n_epoch × nmc)
Inference: O(mc)
Extends logistic regression to multiple classes.
Best for: multi-class classification.
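
A softmax-regression sketch (assuming one-hot labels Y with c columns; names are illustrative): the score matrix X @ W is an (n × m)(m × c) product, so every epoch costs O(nmc), and a prediction needs c dot products of length m, i.e. O(mc).

```python
import numpy as np

def softmax_fit(X, Y, lr=0.1, n_epoch=100):
    n, m = X.shape
    c = Y.shape[1]                      # Y is one-hot, shape (n, c)
    W = np.zeros((m, c))
    for _ in range(n_epoch):
        Z = X @ W                       # (n, c) scores: O(n m c)
        P = np.exp(Z - Z.max(axis=1, keepdims=True))
        P /= P.sum(axis=1, keepdims=True)
        W -= lr * (X.T @ (P - Y)) / n   # O(n m c) gradient step
    return W

def softmax_predict(x, W):
    return np.argmax(x @ W, axis=-1)    # c dot products: O(m c)
```
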
🌳 Decision Tree
Complexity: low
Training: O(n × log(n) × m)
Inference: O(d_tree)
Tree-based model that splits data based on feature values.
Best for: interpretable models, feature selection.
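
The n log n term comes from the split search. A sketch of one node's search (a regression-style SSE criterion, chosen for brevity): sorting each feature is O(n log n) and scanning thresholds with running sums is O(n), so one node costs O(n log n × m); tree growth repeats this over the levels, and inference just walks one root-to-leaf path of length d_tree.

```python
import numpy as np

def best_split(X, y):
    n, m = X.shape
    best = (np.inf, None, None)         # (score, feature, threshold)
    for j in range(m):                  # m candidate features
        order = np.argsort(X[:, j])     # O(n log n) sort per feature
        ys = y[order]
        s, q = np.cumsum(ys), np.cumsum(ys ** 2)
        for i in range(1, n):           # O(1) SSE per threshold via running sums
            sse_l = q[i - 1] - s[i - 1] ** 2 / i
            sse_r = (q[-1] - q[i - 1]) - (s[-1] - s[i - 1]) ** 2 / (n - i)
            if sse_l + sse_r < best[0]:
                best = (sse_l + sse_r, j, X[order[i], j])
    return best
```
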
🌲 Random Forest Classifier
Complexity: high
Training: O(n_trees × n × log(n) × m)
Inference: O(n_trees × d_tree)
Ensemble of decision trees with voting.
Best for: high accuracy, robustness to overfitting.
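
In scikit-learn terms (a usage sketch, not from the original page), n_estimators plays the role of n_trees and max_depth bounds d_tree: training repeats the single-tree build n_trees times, and inference walks one root-to-leaf path per tree before voting.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# n_estimators = n_trees; max_depth bounds d_tree. Tree builds are
# independent, so n_jobs=-1 parallelizes training across cores.
clf = RandomForestClassifier(n_estimators=100, max_depth=10,
                             n_jobs=-1, random_state=0)
clf.fit(X, y)                 # O(n_trees * n log n * m)
print(clf.predict(X[:5]))     # O(n_trees * d_tree) per sample
```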

Support Vector Machines
Complexity: high
Training: O(n²m + n³)
Inference: O(m × n_sv)
Finds the optimal separating hyperplane for classification/regression.
Best for: high-dimensional data, kernel tricks.
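
The inference bound is easiest to see in the decision function. A sketch for an RBF-kernel SVM (parameter names are illustrative): each test point needs one O(m) kernel evaluation against each of the n_sv support vectors.

```python
import numpy as np

def rbf_decision(x, support_vectors, dual_coef, intercept, gamma=0.1):
    # One O(m) kernel evaluation per support vector: O(m * n_sv) total.
    k = np.exp(-gamma * ((support_vectors - x) ** 2).sum(axis=1))
    return k @ dual_coef + intercept    # sign gives the predicted class
```
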
🎪 k-Nearest Neighbors
Complexity: low
Training: O(1)
Inference: O(nm)
Lazy learning algorithm that stores all training data and defers computation to query time.
Best for: simple baselines, non-parametric problems.
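
A brute-force sketch: "training" just keeps a reference to the data, and each query computes all n distances in m dimensions, hence O(nm) inference (tree-based indexes can lower this in low dimensions).

```python
import numpy as np

def knn_predict(x, X_train, y_train, k=5):
    d = ((X_train - x) ** 2).sum(axis=1)    # n distances: O(n m)
    nearest = np.argpartition(d, k)[:k]     # k smallest, O(n)
    vals, counts = np.unique(y_train[nearest], return_counts=True)
    return vals[np.argmax(counts)]          # majority vote among k neighbors
```
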
📝 Naive Bayes
Complexity: low
Training: O(nm)
Inference: O(mc)
Probabilistic classifier based on Bayes' theorem with a feature-independence assumption.
Best for: text classification, spam filtering.
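
A multinomial Naive Bayes sketch (one common variant; the smoothing constant is illustrative): training is one pass over the n × m count matrix per class, O(nm), and prediction is a length-m dot product per class, O(mc).

```python
import numpy as np

def nb_fit(X, y, alpha=1.0):
    classes = np.unique(y)
    log_prior = np.log([(y == c).mean() for c in classes])
    counts = np.array([X[y == c].sum(axis=0) for c in classes]) + alpha
    log_lik = np.log(counts / counts.sum(axis=1, keepdims=True))  # (c, m)
    return classes, log_prior, log_lik        # fit is one O(n m) pass

def nb_predict(x, classes, log_prior, log_lik):
    return classes[np.argmax(log_prior + log_lik @ x)]  # O(m c) per sample
```
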
🎨 KMeans Clustering
Complexity: moderate
Training: O(i × k × nm)
Inference: O(km)
Partitioning method that groups data into k clusters.
Best for: customer segmentation, data exploration.
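
A sketch of Lloyd's algorithm (assuming no cluster goes empty, for brevity): each of the i iterations computes n × k distances in m dimensions and re-averages, giving O(i × k × nm) training; assigning a new point only needs its distance to the k centroids, O(km).

```python
import numpy as np

def kmeans(X, k=5, n_iter=50, seed=0):
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(n_iter):                              # i iterations
        d = ((X[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=2)
        labels = d.argmin(axis=1)                        # O(k n m) assignment
        centroids = np.array([X[labels == j].mean(axis=0)
                              for j in range(k)])        # O(n m) update
    return centroids, labels

def kmeans_assign(x, centroids):
    return ((centroids - x) ** 2).sum(axis=1).argmin()   # O(k m) per sample
```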

Variable Glossary

n        Number of training samples       e.g., 10,000 data points
m        Number of features/dimensions    e.g., 50 features
c        Number of classes                e.g., 5 categories
n_epoch  Number of training epochs        e.g., 100 epochs
n_trees  Number of trees in the forest    e.g., 100 trees
d_tree   Depth of the decision tree       e.g., 10 levels deep
n_sv     Number of support vectors        e.g., 500 support vectors
k        Number of clusters/neighbors     e.g., 5 clusters
i        Number of iterations             e.g., 50 iterations

Legend

Low complexity - Fast, scalable
Moderate complexity - Balanced
High complexity - Slower, limited scale
