Random Forest
An ensemble learning method that builds many decision trees using random subsets of data and features, then combines their predictions for robust, accurate results.
How It Works
Each tree is trained on a bootstrap sample of the data (rows drawn with replacement). At each split, only a random subset of features is considered, which decorrelates the trees. For prediction, the outputs of all trees are averaged (regression) or combined by majority vote (classification).
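The two ingredients above (bootstrap sampling plus per-split feature subsampling) can be sketched directly. This is a minimal illustration, not a production implementation: it reuses scikit-learn's DecisionTreeClassifier (whose max_features="sqrt" option handles the random feature subset at each split) and adds the bootstrap loop and majority vote by hand; the dataset, tree count, and seed are arbitrary choices for the example.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

n_trees = 25
trees = []
for _ in range(n_trees):
    # Bootstrap sample: draw n rows with replacement
    idx = rng.integers(0, len(X), size=len(X))
    # max_features="sqrt": consider only a random subset of features per split
    tree = DecisionTreeClassifier(max_features="sqrt",
                                  random_state=int(rng.integers(1_000_000)))
    tree.fit(X[idx], y[idx])
    trees.append(tree)

# Classification: majority vote across all trees
votes = np.stack([t.predict(X) for t in trees])  # shape (n_trees, n_samples)
pred = np.apply_along_axis(lambda v: np.bincount(v.astype(int)).argmax(),
                           axis=0, arr=votes)
print("training accuracy:", (pred == y).mean())
```

For regression, the only change to the combination step would be replacing the vote with np.mean over the trees' predictions.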
Strengths
Works well out of the box with minimal tuning. Handles mixed data types and nonlinear relationships; many implementations also handle missing values. Provides feature importance scores. Far less prone to overfitting than a single decision tree, because averaging many decorrelated trees reduces variance. Still competitive on tabular data despite deep learning advances.
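As an example of the feature importance scores mentioned above, here is a small sketch using scikit-learn's RandomForestClassifier, whose feature_importances_ attribute exposes impurity-based importances that sum to 1. The synthetic dataset is an assumption for illustration: with shuffle=False, its three informative features are the first three columns, so they should receive most of the importance mass.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic data: only the first 3 of 8 features carry signal
X, y = make_classification(n_samples=500, n_features=8, n_informative=3,
                           n_redundant=0, shuffle=False, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)

# Impurity-based importances, normalized to sum to 1
for i, imp in enumerate(clf.feature_importances_):
    print(f"feature {i}: {imp:.3f}")
```

Note that impurity-based importances can overstate high-cardinality features; scikit-learn also offers permutation_importance as a more robust alternative.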