Distance-Weighted KNN
- Type: Classification and Regression
- Concept: A variation of KNN where neighbors closer to the test point are given more weight in determining the output.
- How it works:
- Similar to KNN, but instead of taking a simple majority vote (for classification) or an unweighted average (for regression), each neighbor's contribution is scaled by a decreasing function of its distance to the query point.
- The closer the neighbor, the greater its influence on the prediction.
- Common weighting functions include inverse distance, w = 1/d, and Gaussian decay, w = exp(-d^2 / (2σ^2)); see the sketch after this list.
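Below is a minimal from-scratch sketch of the classification variant, assuming Euclidean distance and inverse-distance weights; the function name weighted_knn_predict and the eps guard are illustrative choices, not a reference implementation.

```python
import numpy as np

def weighted_knn_predict(X_train, y_train, x_query, k=5, eps=1e-9):
    """Classify x_query by an inverse-distance weighted vote over its k nearest neighbors."""
    # Euclidean distance from the query point to every training point.
    dists = np.linalg.norm(X_train - x_query, axis=1)
    # Indices of the k closest training points.
    nearest = np.argsort(dists)[:k]
    # Inverse-distance weights; eps keeps the weight finite when the
    # query coincides exactly with a training point (d = 0).
    weights = 1.0 / (dists[nearest] + eps)
    # Accumulate weights per class and return the class with the largest total.
    votes = {}
    for label, w in zip(y_train[nearest], weights):
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)

# Toy example: two clusters labeled 0 and 1.
X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])
print(weighted_knn_predict(X, y, np.array([0.8, 0.9]), k=3))  # -> 1
```

For regression, the same weights would feed a weighted average of the neighbors' target values instead of a vote.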
- Pros:
- Gives more importance to closer neighbors, which often improves accuracy when the nearest points are far more informative than distant ones.
- Dampens the influence of far-away neighbors, so predictions near the decision boundary are less distorted by outliers and less sensitive to the exact choice of k.
- Cons:
- Computational cost is the same as standard KNN: every prediction still requires computing distances to all training points.
- Requires a sensible choice of weighting function (and of scale parameters such as σ for Gaussian decay), and inverse-distance weights need a guard against division by zero when the query exactly matches a training point; a library-based example follows this list.
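In practice, the weighting function is usually picked from a library rather than hand-rolled. As a sketch, scikit-learn's KNeighborsClassifier accepts weights='distance' for inverse-distance weighting, or a callable for custom schemes such as Gaussian decay; the sigma value below is an arbitrary illustration.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

X = np.array([[0.0, 0.0], [0.2, 0.1], [1.0, 1.0], [0.9, 1.1]])
y = np.array([0, 0, 1, 1])

# weights='distance' applies inverse-distance weighting to the k neighbors' votes.
clf = KNeighborsClassifier(n_neighbors=3, weights="distance").fit(X, y)
print(clf.predict([[0.8, 0.9]]))  # -> [1]

# A callable receives the neighbor distances and returns matching weights,
# which allows custom schemes such as Gaussian decay.
def gaussian(d, sigma=0.5):  # sigma chosen purely for illustration
    return np.exp(-d**2 / (2 * sigma**2))

clf_g = KNeighborsClassifier(n_neighbors=3, weights=gaussian).fit(X, y)
```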
- Applications:
- The same domains as standard KNN, but especially valuable where close proximity strongly indicates similarity (e.g., medical diagnosis).