
Distance-Based Methods - KNN

- Distance-based machine learning methods classify queries by computing the distance between queries and stored exemplars, which are the examples closest to the query that influence its classification.
- K-nearest neighbours (KNN) is a common distance-based algorithm that classifies new data based on the labels of the k training examples closest in distance. It works by finding the k training samples nearest to the new data and predicting the label based on these neighbours.
- The value of k affects the algorithm's accuracy: a low value like k=1 can be noisy, while too high a value may include too many examples from other classes. There is no perfect way to select k.


1. Distance Based Methods

These methods classify queries by computing the distance between the queries and stored exemplars.

In this context, let us see how Similarity, Euclidean space, Euclidean distance, and Manhattan distance are defined and how they are used.

2. Nearest Neighbour

It is based on supervised learning and is also called KNN. Here we need to identify the closest points using a distance measure such as Euclidean distance.

3. Decision Trees

A decision tree is a tree in which each branch node represents a choice between a number of alternatives. Data is represented in the form of a tree in order to predict unseen circumstances.

4. Naïve Bayes

Naïve Bayes is a widely used method. Given a particular dataset, the Naïve Bayes algorithm is generally used for constructing classifiers, i.e., models that assign class labels to instances.
The following methods are also available for classification.

• Linear Models

In linear models, linear regression is very popular in statistics and machine learning; it assumes a linear relationship between the input variable and the output variable. Linear regression is a statistical regression method which is used for predictive analysis.

Logistic regression is another supervised learning algorithm which is used to solve classification problems. Logistic regression is used for binary classification, where the labels in the dataset are 0 or 1.

In generalized linear models we also have multiple regression analysis.
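
As a minimal illustration of the two models above (a sketch assuming scikit-learn; the small datasets and variable names are invented for this example), linear regression is fit to a continuous target and logistic regression to a binary 0/1 target:

import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

# Linear regression: continuous target with a roughly linear relationship
X = np.array([[1.0], [2.0], [3.0], [4.0], [5.0]])
y = np.array([2.1, 4.0, 6.2, 8.1, 9.9])          # approximately y = 2x
lin = LinearRegression().fit(X, y)
print("slope:", lin.coef_[0], "intercept:", lin.intercept_)
print("prediction for x=6:", lin.predict([[6.0]])[0])

# Logistic regression: binary target (0 or 1)
Xc = np.array([[0.5], [1.0], [1.5], [3.0], [3.5], [4.0]])
yc = np.array([0, 0, 0, 1, 1, 1])
log = LogisticRegression().fit(Xc, yc)
print("P(class=1 | x=2.2):", log.predict_proba([[2.2]])[0, 1])
print("predicted class for x=2.2:", log.predict([[2.2]])[0])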

• Support Vector Machine (SVM) is a supervised learning technique which is used in both classification and regression. Here nonlinearity is handled using kernel methods.

• Beyond Binary Classification is also known as multiclass classification. Here we need to treat a particular problem as a multiclass problem: there are more than two possible discrete outcomes among which each instance must be classified.
Distance-Based Machine Learning Methods

1. Similarity
Similarity is a machine learning method that uses a nearest neighbour approach to identify the similarity of two or more objects to each other based on algorithmic distance functions. In order for similarity to operate at the speed and scale required by machine learning, two critical capabilities are needed: high-speed indexing, and metric and non-metric distance functions.
Metric or non-metric distance functions can provide significant improvements in accuracy over traditional Euclidean distance.

Similarity is one of the basic building blocks of these methods.

It is a measure of how much two objects are alike.

A similarity measure is a distance whose dimensions represent the features of the objects.

Similarity is highly dependent on the domain and application.

Similarity between two objects x and y is often expressed on a scale from 0 to 1: a similarity of 1 means x = y, and 0 means the objects are completely dissimilar.
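
One simple convention, used here purely for illustration (the function name and the 1/(1 + d) mapping are assumptions, not something fixed by the notes), is to derive a similarity score in (0, 1] from a distance:

import math

def similarity(x, y):
    # Map the Euclidean distance between two feature vectors to a score in (0, 1];
    # identical objects (x = y) give a similarity of exactly 1.0.
    return 1.0 / (1.0 + math.dist(x, y))

print(similarity([1.0, 2.0], [1.0, 2.0]))  # 1.0, since x = y
print(similarity([1.0, 2.0], [4.0, 6.0]))  # smaller value for less similar objects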

• Distance-based methods are machine learning algorithms that classify queries by computing the distance between the queries and a number of stored exemplars.

Exemplars are the stored examples closest to the query that have the largest influence on the classification assigned to it. (In other words, given a particular query, the exemplars are the closest stored examples, and they are the ones with the largest influence on the classification assigned to that query.)
Euclidean space is the fundamental space of classical geometry. Originally, it was the three-
dimensional space of Euclidean geometry, but in modern mathematics there are Euclidean
spaces of any nonnegative integer dimension, including the three-dimensional space and
the Euclidean plane (dimension two).

Distance-based models are the second class of Geometric models.

As the name implies, distance-based models work on the concept of distance.

In the context of Machine learning, the concept of distance is not based on merely the
physical distance between two points. Instead, we could think of the distance between two
points considering the mode of transport between two points. Travelling between two cities
by plane covers less distance physically than by train because a plane is unrestricted.
Similarly, in chess, the concept of distance depends on the piece used – for example, a
Bishop can move diagonally.

Thus, depending on the entity and the mode of travel, the concept of distance can be
experienced differently.

The distance metrics commonly used are Euclidean, Minkowski, and Manhattan. Distance is applied through the concept of neighbours and exemplars. Neighbours are points in proximity with respect to the distance measure expressed through exemplars. Exemplars are either centroids that find a centre of mass according to a chosen distance metric or medoids that find the most centrally located data point. The most commonly used centroid is the arithmetic mean, which minimises squared Euclidean distance to all other points.
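
For illustration, here is a minimal sketch of these three metrics in Python (the function names and example vectors are invented for this example). Euclidean and Manhattan distance are the special cases p = 2 and p = 1 of the Minkowski distance:

def minkowski_distance(x, y, p):
    # General Minkowski distance of order p between two equal-length vectors
    return sum(abs(a - b) ** p for a, b in zip(x, y)) ** (1.0 / p)

def euclidean_distance(x, y):
    # Minkowski with p = 2: straight-line distance
    return minkowski_distance(x, y, 2)

def manhattan_distance(x, y):
    # Minkowski with p = 1: sum of absolute coordinate differences
    return minkowski_distance(x, y, 1)

a, b = [1.0, 2.0, 3.0], [4.0, 6.0, 3.0]
print(euclidean_distance(a, b))    # 5.0
print(manhattan_distance(a, b))    # 7.0
print(minkowski_distance(a, b, 3))
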
Notes:

• The centroid represents the geometric centre of a plane figure, i.e., the arithmetic mean position of all the points in the figure. This definition extends to any object in n-dimensional space: its centroid is the mean position of all the points.

•Medoids are similar in concept to means or centroids. Medoids are most commonly used on
data when a mean or centroid cannot be defined. They are used in contexts where the centroid
is not representative of the dataset, such as in image data. Examples of distance-based models
include the nearest-neighbour models, which use the training data as exemplars – for
example, in classification. The K-means clustering algorithm also uses exemplars to create
clusters of similar data points.
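
To make the distinction concrete, here is a small sketch (assuming NumPy, with an invented 2-D dataset) computing the centroid as the arithmetic mean and the medoid as the data point with the smallest total Euclidean distance to the others:

import numpy as np

points = np.array([[1.0, 1.0], [2.0, 1.5], [2.5, 3.0], [8.0, 8.0]])

# Centroid: arithmetic mean of all points (may not coincide with any data point)
centroid = points.mean(axis=0)

# Medoid: the actual data point minimising the sum of distances to all others
pairwise = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
medoid = points[pairwise.sum(axis=1).argmin()]

print("centroid:", centroid)
print("medoid:  ", medoid)
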
K-Nearest Neighbour (KNN) Algorithm for Machine Learning

o K-Nearest Neighbour is one of the simplest machine learning algorithms, based on the supervised learning technique.
o K-NN algorithm assumes similarity between the new case/data and the available cases and puts the new case into the category that is most similar to the available categories.
o K-NN algorithm stores all the available data and classifies a new data point based on similarity. This means that when new data appears, it can easily be classified into a well-suited category using the K-NN algorithm.
o K-NN algorithm can be used for regression as well as for classification, but mostly it is used for classification problems.
o K-NN is a non-parametric algorithm, which means it does not make any assumption about the underlying data.
o It is also called a lazy learner algorithm because it does not learn from the training set immediately; instead, it stores the dataset and performs an action on it at the time of classification.
o At the training phase the KNN algorithm just stores the dataset, and when it gets new data it classifies that data into the category most similar to the new data.
o Example: Suppose we have an image of a creature that looks similar to both a cat and a dog, but we want to know whether it is a cat or a dog. For this identification we can use the KNN algorithm, as it works on a similarity measure. Our KNN model will find the features of the new image that are similar to the cat and dog images and, based on the most similar features, will put it in either the cat or the dog category.

Why do we need a K-NN Algorithm?

Suppose there are two categories, i.e., Category A and Category B, and we have a new data point x1: in which of these categories will this data point lie? To solve this type of problem, we need a K-NN algorithm. With the help of K-NN, we can easily identify the category or class of a particular data point.
How does K-NN work?

The K-NN working can be explained on the basis of the below algorithm:

o Step-1: Select the number K of the neighbours
o Step-2: Calculate the Euclidean distance of K number of neighbours
o Step-3: Take the K nearest neighbours as per the calculated Euclidean distance.
o Step-4: Among these K neighbours, count the number of the data points in each category.
o Step-5: Assign the new data points to that category for which the number of the neighbour is
maximum.
o Step-6: Our model is ready.

Suppose we have a new data point that we need to put in the required category.

o Firstly, we will choose the number of neighbours; here we choose k = 5.
o Next, we will calculate the Euclidean distance between the data points. The Euclidean distance is the straight-line distance between two points, which we have already studied in geometry. For two points (x1, y1) and (x2, y2) it can be calculated as d = sqrt((x2 - x1)^2 + (y2 - y1)^2).

o By calculating the Euclidean distances we obtain the nearest neighbours: three nearest neighbours in Category A and two nearest neighbours in Category B.

o As we can see, three of the five nearest neighbours are from Category A, hence this new data point must belong to Category A. (A code sketch of these steps follows below.)
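
The six steps above can be sketched directly in code. The following is a minimal illustration with an invented toy dataset (not the notes' own implementation): it classifies a new point with k = 5 using Euclidean distance and a majority vote.

import math
from collections import Counter

def knn_predict(train_points, train_labels, query, k=5):
    # Steps 2-3: compute the Euclidean distance to every stored example and keep the k nearest
    distances = sorted(
        (math.dist(point, query), label) for point, label in zip(train_points, train_labels)
    )
    k_nearest_labels = [label for _, label in distances[:k]]
    # Steps 4-5: count the neighbours per category and assign the majority category
    return Counter(k_nearest_labels).most_common(1)[0][0]

# Invented toy dataset with two categories, A and B
train_points = [(1, 1), (1.5, 2), (2, 1), (2.5, 2), (6, 6), (6.5, 5.5), (7, 7)]
train_labels = ["A", "A", "A", "A", "B", "B", "B"]

print(knn_predict(train_points, train_labels, query=(2, 2), k=5))  # prints "A"
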
How to select the value of K in the K-NN Algorithm?

Below are some points to remember while selecting the value of K in the K-NN algorithm:

o There is no particular way to determine the best value for "K", so we need to try several values and find the best among them (a small cross-validation sketch follows this list). A commonly preferred starting value is K = 5.
o A very low value for K, such as K = 1 or K = 2, can be noisy and expose the model to the effects of outliers.
o Large values for K are good, but a K that is too large may include too many points from other classes.
o A rule of thumb is K < sqrt(n), where n is the number of training examples.
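
One common way to choose K in practice, sketched below under the assumption that scikit-learn and its built-in Iris dataset are available, is to compare cross-validation scores across several candidate values:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)

# Evaluate odd K values (odd values avoid ties in binary votes) with 5-fold cross-validation
for k in [1, 3, 5, 7, 9, 11]:
    scores = cross_val_score(KNeighborsClassifier(n_neighbors=k), X, y, cv=5)
    print(f"k={k}: mean accuracy = {scores.mean():.3f}")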

Advantages of KNN Algorithm:

o It is simple to implement.
o It is robust to noisy training data.
o It can be more effective if the training data is large.

Disadvantages of KNN Algorithm:

o Always needs to determine the value of K, which may sometimes be complex.
o Takes more time to classify a new example (it needs to calculate and compare the distance from the new example to all stored examples).
o Needs a large number of samples for accuracy.

Common questions


Distance plays a critical role in distance-based machine learning models like KNN because it is the primary mechanism by which similarity is determined between different data points. The choice of distance metric, whether Euclidean, Manhattan, or Minkowski, affects the scaling and dimensional reflection of the data, which can impact model accuracy and performance. Common challenges associated with distance-based models include the curse of dimensionality, where high-dimensional spaces can dilute the effectiveness of distance measurements, making it difficult to distinguish between nearby and distant points. Also, the choice of K (number of nearest neighbors) in KNN is crucial, as a very small K can lead to overfitting due to noise, while a very large K might underfit by oversmoothing the decision boundary. Additionally, KNN's computational intensity increases with larger datasets, as it requires distance computation for possibly every stored instance at prediction time.

K-Nearest Neighbour (KNN) is a non-parametric, instance-based learning algorithm primarily used for classification and regression tasks. It functions by storing all available cases and classifying new data based on similarity measures, such as Euclidean distance. KNN is considered a lazy learner because it doesn't learn a model before receiving new data but rather performs computation at the classification stage. On the other hand, decision trees model data by creating a tree structure where nodes represent tests on features, branches represent the outcomes of these tests, and leaf nodes represent class labels or outcomes. Decision trees actively build a model by learning simple decision rules from the features available in the training data, making them more of an eager learning algorithm. Key differences include KNN's reliance on distance calculations, leading to a higher computation burden during classification, while decision trees precompute their model, making them faster in prediction once the model is constructed.

The choice of distance metric in the k-means clustering algorithm significantly impacts the clustering outcome as it determines how similarity between data points is measured. Commonly used metrics include Euclidean, Manhattan, and Minkowski distances, each of which measures distance differently and can lead to different cluster shapes and compositions. Euclidean distance is sensitive to scaling, which means that the range of different dimensions can affect clustering. On the other hand, Manhattan distance might be preferable in high-dimensional spaces due to its robustness against outliers. The selection of a distance metric should consider the data distribution and the importance of each feature's scale, with standardization or normalization often necessary to ensure that all dimensions contribute equally to the distance calculation. Additionally, when the dataset includes categorical data or features with different units or scales, more sophisticated metrics or data preprocessing steps might be required to ensure meaningful clustering results.
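
To illustrate the scaling point, here is a small sketch (assuming scikit-learn; the two-feature dataset is invented and deliberately has very different ranges) that standardizes features before k-means so that both dimensions contribute comparably to the Euclidean distance:

import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Feature 1 is in the thousands, feature 2 is between 0 and 1
X = np.array([[1000, 0.1], [1100, 0.9], [5000, 0.2], [5200, 0.8]])

# Without scaling, the distance is dominated by the first feature
raw = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)

# Standardize so each feature has mean 0 and unit variance, then cluster
X_std = StandardScaler().fit_transform(X)
scaled = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X_std)

print("labels without scaling:", raw)
print("labels with scaling:   ", scaled)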

In a KNN algorithm, selecting the value of 'k' is crucial as it affects the model's performance and sensitivity to noise. A small k (e.g., k=1 or k=2) can lead to a model that captures noise in the data, making predictions more sensitive and likely to overfit. Conversely, a larger k value can smooth the decision boundary, reducing variance, but if k is too large, it risks including too many outlier points, leading to underfitting. A common approach is to use cross-validation to test various k values and choose the one that offers the best performance on validation datasets. A practical rule of thumb suggests setting k to less than the square root of the number of samples. Balancing the bias-variance tradeoff is essential when choosing k, as it directly impacts the classifier's responsiveness to sparse data and its stability against transient noise.

Centroids and medoids are central concepts in cluster analysis within distance-based models. A centroid is the arithmetic mean position of all the points in a dataset or a cluster in Euclidean space and represents the geometric center of the data. It is useful in methods like K-means clustering, where the goal is to partition data into clusters such that each cluster's members are more similar to its centroid than to those of other clusters. In contrast, a medoid is the most centrally located point in a dataset, which is conceptually similar to centroids but is used in cases where mean calculations may not be appropriate, such as with categorical data or in scenarios where the data are not symmetrically distributed. Medoids are useful for algorithms like PAM (Partitioning Around Medoids) or clustering problems where using actual data points as exemplars is required. The key difference lies in centroids being an abstraction that may not reflect any actual data point, while medoids are always actual data points.

Naïve Bayes classifiers operate under the assumption that all features are independent of each other given the class label, which simplifies the computation of the conditional probabilities required for classification. This assumption simplifies the model and enhances computational efficiency, as it allows the likelihood of features to be calculated as the product of individual probabilities. However, this assumption often does not hold in real-world datasets, where features can be correlated, leading to model inaccuracies when feature dependence significantly impacts the target variable. Despite these pitfalls, naïve Bayes can perform surprisingly well even when the independence assumption is violated, though performance generally degrades with increasing feature correlation, which can lead to suboptimal decision boundaries. In domains where features exhibit strong dependencies or where detailed interactions between features dictate outcomes, ignoring these complexities could result in significant errors or reduced predictive power.
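
As a brief practical sketch (assuming scikit-learn and its Iris dataset; this is an illustration, not an example from the notes), a Gaussian Naïve Bayes classifier can be trained in a few lines; internally it combines per-feature likelihoods as a product, which is exactly the conditional-independence assumption described above:

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

# Each feature's likelihood is modelled with a per-class Gaussian and the
# per-feature likelihoods are multiplied together (the "naive" assumption).
model = GaussianNB().fit(X_train, y_train)
print("test accuracy:", model.score(X_test, y_test))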

The 'curse of dimensionality' affects distance-based methods, such as KNN, predominantly in high-dimensional spaces where the distance between points becomes increasingly uniform. This phenomenon makes distinguishing between near and distant points difficult since all points tend to look equally far from each other, thereby reducing the effectiveness of distance-based measures. This is problematic in domains with large feature sets or when using high-dimensional data like images or text. To mitigate this impact, dimensionality reduction techniques like Principal Component Analysis (PCA) or t-distributed Stochastic Neighbor Embedding (t-SNE) can be employed to reduce the number of dimensions while preserving as much variance as possible. Feature selection methods to identify and use the most informative input variables can also help. Additionally, using sophisticated distance metrics that are more robust to high dimensions, such as Mahalanobis distance, can alleviate some of the challenges posed by high-dimensional datasets.
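
A common mitigation, sketched here with scikit-learn on its handwritten-digits dataset (the choice of 16 components is arbitrary and purely illustrative), is to reduce dimensionality with PCA before applying KNN:

from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

X, y = load_digits(return_X_y=True)  # 64-dimensional pixel features

# Project onto the first 16 principal components, then classify with KNN
pipeline = make_pipeline(PCA(n_components=16), KNeighborsClassifier(n_neighbors=5))
scores = cross_val_score(pipeline, X, y, cv=5)
print("mean accuracy with PCA + KNN:", scores.mean())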

'Beyond binary classification,' or multi-class classification, extends binary classification problems to scenarios where more than two class labels are involved. This can be implemented using several strategies, such as one-vs-all (OVA) and one-vs-one (OVO) methods. OVA involves training one classifier per class, with the samples of that class as positive samples and all other samples as negatives. Meanwhile, OVO trains one classifier per pair of classes and delegates the voting outcomes to classify among multiple classes. Challenges in multi-class classification include class imbalance, higher computational overhead due to multiple classifiers, and increased complexity in error analysis. Additionally, ensuring that classifiers work cooperatively in OVA and OVO strategies to yield a coherent final prediction can be non-trivial, as discrepancies between classifier outputs can complicate the decision boundary.
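
Both strategies exist as wrappers in scikit-learn. The sketch below, an illustration on the three-class Iris dataset, wraps a binary logistic regression in one-vs-rest and one-vs-one meta-classifiers:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.multiclass import OneVsOneClassifier, OneVsRestClassifier

X, y = load_iris(return_X_y=True)  # three classes

# One-vs-rest: one binary classifier per class
ova = OneVsRestClassifier(LogisticRegression(max_iter=1000))
# One-vs-one: one binary classifier per pair of classes (3 pairs here)
ovo = OneVsOneClassifier(LogisticRegression(max_iter=1000))

print("OVA accuracy:", cross_val_score(ova, X, y, cv=5).mean())
print("OVO accuracy:", cross_val_score(ovo, X, y, cv=5).mean())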

Support Vector Machines (SVM) use kernel methods to transform non-linearly separable data into a higher-dimensional space where a linear separator (hyperplane) can be more easily found. This is achieved through mathematical functions known as kernels, which implicitly perform the mapping of input data to a higher-dimensional space without the need to compute coordinates explicitly in that space. Common kernel functions include polynomial kernels and radial basis function (RBF) kernels. The use of kernels in SVMs allows them to handle complex patterns by transforming the input space and effectively finding decision boundaries that are non-linear in the original space but linear in the transformed feature space. This method increases model flexibility and complexity, allowing SVMs to fit wide varieties of datasets; however, it also increases computational cost and the risk of overfitting, particularly with high-dimensional kernels, necessitating careful model regularization and selection.
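
As a minimal sketch (assuming scikit-learn; the two-moons dataset is a standard synthetic example of data that is not linearly separable), an RBF-kernel SVM can separate classes that a linear kernel struggles with:

from sklearn.datasets import make_moons
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# Two interleaving half-circles: not linearly separable
X, y = make_moons(n_samples=300, noise=0.2, random_state=0)

for kernel in ["linear", "rbf"]:
    scores = cross_val_score(SVC(kernel=kernel, C=1.0), X, y, cv=5)
    print(f"{kernel} kernel: mean accuracy = {scores.mean():.3f}")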

Linear regression and logistic regression are both linear models used in machine learning but are applied in different contexts. Linear regression is used for predicting continuous outcomes by fitting a linear equation to observed data, thus assuming a linear relationship between input variables and the output. Logistic regression, however, is used for classification problems where the target is categorical, specifically binary classification. It uses the logistic function to model the probability that a given input belongs to a certain class, presenting predictions as probabilities that can then be thresholded for classification. While linear regression provides direct quantitative predictions, logistic regression offers probabilities and is more suited for classification tasks due to its ability to bound outputs between 0 and 1.
