How To Prepare Your Dataset For Machine Learning in Python
By Krunal Last updated Jul 25, 2018
Machine learning is all about training your model on current data so it can predict future values, so we need the right data to train our model. In real life, we do not always have clean data to work with. If the data is not in the correct shape, we need to prepare it before we start training our model. So in this post, we will see, step by step, how to transform our initial data into training and test data. For this example, we use Python libraries like scikit-learn, NumPy, and pandas.
Content Overview
1 Prepare Dataset For Machine Learning in Python
2 Steps To Prepare The Data
3 #1: Get The Dataset
4 #2: Handle Missing Data
5 #3: Encode Categorical Data
6 #4: Split the Dataset into Training Set and Test Set
7 #5: Feature Scaling
So, we will apply all of these steps to the dataset, one by one, and prepare the final dataset on which we can apply regression and other algorithms.
Download File: patientData
Now, we need to create a project directory. So let us create it using the following commands.
mkdir predata
cd predata
Now, launch the Spyder application and navigate to your project folder. We have already moved the patientData.csv file into the folder, so you can see that file over there.
Okay, now we need to create one Python file called datapre.py and start importing
the mathematical libraries.
Write the following code inside the datapre.py file, so your file looks like this.
Remember, we are using Python 3.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 25 18:52:15 2018
@author: krunal
"""
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
Now, select the three import statements and hit Command + Enter, and you can see at the bottom right that the code runs successfully.
That means we have successfully imported the libraries. If you get an error, then the numpy, pandas, or matplotlib library is probably missing, so you need to install it, and that is it.
Okay, now write the following code after importing the libraries.
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 25 18:52:15 2018
@author: krunal
"""
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
dataset = pd.read_csv('patientData.csv')
Okay, so we have loaded our initial dataset, and you can see it here.
You can see that where a value is empty, NaN is displayed. So we need to replace it with the MEAN value of its column. So let us do that.
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, 3].values
So, here in X we have selected every column except the last one, and the last column, at index 3, becomes our Y.
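The slicing above can be checked on a toy frame (the column names here are made up for illustration; the real patientData.csv layout may differ):

```python
import pandas as pd

# Toy stand-in for patientData.csv: three feature columns plus the label.
dataset = pd.DataFrame({
    'Age': [25, 40, 31],
    'Gender': ['Female', 'Male', 'Male'],
    'Albumin': [1.1, 1.5, 2.7],
    'LiverDisease': ['Yes', 'No', 'Yes'],
})

X = dataset.iloc[:, :-1].values   # every column except the last
Y = dataset.iloc[:, 3].values     # the last column (index 3) is the label
```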
Okay, now we need to handle the missing data. We will use the Scikit-learn library.
...
So, here we use the Imputer module with the strategy 'mean' to fill the missing values with the mean of each column. Run the above lines and type X in the console. You can see something like below. Here, columns 1 and 2 have missing values, but we have written 1:3 because the upper bound is excluded; that is why we have taken 1 and 3, and it works fine. Finally, we transform the columns that have NaN values, and now we have the filled values.
Here, you can see that the missing values are filled with the mean of their particular column.
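The mean-imputation step described above can be sketched as follows. Note that the Imputer class has since been removed from scikit-learn; this sketch uses its modern replacement, SimpleImputer, on a small made-up array standing in for the patient data:

```python
import numpy as np
from sklearn.impute import SimpleImputer

# Toy feature matrix standing in for X; np.nan marks missing entries.
X = np.array([[25.0, 1.1, np.nan],
              [40.0, np.nan, 3.3],
              [31.0, 1.5, 2.7]])

# Fill NaNs in columns 1 and 2 with each column's mean,
# mirroring the X[:, 1:3] slice (upper bound excluded).
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
X[:, 1:3] = imputer.fit_transform(X[:, 1:3])
```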
So, we have handled the missing data. Now, head over to the next step.
#3: Encode Categorical data.
In our dataset, there are two categorical columns.
1. Gender
2. Liver Disease
So, we need to encode these two columns of data.
Here, we have encoded the values of the first categorical column, Gender, which has only two cases: Female and Male. After the transform, the value is 1 for Female and 0 for Male.
Run the above line and see the change in the categorical data: Female becomes 1 and Male becomes 0. One-hot encoding then creates one binary column per category and replaces the single Gender column accordingly. That is why the data goes from 3 columns to 4 columns.
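The label and one-hot encoding described here can be sketched like this, on a toy Gender column rather than the real data. One caveat: LabelEncoder orders categories alphabetically, so it maps Female to 0 and Male to 1:

```python
import numpy as np
from sklearn.preprocessing import LabelEncoder, OneHotEncoder

# Toy data standing in for the Gender column of the patient dataset.
gender = np.array(['Female', 'Male', 'Male', 'Female'])

# LabelEncoder maps each category to an integer
# (alphabetical order: Female -> 0, Male -> 1).
labels = LabelEncoder().fit_transform(gender)

# OneHotEncoder expands the single integer column into one binary
# column per category ("dummy variables").
onehot = OneHotEncoder().fit_transform(labels.reshape(-1, 1)).toarray()
```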
#4: Split the Dataset into Training Set and Test Set.
Run the code, and you get four more variables, so we have a total of seven variables.
Here, we have split both X and Y into X_train, X_test, Y_train, and Y_test.
So, you have 80% of the data in X_train and Y_train and 20% of the data in X_test and Y_test.
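The 80/20 split described above can be reproduced with scikit-learn's train_test_split (a sketch on toy arrays; random_state is an added assumption to make the shuffle reproducible):

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Toy X and Y standing in for the prepared patient data.
X = np.arange(20).reshape(10, 2)
Y = np.arange(10)

# 80% training / 20% test split.
X_train, X_test, Y_train, Y_test = train_test_split(
    X, Y, test_size=0.2, random_state=0)
```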
#5: Feature Scaling
Many machine learning algorithms are based on Euclidean distance between feature vectors. Here, the Albumin and Age columns have entirely different ranges of values, so we need to transform those values onto a comparable range. That is what feature scaling means. We need to scale the values of the Age column, so let us scale X_train and X_test.
# Feature Scaling
Here, we do not need to scale Y because it is the label column and is already on a small scale. Now run the above code and enter the following command.
Here, we can see that all the values are appropriately scaled, and you can check the X_test variable as well.
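A quick check of what StandardScaler does, on a made-up Age-like column rather than the real data: each scaled training column ends up with mean 0 and standard deviation 1, and the test data is scaled with the training set's statistics:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Toy Age-like column with a wide range of raw values.
X_train = np.array([[20.0], [30.0], [40.0], [50.0]])
X_test = np.array([[35.0]])

sc_X = StandardScaler()
X_train_scaled = sc_X.fit_transform(X_train)   # fit on training data only
X_test_scaled = sc_X.transform(X_test)         # reuse training mean/std
```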
#!/usr/bin/env python3
# -*- coding: utf-8 -*-
"""
Created on Wed Jul 25 18:52:15 2018
@author: krunal
"""
# Importing Libraries
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
# Importing Dataset
dataset = pd.read_csv('patientData.csv')
X = dataset.iloc[:, :-1].values
Y = dataset.iloc[:, 3].values
# Split the data between the Training Data and Test Data
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(X, Y, test_size=0.2, random_state=0)
# Feature Scaling
from sklearn.preprocessing import StandardScaler
sc_X = StandardScaler()
X_train = sc_X.fit_transform(X_train)
X_test = sc_X.transform(X_test)
Machine learning involves very complex computation. It all depends on how you get the data and in what condition. Based on the condition of the data, you will preprocess it and split it into training and test sets.
Basically, you have three data sets: training, validation, and testing.
You train the classifier using the training set, tune the parameters using the validation set, and then test the performance of your classifier on the unseen test set. An important point to note is that during training, only the training and/or validation set is available to the classifier. The test set must not be used during training; it only becomes available when testing the classifier.
There is no single way of choosing the size of the training/testing set, and people apply heuristics such as 10% testing and 90% training. However, doing so can bias the classification results, and the results may not generalize. A well-accepted method is N-fold cross-validation, in which you randomize the dataset and create N (almost) equal-size partitions. Then choose one partition for testing and the remaining N-1 partitions for training the classifier. Within the training set, you can further employ another K-fold cross-validation to create a validation set and find the best parameters, and repeat this process N times to get an average of the metric. Since we want to get rid of classifier bias, we repeat the above process M times (by randomizing the data and splitting it into N folds) and take the average of the metric. Cross-validation is almost unbiased, but it can also be misused, for example if the training and validation sets come from different populations, or if knowledge from the training set leaks into the test set.
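The N-fold procedure described above can be sketched with scikit-learn's KFold (toy data and 5 folds chosen as an example):

```python
import numpy as np
from sklearn.model_selection import KFold

# Toy dataset standing in for the patient data.
X = np.arange(20).reshape(10, 2)
Y = np.arange(10)

# 5-fold cross-validation: shuffle, split into 5 (almost) equal
# partitions, and use each partition once as the test fold.
kf = KFold(n_splits=5, shuffle=True, random_state=0)
fold_sizes = []
for train_idx, test_idx in kf.split(X):
    # N-1 folds train, 1 fold tests, exactly as described above.
    fold_sizes.append((len(train_idx), len(test_idx)))
```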