Python Libraries 2
Other alternatives:
◦ Text Editor + Command line
◦ IDE (Integrated Development Environment): PyCharm, VS Code, …
What is Anaconda?
The open-source Anaconda is the easiest way to perform Python/R data science
and machine learning on Linux, Windows, and Mac OS X. With over 19 million
users worldwide, it is the industry standard for developing, testing, and training
on a single machine, enabling individual data scientists to:
▪ Quickly download 7,500+ Python/R data science packages
▪ Analyze data with scalability and performance with Dask, NumPy, pandas,
and Numba
▪ Visualize results with Matplotlib, Bokeh, Datashader, and Holoviews
▪ Develop and train machine learning and deep learning models with scikit-
learn, TensorFlow, and Theano
Anaconda Installation
Please follow the instructions here to install Anaconda (for Python 3.7):
https://www.anaconda.com/distribution/#download-section
Different installers are provided for different operating systems; please select the
one you are using.
Install with the default settings, and the environment variables will be
configured automatically after installation.
What is Jupyter Notebook?
The Jupyter Notebook is an open-source web application that allows you to
create and share documents that contain live code, equations, visualizations
and narrative text. Uses include: data cleaning and transformation, numerical
simulation, statistical modeling, data visualization, machine learning, and much
more.
Python Libraries for Data Scientists
NumPy:
▪ introduces objects for multidimensional arrays and matrices, as well as
functions for advanced mathematical and statistical operations on those objects
Link: http://www.numpy.org/
Python Libraries for Data Scientists
SciPy:
▪ collection of algorithms for linear algebra, differential equations,
numerical integration, optimization, statistics and more
▪ built on NumPy
Link: https://www.scipy.org/scipylib/
Python Libraries for Data Scientists
Pandas:
▪ adds data structures and tools designed to work with table-like data
(similar to Series and Data Frames in R)
Link: http://pandas.pydata.org/
Python Libraries for Data Scientists
matplotlib:
▪ Python 2D plotting library which produces publication-quality figures in a
variety of hardcopy formats
Link: https://matplotlib.org/
Python Libraries for Data Scientists
Seaborn:
▪ based on matplotlib; provides a high-level interface for drawing attractive
statistical graphics
Link: https://seaborn.pydata.org/
Python Libraries for Data Scientists
SciKit-Learn:
▪ provides machine learning algorithms: classification, regression, clustering,
model validation etc.
Link: http://scikit-learn.org/
Loading Python Libraries
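The import cell shown on this slide is reproduced below as a minimal sketch, using the conventional aliases (all of these packages ship with Anaconda):

    # conventional imports for the libraries introduced above
    import numpy as np
    import scipy as sp
    import pandas as pd
    import matplotlib.pyplot as plt
    import seaborn as sns
    import sklearn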
Reading data using pandas
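A sketch of the read step shown on this slide; the file name Salaries.csv is a placeholder for whatever CSV file you are loading:

    import pandas as pd

    # read a CSV file into a DataFrame
    # common optional arguments: sep=',', header=0, index_col=None
    df = pd.read_csv("Salaries.csv")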
Exploring data frames
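A few exploration calls of the kind this slide demonstrated (df is the DataFrame read above):

    df.head()     # first 5 rows
    df.tail(3)    # last 3 rows
    df.shape      # (number of rows, number of columns)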
Data Frame data types

Pandas Type | Native Python Type | Description
object | string | The most general dtype. Assigned to a column if the column has mixed types (numbers and strings).
int64 | int | Integer numbers. The 64 refers to the number of bits allocated to hold the value.
float64 | float | Numbers with decimals. If a column contains numbers and NaNs, pandas defaults to float64, since the missing value may have a decimal.
datetime64, timedelta[ns] | N/A (but see the datetime module in Python's standard library) | Values meant to hold time data. Look into these for time series experiments.
Data Frame data types
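A sketch of checking column dtypes; the column name 'salary' is hypothetical:

    df.dtypes              # dtype of each column
    df['salary'].dtype     # dtype of a single column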
Data Frames attributes
Python objects have attributes and methods.
df.attribute | description
dtypes | list the types of the columns
columns | list the column names
axes | list the row labels and column names
ndim | number of dimensions
Data Frames attributes
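A sketch of these attributes in use (df as above):

    df.dtypes     # types of the columns
    df.columns    # column names
    df.axes       # row labels and column names
    df.ndim       # number of dimensions (2 for a DataFrame)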
Data Frames methods
Unlike attributes, Python methods have parentheses.
All attributes and methods can be listed with the dir() function: dir(df)

df.method() | description
head([n]), tail([n]) | first / last n rows
describe() | generate descriptive statistics (for numeric columns only)
max(), min() | return max / min values for all numeric columns
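A sketch of these methods in use:

    df.describe()              # descriptive statistics for numeric columns
    df.max(numeric_only=True)  # column-wise maxima over numeric columns
    df.min(numeric_only=True)  # column-wise minima over numeric columns
    dir(df)                    # list all attributes and methods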
Selecting a column in a Data Frame
There are two ways to select a column: method 1, bracket notation (df['column']),
and method 2, dot notation (df.column).
Note: if a column name coincides with an existing DataFrame attribute or
method, we must use method 1. E.g., since DataFrame already defines rank (the
DataFrame.rank() method), to select the column 'rank' we should use df['rank'];
we cannot use method 2, i.e., df.rank, which would return the rank attribute of
the data frame instead of the column 'rank'.
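A sketch contrasting the two methods (assuming the DataFrame has a 'rank' column):

    # method 1: bracket notation -- always safe
    df['rank']

    # method 2: dot notation -- fails here, because DataFrame
    # already defines rank(); this returns the bound method,
    # not the 'rank' column
    df.rank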
Data Frames: filtering
To subset the data we can apply Boolean indexing. This indexing is commonly
known as a filter. For example, if we want to subset the rows in which the age
value is greater than 50:
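A minimal sketch, assuming the DataFrame has an 'age' column:

    # keep only the rows where the 'age' column exceeds 50
    df_sub = df[df['age'] > 50]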
Data Frames: Slicing
There are a number of ways to subset the Data Frame:
• one or more columns
• one or more rows
• a subset of rows and columns
Data Frames: Slicing
When selecting one column, it is possible to use a single set of brackets, but the
resulting object will be a Series (not a DataFrame).
When we need to select more than one column and/or want the output to be a
DataFrame, we should use double brackets, as in the sketch below:
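A sketch with hypothetical column names 'rank' and 'salary':

    df['salary']             # single brackets  -> pandas Series
    df[['salary']]           # double brackets  -> DataFrame with one column
    df[['rank', 'salary']]   # double brackets  -> DataFrame with two columns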
Data Frames: Selecting rows
If we need to select a range of rows, we can specify the range using ":".
Notice that the first row has position 0, and the last value in the range is
omitted; so for the range 0:10, the first 10 rows are returned, with positions
starting at 0 and ending at 9.
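For example:

    df[0:10]   # rows at positions 0 through 9 (position 10 is excluded)
    df[:3]     # first three rows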
Graphics to explore the data
The Seaborn package is built on matplotlib but provides a high-level interface
for drawing attractive statistical graphics, similar to the ggplot2 library in R.
It specifically targets statistical data visualization.
Graphics

seaborn function | description
histplot | histogram
barplot | estimate of central tendency for a numeric variable
violinplot | similar to boxplot, also shows the probability density of the data
jointplot | scatterplot with marginal distributions
regplot | regression plot (scatterplot with a fitted regression line)
pairplot | pairwise relationships across a data frame
boxplot | box plot
swarmplot | categorical scatterplot
factorplot | general categorical plot (renamed to catplot in newer seaborn versions)
Draw Histogram Using Matplotlib
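A minimal matplotlib sketch; the numeric column 'salary' is hypothetical:

    import matplotlib.pyplot as plt

    plt.hist(df['salary'], bins=20)   # histogram with 20 bins
    plt.xlabel('salary')
    plt.ylabel('count')
    plt.show()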
Draw Histogram Using Seaborn
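The seaborn equivalent (histplot requires seaborn 0.11+; older versions used distplot):

    import seaborn as sns
    import matplotlib.pyplot as plt

    sns.histplot(df['salary'], bins=20, kde=True)   # histogram plus density curve
    plt.show()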
Python for Machine Learning
Machine learning: the problem setting:
In general, a learning problem considers a set of n samples of data and then tries to
predict properties of unknown data. If each sample is more than a single number and,
for instance, a multi-dimensional entry (aka multivariate data), it is said to have
several attributes or features.
Machine learning is about learning some properties of a data set and applying
them to new data. This is why a common practice in machine learning, when
evaluating an algorithm, is to split the data at hand into two sets: one that we
call the training set, on which we learn data properties, and one that we call the
testing set, on which we test these properties.
scikit-learn comes with a few standard datasets, for instance the iris and digits
datasets for classification and the Boston house prices dataset for regression.
Loading an example dataset
A dataset is a dictionary-like object that holds all the data and some metadata
about the data. This data is stored in the .data member, which is a (n_samples,
n_features) array. In the case of a supervised problem, one or more response
variables are stored in the .target member.
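A sketch using the iris dataset:

    from sklearn import datasets

    iris = datasets.load_iris()
    iris.data.shape      # (150, 4): n_samples x n_features
    iris.target.shape    # (150,): one response value per sample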
Loading an example dataset - digits
An example showing how scikit-learn can be used to recognize images of
hand-written digits.
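Loading the digits dataset:

    from sklearn import datasets

    digits = datasets.load_digits()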
Loading an example dataset - digits
For instance, in the case of the digits dataset, digits.data gives access to the
features that can be used to classify the digits samples:
and digits.target gives the ground truth for the digit dataset, that is the number
corresponding to each digit image that we are trying to learn:
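For example:

    digits.data.shape    # (1797, 64): each 8x8 image flattened into 64 features
    digits.target[:10]   # ground-truth labels for the first ten images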
Learning and predicting
In the case of the digits dataset, the task is to predict, given an image, which
digit it represents. We are given samples of each of the 10 possible classes (the
digits zero through nine) on which we fit a classifier to be able to predict the
classes to which unseen samples belong.
Learning and predicting
For now, we will consider the classifier as a black box:
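Following the scikit-learn tutorial, the black-box classifier is a support vector machine with fixed hyperparameters:

    from sklearn import svm

    # hyperparameters taken as given for now (black box)
    clf = svm.SVC(gamma=0.001, C=100.)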
Learning and predicting
For the training set, we'll use all the images from our dataset except for the
last image, which we'll reserve for prediction. We select the training set
with the [:-1] Python syntax, which produces a new array that contains all but
the last item from digits.data:
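The fit step:

    # train on all images except the last one
    clf.fit(digits.data[:-1], digits.target[:-1])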
Learning and predicting
Now you can predict new values. In this case, you’ll predict using the last image
from digits.data. By predicting, you’ll determine the image from the training set
that best matches the last image.
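The predict step:

    # predict the class of the held-out last image; [-1:] keeps the 2D shape
    clf.predict(digits.data[-1:])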
Text Processing
(English/Chinese)
Introduction
Example noisy inputs that the preprocessing steps below must handle:
• Hilarious😂 !!!!
• Want to know more. Checkout www.h2o.ai for additional information
• thnks for readin the notebook
• 香港是一個國際化的大都市 \o/ :-) ("Hong Kong is an international metropolis", followed by an emoticon)
• ９月１６日 (9月16日) (the same date in full-width and half-width forms)
Objective
• To understand the various text preprocessing steps with code examples
• Some of the common text preprocessing / cleaning steps are:
English | Chinese
Lower casing | Conversion between Full / Half width
Removal of Punctuations | Conversion between Traditional / Simplified characters
Removal of Frequent / Rare words |
• Lower casing is helpful for text featurization techniques like frequency counts
and TF-IDF, as it combines different casings of the same word, thereby reducing
duplication and giving correct counts / TF-IDF values.
• It may not be helpful for tasks like Part-of-Speech tagging (where proper
casing gives some information about nouns and so on) or Sentiment Analysis
(where upper casing can signal anger and so on).
Lower Casing
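A minimal sketch of lower casing, using one of the example sentences above; the DataFrame variant assumes a hypothetical 'text' column:

    text = "Want to know more. Checkout www.h2o.ai for additional information"
    text.lower()

    # for a whole column of strings:
    # df['text'] = df['text'].str.lower()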
Removal of Stopwords
Stopword lists are already compiled for different languages and we can safely use them. For
example, the stopword list for the English language from the NLTK package can be seen below.
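A sketch of stopword removal with NLTK (the corpus must be downloaded once):

    import nltk
    from nltk.corpus import stopwords

    nltk.download('stopwords')                 # one-time download
    STOPWORDS = set(stopwords.words('english'))

    def remove_stopwords(text):
        # drop any whitespace-separated token that is a stopword
        return " ".join(w for w in text.split() if w not in STOPWORDS)

    remove_stopwords("this is a sample sentence")   # -> 'sample sentence'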
Stemming and Lemmatization
• The goal of both stemming and lemmatization is to reduce inflectional forms, and sometimes
derivationally related forms, of a word to a common base form. For instance:
am, are, is → be; car, cars, car's, cars' → car.
• Stemming heuristically chops off word endings, while lemmatization uses a vocabulary and
morphological analysis of words to return the dictionary form (lemma). As a result,
lemmatization is generally slower than stemming, so depending on the speed requirement we
can choose to use either stemming or lemmatization.
• Advanced lemmatization takes the word's Part-of-Speech (POS) tag into account, as sketched
below.
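A sketch with NLTK's PorterStemmer and WordNetLemmatizer (wordnet must be downloaded once):

    import nltk
    from nltk.stem import PorterStemmer, WordNetLemmatizer

    nltk.download('wordnet')    # one-time download for the lemmatizer

    stemmer = PorterStemmer()
    lemmatizer = WordNetLemmatizer()

    stemmer.stem("running")                     # -> 'run'
    lemmatizer.lemmatize("running", pos="v")    # POS-aware: -> 'run'
    lemmatizer.lemmatize("running")             # default POS is noun: -> 'running'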
Removal of Emojis / Emoticons
With more and more usage of social media platforms, there is an explosion in the use of
emojis and emoticons in day-to-day life as well. We may need to remove these emojis for
some of our textual analyses.
According to Grammarist.com, an emoticon is built from keyboard characters that, when put
together in a certain way, represent a facial expression, while an emoji is an actual image:
:-) is an emoticon
😀 is an emoji
Please note again that the removal of emojis / emoticons is not always preferred; the
decision should be made based on the use case at hand.
Removal of Emojis / Emoticons
For emoji removal we use the Python package re (regular expressions), which lets us find
all substrings matching a query regular expression. For emoticon removal, the emoticons
need to be manually enumerated, as in the sketch below.
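A sketch of both removals; the emoji Unicode ranges are illustrative rather than exhaustive, and the emoticon set is a tiny hand-built sample:

    import re

    def remove_emoji(text):
        # match common emoji Unicode ranges (not exhaustive)
        emoji_pattern = re.compile(
            "["
            "\U0001F600-\U0001F64F"  # emoticons block
            "\U0001F300-\U0001F5FF"  # symbols & pictographs
            "\U0001F680-\U0001F6FF"  # transport & map symbols
            "\U0001F1E0-\U0001F1FF"  # flags
            "]+",
            flags=re.UNICODE,
        )
        return emoji_pattern.sub("", text)

    # emoticons must be enumerated by hand (tiny illustrative set)
    EMOTICONS = {":-)", ":)", ":-(", ":(", ":D"}

    def remove_emoticons(text):
        return " ".join(w for w in text.split() if w not in EMOTICONS)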
Removal of URLs / HTML tags
The next preprocessing step is to remove any URLs present in the data. For
example, if we are doing a Twitter analysis, there is a good chance that a tweet
will contain a URL, and we may need to remove it for further analysis.
Another common preprocessing technique that comes in handy in multiple places
is the removal of HTML tags. This is especially useful if we scrape data from
different websites, where we may end up with HTML strings as part of our text.
Removal of URLs / HTML tags
An alternative is to replace each URL with a special token such as '[url]', so
that the sentence remains grammatically well formed; both variants are sketched
below.
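A sketch of all three variants using re:

    import re

    def remove_urls(text):
        return re.sub(r"https?://\S+|www\.\S+", "", text)

    def replace_urls(text):
        # alternative: replace with a special token instead of deleting
        return re.sub(r"https?://\S+|www\.\S+", "[url]", text)

    def remove_html(text):
        # strip anything between angle brackets
        return re.sub(r"<.*?>", "", text)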
Conversion between Full / Half width
In Chinese text the same characters can appear in full-width or half-width
forms, e.g. ９月１６日 vs. 9月16日; converting them to one canonical form merges
such duplicates.
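A sketch using Python's standard library; NFKC normalization maps full-width characters to their half-width equivalents:

    import unicodedata

    # full-width '９月１６日' becomes half-width '9月16日'
    unicodedata.normalize("NFKC", "９月１６日")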
Yang Haoran
2022-4-1
Overview
▪Movie Review Sentiment Classification
▪ train a classification model capable of predicting whether a given movie review is positive or negative.
▪Dataset
▪ Movie-review data collected by Cornell University
▪Text Preprocessing
▪ Mainly using NLTK and regular expression
▪ Converting Text to Numbers
▪Model training and Evaluation
▪ Naïve Bayes, logistic regression…
▪ Confusion matrix…
Dataset
You can download the sentiment classification dataset from the link below or from the course website:
www.cs.cornell.edu/people/pabo/movie-review-data/review_polarity.tar.gz
You can also convert the original dataset to a pandas DataFrame, so that you can use pandas to process it.
Text Preprocessing
Notes on the regular expressions used here: r'pattern' denotes a raw string literal, and \s matches a whitespace character.
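A sketch of a cleaning function built from such patterns (the exact steps in the original notebook may differ):

    import re

    def preprocess(document):
        document = re.sub(r"\W", " ", document)              # non-word chars -> space
        document = re.sub(r"\s+[a-zA-Z]\s+", " ", document)  # drop stray single letters
        document = re.sub(r"\s+", " ", document)             # collapse whitespace (\s)
        return document.strip().lower()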
max_features: the maximum number of words to use (not the whole vocabulary, since it may be
very large); here we use the 1500 most frequently occurring words as features for training our
classifier.
min_df: the minimum number of documents that must contain a word; here we only include
words that occur in at least 5 documents.
max_df: include only words that occur in at most 70% of all the documents; words that occur
in almost every document are usually not suitable for classification because they do not
provide any unique information about the document.
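A sketch of a vectorizer configured with these parameters (documents is a list of preprocessed review strings):

    from sklearn.feature_extraction.text import CountVectorizer
    from nltk.corpus import stopwords   # requires nltk.download('stopwords')

    vectorizer = CountVectorizer(
        max_features=1500,                      # keep the 1500 most frequent words
        min_df=5,                               # word must occur in >= 5 documents
        max_df=0.7,                             # ...and in at most 70% of documents
        stop_words=stopwords.words('english'),  # drop common English stopwords
    )
    X = vectorizer.fit_transform(documents).toarray()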
Training and Evaluation
Split the whole dataset into a train and a test set, with a test fraction of 0.2.
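A sketch of the split, a Naïve Bayes fit, and confusion-matrix evaluation (X and y come from the vectorization step; the model choice is illustrative):

    from sklearn.model_selection import train_test_split
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.metrics import confusion_matrix, accuracy_score

    # y holds the 0/1 sentiment labels aligned with the rows of X
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.2, random_state=0)

    clf = MultinomialNB()          # logistic regression works similarly
    clf.fit(X_train, y_train)
    y_pred = clf.predict(X_test)

    print(confusion_matrix(y_test, y_pred))
    print(accuracy_score(y_test, y_pred))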