Statistics and Machine Learning in Python
Release 0.3 beta
1 Introduction
1.1 Python ecosystem for data-science
1.2 Introduction to Machine Learning
1.3 Data analysis methodology
2 Python language
2.1 Import libraries
2.2 Basic operations
2.3 Data types
2.4 Execution control statements
2.5 Functions
2.6 List comprehensions, iterators, etc.
2.7 Regular expression
2.8 System programming
2.9 Scripts and argument parsing
2.10 Networking
2.11 Modules and packages
2.12 Object Oriented Programming (OOP)
2.13 Exercises
3 Scientific Python
3.1 Numpy: arrays and matrices
3.2 Pandas: data manipulation
3.3 Matplotlib: data visualization
4 Statistics
4.1 Univariate statistics
4.2 Lab 1: Brain volumes study
4.3 Multivariate statistics
4.4 Time Series in python
5.8 Gradient descent
CHAPTER
ONE
INTRODUCTION
• Interpreted
• Garbage collected (which does not prevent memory leaks)
• Dynamically-typed language (Java is statically typed)
1.1.2 Anaconda
Anaconda is a Python distribution that ships with most of the common data-science tools and libraries.
Installation
1. Download anaconda (Python 3.x) http://continuum.io/downloads
2. Install it, on Linux
bash Anaconda3-2.4.1-Linux-x86_64.sh
export PATH="${HOME}/anaconda3/bin:$PATH"
Install additional packages, for instance a Qt back-end (fixes a temporary issue when running Spyder).
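A minimal example of such an install command (the package name is an assumption, adapt to your setup):

conda install pyqt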
conda list
Environments
• A conda environment is a directory that contains a specific collection of conda packages
that you have installed.
• Control packages environment for a specific purpose: collaborating with someone else,
delivering an application to your client,
• Switch between environments
List all environments:

conda info --envs
1. Create new environment
2. Activate
3. Install new package
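A typical sequence of commands for these three steps (environment and package names are placeholders):

# 1. create a new environment
conda create --name myenv python=3.7
# 2. activate it
conda activate myenv        # or "source activate myenv" with older conda versions
# 3. install a new package into it
conda install numpy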
Miniconda
Anaconda without the collection of (>700) packages. With Miniconda you download only the
packages you want with the conda command: conda install PACKAGENAME
1. Download Miniconda (Python 3.x) https://conda.io/miniconda.html
2. Install it, on Linux
bash Miniconda3-latest-Linux-x86_64.sh
export PATH=${HOME}/miniconda3/bin:$PATH
1.1.3 Commands
python: the Python interpreter. On the DOS/Unix command line, execute a whole file:
python file.py
Interactive mode:
python
ipython
For neuroimaging:
1.1.4 Libraries
scipy.org: https://www.scipy.org/docs.html
Numpy: basic numerical operations; matrix operations plus some basic solvers:
import numpy as np
X = np.array([[1, 2], [3, 4]])
#v = np.array([1, 2]).reshape((2, 1))
v = np.array([1, 2])
np.dot(X, v) # no broadcasting
X * v # broadcasting
np.dot(v, X)
X - X.mean(axis=0)
import scipy
import scipy.linalg
scipy.linalg.svd(X, full_matrices=False)
Matplotlib: visualization:
import numpy as np
import matplotlib.pyplot as plt
#%matplotlib qt
x = np.linspace(0, 10, 50)
sinus = np.sin(x)
plt.plot(x, sinus)
plt.show()
• Linear model.
• Non parametric statistics.
• Linear algebra: matrix operations, inversion, eigenvalues.
CHAPTER
TWO
PYTHON LANGUAGE
# import a function
from math import sqrt
sqrt(25) # no longer have to reference the module
# define an alias
import numpy as np
# Numbers
10 + 4 # add (returns 14)
10 - 4 # subtract (returns 6)
10 * 4 # multiply (returns 40)
10 ** 4 # exponent (returns 10000)
10 / 4 # true division (returns 2.5 in Python 3)
10 // 4 # floor division (returns 2)
10 / float(4) # divide (returns 2.5)
5 % 4 # modulo (returns 1) - also known as the remainder
# Boolean operations
# comparisons (these return True)
5 > 3
5 >= 3
5 != 3
5 == 5
2.3.1 Lists
Lists hold different objects along an ordered sequence. Lists are ordered, iterable and mutable (adding or removing objects changes the list size), and can contain multiple data types.
# create a list
simpsons = ['homer', 'marge', 'bart']
# examine a list
simpsons[0] # print element 0 ('homer')
len(simpsons) # returns the length (3)
# sort a list in place (modifies but does not return the list)
simpsons.sort()
simpsons.sort(reverse=True) # sort in reverse
simpsons.sort(key=len) # sort by a key
# return a sorted list (but does not modify the original list)
sorted(simpsons)
sorted(simpsons, reverse=True)
sorted(simpsons, key=len)
# examine objects (setup: two names bound to the same list, plus an independent copy)
num = [1, 2, 3]
same_num = num        # same object
new_num = list(num)   # new object with equal contents
id(num) == id(same_num) # returns True
id(num) == id(new_num) # returns False
num is same_num # returns True
num is new_num # returns False
num == same_num # returns True
num == new_num # returns True (their contents are equivalent)
# concatenate +, replicate *
[1, 2, 3] + [4, 5, 6]
["a"] * 2 + ["b"] * 3
2.3.2 Tuples
Like lists, but their size cannot change: ordered, iterable, immutable, can contain multiple data
types
# create a tuple
digits = (0, 1, 'two') # create a tuple directly
digits = tuple([0, 1, 'two']) # create a tuple from a list
zero = (0,) # trailing comma is required to indicate it's a tuple
# examine a tuple
digits[2] # returns 'two'
len(digits) # returns 3
digits.count(0) # counts the number of instances of that value (1)
digits.index(1) # returns the index of the first instance of that value (1)
# concatenate tuples
digits = digits + (3, 4)
# create a single tuple with elements repeated (also works with lists)
(3, 4) * 2 # returns (3, 4, 3, 4)
# tuple unpacking
bart = ('male', 10, 'simpson') # create a tuple
(sex, age, surname) = bart     # unpack the tuple into three variables
2.3.3 Strings
# create a string
s = str(42) # convert another data type into a string
s = 'I like you'
# concatenate strings
s3 = 'The meaning of life is'
s4 = '42'
s3 + ' ' + s4 # returns 'The meaning of life is 42'
s3 + ' ' + str(42) # same thing
# string formatting
# more examples: http://mkaz.com/2012/10/10/python-string-format/
'pi is {:.2f}'.format(3.14159) # returns 'pi is 3.14'
# normal strings allow for escaped characters
print('first line\nsecond line')

Out:

first line
second line

# raw strings treat backslashes as literal characters
print(r'first line\nfirst line')

Out:

first line\nfirst line
A sequence of bytes is not a string; it must be decoded before most string operations:
s = b'first line\nsecond line'
print(s)
print(s.decode('utf-8').split())
Out:
b'first line\nsecond line'
['first', 'line', 'second', 'line']
2.3.5 Dictionaries
Dictionaries are structures which can contain multiple data types, organized as key-value pairs: for each (unique) key, the dictionary outputs one value. Keys can be strings, numbers, or tuples, while the corresponding values can be any Python object. Dictionaries are: unordered, iterable, mutable.
# create an empty dictionary (two ways)
empty_dict = {}
empty_dict = dict()
# create a dictionary (reconstructed from the examples below)
family = {'dad': 'homer', 'mom': 'marge', 'size': 6}
# examine a dictionary
family['dad'] # returns 'homer'
len(family) # returns 3
family.keys() # returns list: ['dad', 'mom', 'size']
family.values() # returns list: ['homer', 'marge', 6]
family.items() # returns list of tuples:
# [('dad', 'homer'), ('mom', 'marge'), ('size', 6)]
'mom' in family # returns True
'marge' in family # returns False (only checks keys)
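Accessing a missing key raises a KeyError; a minimal sketch (using the same family dict) that produces the output below:

try:
    family['grandma']
except KeyError as e:
    print("Error", e)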
Out:
Error 'grandma'
2.3.6 Sets
Like dictionaries, but with unique keys only (no corresponding values). They are: unordered, it-
erable, mutable, can contain multiple data types made up of unique elements (strings, numbers,
or tuples)
# create an empty set
empty_set = set()
# create a set
languages = {'python', 'r', 'java'} # create a set directly
snakes = set(['cobra', 'viper', 'python']) # create a set from a list
# examine a set
len(languages) # returns 3
'python' in languages # returns True
# set operations
languages & snakes # returns intersection: {'python'}
languages | snakes # returns union: {'cobra', 'r', 'java', 'viper', 'python'}
languages - snakes # returns set difference: {'r', 'java'}
snakes - languages # returns set difference: {'cobra', 'viper'}
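Removing an element that is not in a set raises a KeyError; a minimal sketch producing the output below:

try:
    languages.remove('c')
except KeyError as e:
    print("Error", e)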
Out:
Error 'c'
x = 3
# if statement
if x > 0:
print('positive')
# if/else statement
if x > 0:
print('positive')
else:
print('zero or negative')
# if/elif/else statement
if x > 0:
print('positive')
elif x == 0:
print('zero')
else:
print('negative')
Out:
positive
positive
positive
positive
2.4.2 Loops
Loops are a set of instructions which repeat until a termination condition is met. This can include iterating through all values in an object, going through a range of values, etc.
# range returns an iterable of integers (a lazy range object in Python 3)
range(0, 3) # 0, 1, 2: includes first value but excludes second value
range(3) # same thing: starting at zero is the default
range(0, 5, 2) # 0, 2, 4: third argument specifies the 'stride'
# for loop
fruits = ['apple', 'banana', 'cherry']
for i in range(len(fruits)):
print(fruits[i].upper())
# use range when iterating over a large sequence: it does not actually create the integer list in memory
v = 0
for i in range(10 ** 6):
v += 1
quote = """
our incomes are like our shoes; if too small they gall and pinch us
but if too large they cause us to stumble and to trip
"""
# use enumerate if you need to access the index value within the loop
for index, fruit in enumerate(fruits):
print(index, fruit)
# for/else loop
for fruit in fruits:
    if fruit == 'banana':
        print("Found the banana!")
        break # exit the loop and skip the 'else' block
else:
    # the 'else' block runs only if the loop ends without hitting 'break'
    print("Can't find the banana")
# while loop
count = 0
while count < 5:
print("This will print 5 times")
count += 1 # equivalent to 'count = count + 1'
Out:
APPLE
BANANA
CHERRY
APPLE
BANANA
CHERRY
dad homer
mom marge
size 6
0 apple
1 banana
2 cherry
Can't find the banana
Found the banana!
This will print 5 times
This will print 5 times
This will print 5 times
This will print 5 times
This will print 5 times
dct = dict(a=1, b=2)
key = 'c'
try:
    dct[key]
except KeyError:
    print("Key %s is missing. Add it with empty value" % key)
    dct['c'] = []

print(dct)
Out:
2.5 Functions
Functions are sets of instructions executed when called; they can take multiple input values and return a value.
#
def add(a, b):
return a + b
add(2, 3)
add("deux", "trois")
# default arguments
def power_this(x, power=2):
return x ** power
power_this(2) # 4
power_this(2, 3) # 8
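The next snippet uses a min_max helper and a nums list; a minimal sketch of what they are assumed to be:

def min_max(nums):
    # return the smallest and largest value as a tuple
    return min(nums), max(nums)

nums = [1, 2, 3]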
# return values can be assigned into multiple variables using tuple unpacking
min_num, max_num = min_max(nums) # min_num = 1, max_num = 3
Out:
this is text
3
3
2.6 List comprehensions, iterators, etc.

List comprehensions process whole lists without explicit loops. For more: http://python-3-patterns-idioms-test.readthedocs.io/en/latest/Comprehensions.html
# for loop to create a list of cubes
nums = [1, 2, 3, 4, 5]
cubes = []
for num in nums:
cubes.append(num**3)
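The equivalent list comprehension does the same thing in one line:

# list comprehension equivalent to the loop above
cubes = [num ** 3 for num in nums]  # [1, 8, 27, 64, 125]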
# set comprehension
fruits = ['apple', 'banana', 'cherry']
unique_lengths = {len(fruit) for fruit in fruits} # {5, 6}
# dictionary comprehension
fruit_lengths = {fruit: len(fruit) for fruit in fruits} # {'apple': 5, 'banana': 6, 'cherry': 6}
2.7 Regular expression

import re
Out:
Method/Attribute   Purpose
match(string)      Determine if the RE matches at the beginning of the string.
search(string)     Scan through a string, looking for any location where this RE matches.
findall(string)    Find all substrings where the RE matches, and return them as a list.
finditer(string)   Find all substrings where the RE matches, and return them as an iterator.
# 'regex' must be a compiled pattern, e.g. (illustrative pattern, not necessarily the original one):
regex = re.compile("to")
regex.sub("SUB-", "toto") # replace every match of the pattern in "toto" with "SUB-"
Out:
import os
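The path printed below looks like the current working directory; a minimal sketch of the call that would produce it:

# current working directory
os.getcwd()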
Out:
/home/edouard/git/pystatsml/python_lang
Temporary directory
import tempfile
tmpdir = tempfile.gettempdir()
Join paths
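The mytmpdir path used below is built by joining the temporary directory with a sub-directory name (the name "foobar" is suggested by the /tmp/foobar paths printed further down):

mytmpdir = os.path.join(tmpdir, "foobar")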
# list containing the names of the entries in the directory given by path.
os.listdir(tmpdir)
Create a directory
if not os.path.exists(mytmpdir):
os.mkdir(mytmpdir)
# Write
lines = ["Dans python tout est bon", "Enfin, presque"]
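The filename used by the read examples below is assumed to be a file inside mytmpdir, written like this (a sketch):

filename = os.path.join(mytmpdir, "myfile.txt")
fd = open(filename, "w")
fd.write(lines[0] + "\n")
fd.write(lines[1] + "\n")
fd.close()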
# Read
## read one line at a time (entire file does not have to fit into memory)
f = open(filename, "r")
f.readline() # one string per line (including newlines)
f.readline() # next line
f.close()
## use list comprehension to duplicate readlines without reading entire file at once
f = open(filename, 'r')
[line for line in f]
f.close()
Out:
/tmp/foobar/myfile.txt
Walk
import os
WD = os.path.join(tmpdir, "foobar")
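A minimal sketch of walking the directory tree (the exact loop from the original was not preserved):

for dirpath, dirnames, filenames in os.walk(WD):
    print(dirpath, dirnames, filenames)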
Out:
import tempfile
import glob
tmpdir = tempfile.gettempdir()
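The two lists printed below look like the result of globbing for the file created above and stripping its extension; a sketch (assumed, not the original code):

paths = glob.glob(os.path.join(tmpdir, "foobar", "*"))
print(paths)
print([os.path.splitext(os.path.basename(p))[0] for p in paths])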
Out:
['/tmp/foobar/myfile.txt']
['myfile']
import shutil
shutil.copy(src, dst)
try:
shutil.copytree(src, dst)
shutil.rmtree(dst)
shutil.move(src, dst)
except (FileExistsError, FileNotFoundError) as e:
pass
Out:
• For more advanced use cases, the underlying Popen interface can be used directly.
• Run the command described by args.
• Wait for command to complete
• return a CompletedProcess instance.
• Does not capture stdout or stderr by default. To do so, pass PIPE for the stdout and/or
stderr arguments.
import subprocess
# Capture output
out = subprocess.run(["ls", "-a", "/"], stdout=subprocess.PIPE, stderr=subprocess.STDOUT)
# out.stdout is a sequence of bytes that should be decoded into a utf-8 string
print(out.stdout.decode('utf-8').split("\n")[:5])
Out:
0
['.', '..', 'bin', 'boot', 'cdrom']
Process
A process is a name given to a program instance that has been loaded into memory
and managed by the operating system.
Process = address space + execution context (thread of control)
Process address space (segments):
• Code.
• Data (static/global).
• Heap (dynamic memory allocation).
• Stack.
Execution context:
• Data registers.
• Stack pointer (SP).
• Program counter (PC).
• Working Registers.
OS Scheduling of processes: context switching (ie. save/load Execution context)
Pros/cons
• Context switching is expensive.
• (Potentially) complex data sharing (not necessarily true).
• Cooperating processes need no memory-protection precautions between them (separate address spaces).
• Relevant for parallel computation with separate memory allocation.
Threads
• Threads share the same address space: access to code, heap and (global) data.
• Separate execution stack, PC and working registers.
Pros/cons
• Faster context switching: only SP, PC and working registers need to be saved/restored.
• Can exploit fine-grain concurrency
• Simple data sharing through the shared address space.
• Precautions have to be taken or two threads will write to the same memory at
the same time. This is what the global interpreter lock (GIL) is for.
• Relevant for GUI, I/O (Network, disk) concurrent operation
In Python
• The threading module uses threads.
• The multiprocessing module uses processes.
Multithreading
import time
import threading
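The worker function, shared list and thread objects are created before being started; a minimal sketch (the worker is an assumption, not the original code):

def list_append(count, out_list):
    # append 'count' integers to the shared list
    for i in range(count):
        out_list.append(i)

out_list = list()
thread1 = threading.Thread(target=list_append, args=(10 ** 6, out_list))
thread2 = threading.Thread(target=list_append, args=(10 ** 6, out_list))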
startime = time.time()
# Will execute both in parallel
thread1.start()
thread2.start()
# Joins threads back to the parent process
thread1.join()
thread2.join()
print("Threading ellapsed time ", time.time() - startime)
print(out_list[:10])
Out:
Multiprocessing
import multiprocessing
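The process objects are assumed to run the same list_append worker as above (a sketch):

out_list = list()  # note: a plain Python list is NOT shared between processes
p1 = multiprocessing.Process(target=list_append, args=(10 ** 6, out_list))
p2 = multiprocessing.Process(target=list_append, args=(10 ** 6, out_list))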
startime = time.time()
p1.start()
p2.start()
p1.join()
p2.join()
print("Multiprocessing ellapsed time ", time.time() - startime)
Out:
import multiprocessing
import time
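Since out_list is printed at the end, some shared data structure is needed; a sketch using a Manager list (an assumption about the original setup):

def list_append(count, out_list):
    for i in range(count):
        out_list.append(i)

manager = multiprocessing.Manager()
out_list = manager.list()  # a proxy list that can be shared between processes
p1 = multiprocessing.Process(target=list_append, args=(10 ** 5, out_list))
p2 = multiprocessing.Process(target=list_append, args=(10 ** 5, out_list))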
startime = time.time()
p1.start()
p2.start()
p1.join()
p2.join()
print(out_list[:10])
Out:
import os
import os.path
import argparse
import re
import pandas as pd
if __name__ == "__main__":
# parse command line options
output = "word_count.csv"
parser = argparse.ArgumentParser()
parser.add_argument('-i', '--input',
help='list of input files.',
nargs='+', type=str)
parser.add_argument('-o', '--output',
help='output csv file (default %s)' % output,
type=str, default=output)
options = parser.parse_args()
if options.input is None :
parser.print_help()
raise SystemExit("Error: input files are missing")
else:
filenames = [f for f in options.input if os.path.isfile(f)]
# Match words
regex = re.compile("[a-zA-Z]+")
count = dict()
        for filename in filenames:
            # count word occurrences in each file (the original loop body was not
            # preserved; this sketch is consistent with the regex and 'count' above)
            for line in open(filename):
                for word in regex.findall(line.lower()):
                    count[word] = count.get(word, 0) + 1
fd = open(options.output, "w")
# Pandas
df = pd.DataFrame([[k, count[k]] for k in count], columns=["word", "count"])
df.to_csv(options.output, index=False)
2.10 Networking
# TODO
2.10.1 FTP
Out:
2.10.2 HTTP
# TODO
2.10.3 Sockets
# TODO
2.10.4 xmlrpc
# TODO
A module is a Python file. A package is a directory which MUST contain a special file called
__init__.py
To import, extend variable PYTHONPATH:
export PYTHONPATH=path_to_parent_python_module:${PYTHONPATH}
Or
import sys
sys.path.append("path_to_parent_python_module")
The __init__.py file can be empty. But you can set which modules the package exports as the
API, while keeping other modules internal, by overriding the __all__ variable, like so:
parentmodule/__init__.py file:
import parentmodule.submodule1
import parentmodule.function1
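Exporting only a subset of names via __all__ could look like this (a sketch using the names from the example above):

__all__ = ["submodule1", "function1"]  # names exported by "from parentmodule import *"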
Sources
• http://python-textbok.readthedocs.org/en/latest/Object_Oriented_Programming.html
Principles
• Encapsulate data (attributes) and code (methods) into objects.
import math
class Shape2D:
def area(self):
raise NotImplementedError()
# Inheritance + Encapsulation
class Square(Shape2D):
def __init__(self, width):
self.width = width
def area(self):
return self.width ** 2
class Disk(Shape2D):
def __init__(self, radius):
self.radius = radius
def area(self):
return math.pi * self.radius ** 2
# Polymorphism (the 'shapes' list is reconstructed from the output below)
shapes = [Square(2), Disk(3)]
print([s.area() for s in shapes])
s = Shape2D()
try:
s.area()
except NotImplementedError as e:
print("NotImplementedError")
Out:
[4, 28.274333882308138]
NotImplementedError
2.13 Exercises
Create a function that acts as a simple calculator. If the operation is not specified, default to addition; if the operation is misspecified, return a prompt message. Ex: calc(4, 5, "multiply") returns 20; calc(3, 5) returns 8; calc(1, 2, "something") returns an error message.
Given a list of numbers, return a list where all adjacent duplicate elements have been reduced
to a single element. Ex: [1, 2, 2, 3, 2] returns [1, 2, 3, 2]. You may create a new list or
modify the passed in list.
Remove all duplicate values (adjacent or not) Ex: [1, 2, 2, 3, 2] returns [1, 2, 3]
CHAPTER
THREE
SCIENTIFIC PYTHON
NumPy is an extension to the Python programming language, adding support for large, multi-dimensional (numerical) arrays and matrices, along with a large library of high-level mathematical functions to operate on these arrays.
Sources:
• Kevin Markham: https://github.com/justmarkham
import numpy as np
Create ndarrays from lists. note: every element must be the same type (will be converted if
possible)
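The arr1 and arr2 arrays inspected a few lines below are assumed to have been created from lists like these (reconstructed from the dtype and shape comments):

arr1 = np.array([6, 7.5, 8, 0, 1])              # float64: one element is a float
arr2 = np.array([[1, 2, 3, 4], [5, 6, 7, 8]])   # 2 x 4 array of ints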
np.zeros(10)
np.zeros((3, 6))
np.ones(10)
np.linspace(0, 1, 5) # 0 to 1 (inclusive) with 5 points
np.logspace(0, 3, 4) # 10^0 to 10^3 (inclusive) with 4 points
int_array = np.arange(5)
float_array = int_array.astype(float)
arr1.dtype # float64
arr2.dtype # int32
arr2.ndim # 2
arr2.shape # (2, 4) - axis 0 is rows, axis 1 is columns
arr2.size # 8 - total number of elements
len(arr2) # 2 - size of first dimension (aka axis)
3.1.3 Reshaping
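The arr array used throughout this section is a 2 x 5 array of floats; a sketch of the reshaping calls that match the output below:

arr = np.arange(10, dtype=float).reshape((2, 5))
print(arr.shape)
print(arr.reshape(5, 2))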
Out:
(2, 5)
[[0. 1.]
[2. 3.]
[4. 5.]
[6. 7.]
[8. 9.]]
Add an axis
a = np.array([0, 1])
a_col = a[:, np.newaxis]
print(a_col)
#or
a_col = a[:, None]
Out:
[[0]
[1]]
Transpose
print(a_col.T)
Out:
[[0 1]]
arr_flt = arr.flatten()
arr_flt[0] = 33
print(arr_flt)
print(arr)
Out:
[33. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
[[0. 1. 2. 3. 4.]
[5. 6. 7. 8. 9.]]
arr_flt = arr.ravel()
arr_flt[0] = 33
print(arr_flt)
print(arr)
Out:
[33. 1. 2. 3. 4. 5. 6. 7. 8. 9.]
[[33. 1. 2. 3. 4.]
[ 5. 6. 7. 8. 9.]]
Numpy internals: by default Numpy uses the C convention, i.e. row-major order: the matrix is stored row by row. In C, the last index changes most rapidly as one moves through the array as stored in memory.
For 2D arrays, sequential move in the memory will:
• iterate over rows (axis 0)
– iterate over columns (axis 1)
For 3D arrays, sequential moves in memory will:
• iterate over planes (axis 0)
  – iterate over rows (axis 1)
    * iterate over columns (axis 2)
x = np.arange(2 * 3 * 4)
print(x)
Out:
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]
x = x.reshape(2, 3, 4)
print(x)
Out:
[[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
[[12 13 14 15]
[16 17 18 19]
[20 21 22 23]]]
print(x[0, :, :])
Out:
[[ 0 1 2 3]
[ 4 5 6 7]
[ 8 9 10 11]]
print(x[:, 0, :])
Out:
[[ 0 1 2 3]
[12 13 14 15]]
print(x[:, :, 0])
Out:
[[ 0 4 8]
[12 16 20]]
Ravel
print(x.ravel())
Out:
[ 0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23]
a = np.array([0, 1])
b = np.array([2, 3])
ab = np.stack((a, b)).T
print(ab)
# or
np.hstack((a[:, None], b[:, None]))
Out:
[[0 2]
[1 3]]
3.1.6 Selection
Single item
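For example (a sketch, using the arr array defined above):

arr[0]     # first row (a 1d array)
arr[0, 3]  # item at row 0, column 3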
Slicing
Syntax: start:stop:step with start (default 0) stop (default last) step (default 1)
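The output below corresponds to a slice such as (assumed):

arr2 = arr[:, 1:4]  # all rows, columns 1 to 3; slicing returns a view on arr
print(arr2)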
Out:
[[1. 2. 3.]
[6. 7. 8.]]
arr2[0, 0] = 33
print(arr2)
print(arr)
Out:
[[33. 2. 3.]
[ 6. 7. 8.]]
[[ 0. 33. 2. 3. 4.]
[ 5. 6. 7. 8. 9.]]
print(arr[0, ::-1])
# The rule of thumb here can be: in the context of lvalue indexing (i.e. the indices are
# placed in the left hand side value of an assignment), no view or copy of the array is
# created (because there is no need to). However, with regular (rvalue) use, the rules
# above for returning a view or a copy apply.
Out:
[ 4. 3. 2. 33. 0.]
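Fancy indexing with a list of integers returns a copy, not a view; the three outputs below presumably come from something like:

arr2 = arr[:, [1, 2, 3]]  # fancy indexing: returns a copy
print(arr2)
arr2[0, 0] = 44           # modifying the copy does not change arr
print(arr2)
print(arr)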
Out:
[[33. 2. 3.]
[ 6. 7. 8.]]
[[44. 2. 3.]
[ 6. 7. 8.]]
[[ 0. 33. 2. 3. 4.]
[ 5. 6. 7. 8. 9.]]
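The 1-D arr2 printed next looks like the result of boolean (fancy) indexing, which also returns a copy (assumed):

arr2 = arr[arr > 5]  # boolean indexing returns a copy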
print(arr2)
arr2[0] = 44
print(arr2)
print(arr)
Out:
[33. 6. 7. 8. 9.]
[44. 6. 7. 8. 9.]
[[ 0. 33. 2. 3. 4.]
[ 5. 6. 7. 8. 9.]]
However, In the context of lvalue indexing (left hand side value of an assignment) Fancy autho-
rizes the modification of the original array
arr[arr > 5] = 0
print(arr)
Out:
[[0. 0. 2. 3. 4.]
[5. 0. 0. 0. 0.]]
nums = np.arange(5)
nums * 10 # multiply each element by 10
nums = np.sqrt(nums) # square root of each element
np.ceil(nums) # also floor, rint (round to nearest int)
np.isnan(nums) # checks for NaN
nums + np.arange(5) # add element-wise
np.maximum(nums, np.array([1, -2, 3, -4, 5])) # compare element-wise
# random numbers
np.random.seed(12234) # Set the seed
np.random.rand(2, 3) # 2 x 3 matrix in [0, 1]
np.random.randn(10) # random normals (mean 0, sd 1)
np.random.randint(0, 2, 10) # 10 randomly picked 0 or 1
3.1.8 Broadcasting
Rules
Starting with the trailing axis and working backward, Numpy compares arrays dimensions.
• If two dimensions are equal then continues
• If one of the operand has dimension 1 stretches it to match the largest one
• When one of the shapes runs out of dimensions (because it has less dimensions than
the other shape), Numpy will use 1 in the comparison process until the other shape’s
dimensions run out as well.
a = np.array([[ 0, 0, 0],
[10, 10, 10],
[20, 20, 20],
[30, 30, 30]])
b = np.array([0, 1, 2])
print(a + b)
Out:
[[ 0 1 2]
[10 11 12]
[20 21 22]
[30 31 32]]
Examples
Shapes of operands A, B and result:
A (2d array): 5 x 4
B (1d array): 1
Result (2d array): 5 x 4
A (2d array): 5 x 4
B (1d array): 4
Result (2d array): 5 x 4
A (3d array): 15 x 3 x 5
B (3d array): 15 x 1 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 5
Result (3d array): 15 x 3 x 5
A (3d array): 15 x 3 x 5
B (2d array): 3 x 1
Result (3d array): 15 x 3 x 5
3.1.9 Exercises
• For each column find the row index of the minimum value.
• Write a function standardize(X) that return an array whose columns are centered and
scaled (by std-dev).
3.2 Pandas: data manipulation

It is often said that 80% of data analysis is spent on cleaning and preparing data. This section focuses on a small, but important, aspect of data manipulation and cleaning with Pandas.
Sources:
• Kevin Markham: https://github.com/justmarkham
• Pandas doc: http://pandas.pydata.org/pandas-docs/stable/index.html
Data structures
• Series is a one-dimensional labeled array capable of holding any data type (inte-
gers, strings, floating point numbers, Python objects, etc.). The axis labels are col-
lectively referred to as the index. The basic method to create a Series is to call
pd.Series([1,3,5,np.nan,6,8])
• DataFrame is a 2-dimensional labeled data structure with columns of potentially different
types. You can think of it like a spreadsheet or SQL table, or a dict of Series objects. It
stems from the R data.frame() object.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
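The user1, user2 and user3 data frames used below are reconstructed from the outputs further down; the exact split between the three frames is an assumption:

user1 = pd.DataFrame(dict(name=['alice', 'john'], age=[19, 26],
                          gender=['F', 'M'], job=['student', 'student']))
user2 = pd.DataFrame(dict(name=['eric', 'paul'], age=[22, 58],
                          gender=['M', 'F'], job=['student', 'manager']))
user3 = pd.DataFrame(dict(name=['peter', 'julie'], age=[33, 44],
                          gender=['M', 'F'], job=['engineer', 'scientist']))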
print(user3)
Out:
Concatenate DataFrame
user1.append(user2)
users = pd.concat([user1, user2, user3])
print(users)
Out:
Join DataFrame
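The height data frame and the merges are assumed to be built like this (reconstructed from the printed tables):

user4 = pd.DataFrame(dict(name=['alice', 'john', 'eric', 'julie'],
                          height=[165, 180, 175, 171]))
print(user4)
# inner join: keep only rows whose 'name' appears in both data frames
merge_inter = pd.merge(users, user4, on="name")
# outer join: keep everyone, missing heights become NaN ('users' is reused below)
users = pd.merge(users, user4, on="name", how='outer')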
Out:
name height
0 alice 165
1 john 180
2 eric 175
3 julie 171
print(merge_inter)
Out:
Out:
Reshaping by pivoting
Out:
Out:
3.2.3 Summarizing
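The output below corresponds to a call such as:

users.info()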
Out:
<class 'pandas.core.frame.DataFrame'>
Int64Index: 6 entries, 0 to 5
Data columns (total 5 columns):
name 6 non-null object
age 6 non-null int64
gender 6 non-null object
job 6 non-null object
height 4 non-null float64
dtypes: float64(1), int64(1), object(3)
memory usage: 288.0+ bytes
df = users.copy()
df.iloc[0] # first row
df.iloc[0, 0] # first item of first row
df.iloc[0, 0] = 55
for i in range(users.shape[0]):
row = df.iloc[i]
    row.age *= 10  # modifies a copy of the row, not df itself, hence the warning below
Out:
/home/edouard/anaconda3/lib/python3.7/site-packages/pandas/core/generic.py:5096: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
self[name] = value
name age gender job height
0 55 19 F student 165.0
1 john 26 M student 180.0
2 eric 22 M student 175.0
3 paul 58 F manager NaN
4 peter 33 M engineer NaN
5 julie 44 F scientist 171.0
df = users.copy()
df.loc[0] # first row
df.loc[0, "age"] # first item of first row
df.loc[0, "age"] = 55
for i in range(df.shape[0]):
df.loc[i, "age"] *= 10
print(df) # df is modified
Out:
Out:
3.2.7 Sorting
df = users.copy()
print(df)
Out:
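Typical sorting operations (a sketch; the original calls were not preserved):

df.sort_values(by="age")                    # sort by a single column
df.sort_values(by=["job", "age"])           # sort by multiple columns
df.sort_values(by="age", ascending=False)   # sort in descending order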
print(df.describe())
Out:
age height
count 6.000000 4.000000
mean 33.666667 172.750000
std 14.895189 6.344289
min 19.000000 165.000000
25% 23.000000 169.500000
50% 29.500000 173.000000
75% 41.250000 176.250000
max 58.000000 180.000000
print(df.describe(include='all'))
print(df.describe(include=['object'])) # limit to one (or more) types
Out:
print(df.groupby("job").mean())
print(df.groupby("job")["age"].mean())
print(df.groupby("job").describe(include='all'))
Out:
age height
job
engineer 33.000000 NaN
manager 58.000000 NaN
scientist 44.000000 171.000000
student 22.333333 173.333333
job
engineer 33.000000
manager 58.000000
scientist 44.000000
student 22.333333
Name: age, dtype: float64
name ... height
count unique top freq mean ... min 25% 50% 75% max
job ...
engineer 1 1 peter 1 NaN ... NaN NaN NaN NaN NaN
manager 1 1 paul 1 NaN ... NaN NaN NaN NaN NaN
scientist 1 1 julie 1 NaN ... 171.0 171.0 171.0 171.0 171.0
student 3 3 eric 1 NaN ... 165.0 170.0 175.0 177.5 180.0
[4 rows x 44 columns]
Groupby in a loop
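Iterating over groups (a sketch):

for group, data in df.groupby("job"):
    print(group, data)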
Out:
df = users.append(df.iloc[0], ignore_index=True)
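The boolean series below corresponds to a duplicate check such as:

print(df.duplicated())  # True for rows that duplicate an earlier row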
Out:
0 False
1 False
2 False
3 False
4 False
5 False
6 True
dtype: bool
Missing data
df.height.mean()
df = users.copy()
df.loc[df.height.isnull(), "height"] = df["height"].mean()
print(df)
Out:
df = users.copy()
print(df.columns)
df.columns = ['age', 'genre', 'travail', 'nom', 'taille']
df['travail'].str.contains("etu|inge")
Out:
Assume the random variable follows a normal distribution. Exclude data outside 3 standard deviations:
• probability that a sample lies within 1 sd: 68.27%
• probability that a sample lies within 3 sd: 99.73% (68.27 + 2 * 15.73)
size_outlr_mean = size.copy()
size_outlr_mean[((size - size.mean()).abs() > 3 * size.std())] = size.mean()
print(size_outlr_mean.mean())
Out:
248.48963819938044
Median absolute deviation (MAD), based on the median, is a robust non-parametric statistics.
https://en.wikipedia.org/wiki/Median_absolute_deviation
Out:
173.80000467192673 178.7023568870694
csv
url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/salary_table.csv'
salary = pd.read_csv(url)
Excel
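xls_filename is assumed to point to a writable Excel file, e.g.:

import os
import tempfile
xls_filename = os.path.join(tempfile.gettempdir(), "users.xlsx")
users.to_excel(xls_filename, sheet_name='users', index=False)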
pd.read_excel(xls_filename, sheet_name='users')
# Multiple sheets
with pd.ExcelWriter(xls_filename) as writer:
    users.to_excel(writer, sheet_name='users', index=False)
    salary.to_excel(writer, sheet_name='salary', index=False)  # reconstructed: this sheet is read back below
pd.read_excel(xls_filename, sheet_name='users')
pd.read_excel(xls_filename, sheet_name='salary')
SQL (SQLite)
import pandas as pd
import sqlite3
Connect
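db_filename is assumed to be a path to an SQLite file, e.g.:

import os
import tempfile
db_filename = os.path.join(tempfile.gettempdir(), "users.db")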
conn = sqlite3.connect(db_filename)
url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/salary_table.csv'
salary = pd.read_csv(url)
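Before inserting rows, the salary table is assumed to have been pushed to the database:

salary.to_sql("salary", conn, if_exists="replace", index=False)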
Push modifications
cur = conn.cursor()
values = (100, 14000, 5, 'Bachelor', 'N')
cur.execute("insert into salary values (?, ?, ?, ?, ?)", values)
conn.commit()
Out:
3.2.13 Exercises
Data Frame
Missing data
df = users.copy()
df.ix[[0, 2], "age"] = None
df.ix[[1, 3], "gender"] = None
Out:
/home/edouard/git/pystatsml/scientific_python/scipy_pandas.py:440: DeprecationWarning:
.ix is deprecated. Please use
.loc for label based indexing or
.iloc for positional indexing
1. Write a function fillmissing_with_mean(df) that fill all missing value of numerical col-
umn with the mean of the current columns.
2. Save the original users and “imputed” frame in a single excel file “users.xlsx” with 2 sheets:
original, imputed.
import numpy as np
import matplotlib.pyplot as plt
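Basic data for the examples (same definitions as in the introduction chapter):

x = np.linspace(0, 10, 50)
sinus = np.sin(x)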
plt.plot(x, sinus)
plt.show()
# Rapid multiplot
cosinus = np.cos(x)
plt.plot(x, sinus, "-b", x, sinus, "ob", x, cosinus, "-r", x, cosinus, "or")
plt.xlabel('this is x!')
plt.ylabel('this is y!')
plt.title('My First Plot')
plt.show()
# Step by step
plt.plot(x, sinus, label='sinus', color='blue', linestyle='--', linewidth=2)
Load dataset
import pandas as pd
try:
salary = pd.read_csv("../datasets/salary_table.csv")
except:
url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/salary_table.csv'
salary = pd.read_csv(url)
df = salary
<matplotlib.collections.PathCollection at 0x7f39efac6358>
## Figure size
plt.figure(figsize=(6,5))
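A scatter plot of the salary data, one colour per education level (a sketch; the original plotting calls were not preserved):

for edu, d in df.groupby("education"):
    plt.scatter(d["experience"], d["salary"], label=edu)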
## Set labels
plt.xlabel('Experience')
plt.ylabel('Salary')
plt.legend(loc=4) # lower right
plt.show()
# Prefer vectorial format (SVG: Scalable Vector Graphics) can be edited with
# Inkscape, Adobe Illustrator, Blender, etc.
plt.plot(x, sinus)
plt.savefig("sinus.svg")
plt.close()
# Or pdf
plt.plot(x, sinus)
plt.savefig("sinus.pdf")
plt.close()
3.3.4 Seaborn
Boxplot
Box plots are non-parametric: they display variation in samples of a statistical population with-
out making any assumptions of the underlying statistical distribution.
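A typical seaborn boxplot on the salary data (a sketch):

import seaborn as sns
ax = sns.boxplot(x="education", y="salary", hue="management", data=salary)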
<matplotlib.axes._subplots.AxesSubplot at 0x7f39ed42ff28>
<matplotlib.axes._subplots.AxesSubplot at 0x7f39eb61d780>
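The loop below draws one panel per education level, so a figure with several axes is assumed to have been created first, e.g.:

fig, axes = plt.subplots(3, 1, figsize=(9, 9), sharex=True)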
i = 0
for edu, d in salary.groupby(['education']):
    sns.distplot(d.salary[d.management == "Y"], color="b", bins=10, label="Manager", ax=axes[i])
    sns.distplot(d.salary[d.management == "N"], color="r", bins=10, label="Employee", ax=axes[i])
axes[i].set_title(edu)
axes[i].set_ylabel('Density')
i += 1
ax = plt.legend()
ax = sns.violinplot(x="salary", data=salary)
Tune bandwidth
Tips dataset One waiter recorded information about each tip he received over a period of a few
months working in one restaurant. He collected several variables:
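The tips data ships with seaborn:

tips = sns.load_dataset("tips")
print(tips.head())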
ax = sns.violinplot(x=tips["total_bill"])
Group by day
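For example (a sketch):

ax = sns.violinplot(x="day", y="total_bill", data=tips)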
g = sns.PairGrid(salary, hue="management")
g.map_diag(plt.hist)
g.map_offdiag(plt.scatter)
ax = g.add_legend()
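The fmri example data also ships with seaborn:

fmri = sns.load_dataset("fmri")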
ax = sns.pointplot(x="timepoint", y="signal",
hue="region", style="event",
data=fmri)
# version 0.9
# sns.lineplot(x="timepoint", y="signal",
# hue="region", style="event",
# data=fmri)
CHAPTER
FOUR
STATISTICS
Mean

The estimator $\bar{x}$ on a sample of size $n$: $x = x_1, ..., x_n$ is given by

$$\bar{x} = \frac{1}{n} \sum_i x_i$$
Variance
The estimator is

$$\sigma_x^2 = \frac{1}{n-1} \sum_i (x_i - \bar{x})^2$$
Note here the subtracted 1 degree of freedom (df) in the divisor. In standard statistical practice,
𝑑𝑓 = 1 provides an unbiased estimator of the variance of a hypothetical infinite population.
With 𝑑𝑓 = 0 it instead provides a maximum likelihood estimate of the variance for normally
distributed variables.
Standard deviation

$$Std(X) = \sqrt{Var(X)}$$

The estimator is simply $\sigma_x = \sqrt{\sigma_x^2}$.
Covariance
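The covariance formula itself was lost; the standard definition and estimator, consistent with the correlation estimator below, are:

$$Cov(X, Y) = E\big[(X - E[X])(Y - E[Y])\big], \qquad \sigma_{xy} = \frac{1}{n-1}\sum_i (x_i - \bar{x})(y_i - \bar{y}).$$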
Correlation
$$Cor(X, Y) = \frac{Cov(X, Y)}{Std(X)\,Std(Y)}$$

The estimator is

$$\rho_{xy} = \frac{\sigma_{xy}}{\sigma_x \sigma_y}.$$
The standard error (SE) is the standard deviation (of the sampling distribution) of a statistic:

$$SE(X) = \frac{Std(X)}{\sqrt{n}}.$$

It is most commonly considered for the mean, with the estimator $SE(\bar{x}) = \sigma_x / \sqrt{n}$.
Exercises
• Generate 2 random samples: 𝑥 ∼ 𝑁 (1.78, 0.1) and 𝑦 ∼ 𝑁 (1.66, 0.1), both of size 10.
• Compute 𝑥¯, 𝜎𝑥 , 𝜎𝑥𝑦 (xbar, xvar, xycov) using only the np.sum() operation. Explore
the np. module to find out which numpy functions performs the same computations and
compare them (using assert) with your previous results.
Normal distribution
The normal distribution, noted $\mathcal{N}(\mu, \sigma)$, has parameters $\mu$, the mean (location), and $\sigma > 0$, the std-dev. Estimators: $\bar{x}$ and $\sigma_x$.
The normal distribution, noted 𝒩 , is useful because of the central limit theorem (CLT) which
states that: given certain conditions, the arithmetic mean of a sufficiently large number of iter-
ates of independent random variables, each with a well-defined expected value and well-defined
variance, will be approximately normally distributed, regardless of the underlying distribution.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import norm
%matplotlib inline
mu = 0 # mean
variance = 2 #variance
sigma = np.sqrt(variance) #standard deviation",
x = np.linspace(mu-3*variance,mu+3*variance, 100)
plt.plot(x, norm.pdf(x, mu, sigma))
[<matplotlib.lines.Line2D at 0x7f5cd6d3afd0>]
The chi-square or $\chi^2_n$ distribution with $n$ degrees of freedom (df) is the distribution of a sum of the squares of $n$ independent standard normal random variables $\mathcal{N}(0, 1)$. Let $X \sim \mathcal{N}(\mu, \sigma^2)$, then $Z = (X - \mu)/\sigma \sim \mathcal{N}(0, 1)$, and:
• The squared standard $Z^2 \sim \chi^2_1$ (one df).
• The distribution of a sum of squares of $n$ standard normal random variables: $\sum_i^n Z_i^2 \sim \chi^2_n$.
The sum of two $\chi^2$ RVs with $p$ and $q$ df is a $\chi^2$ RV with $p + q$ df. This is useful when summing/subtracting sums of squares.
The $\chi^2$-distribution is used to model errors measured as sums of squares or the distribution of the sample variance.
The $F$-distribution, $F_{n,p}$, with $n$ and $p$ degrees of freedom is the ratio of two independent $\chi^2$ variables. Let $X \sim \chi^2_n$ and $Y \sim \chi^2_p$, then:

$$F_{n,p} = \frac{X/n}{Y/p}$$

The $F$-distribution plays a central role in hypothesis testing, answering the questions: are two variances equal? Is the ratio of two errors significantly large?
import numpy as np
from scipy.stats import f
import matplotlib.pyplot as plt
%matplotlib inline
Let $M \sim \mathcal{N}(0, 1)$ and $V \sim \chi^2_n$. The $t$-distribution, $T_n$, with $n$ degrees of freedom is the ratio:

$$T_n = \frac{M}{\sqrt{V/n}}$$

The distribution of the difference between an estimated parameter and its true (or assumed) value, divided by the standard deviation of the estimated parameter (standard error), follows a $t$-distribution. Is this parameter different from a given value?
Examples
• Test a proportion: biased coin? 200 heads have been found over 300 flips; is the coin biased?
• Test the association between two variables.
  – Example, height and sex: in a sample of 25 individuals (15 females, 10 males), is female height different from male height?
  – Example, age and arterial hypertension: in a sample of 25 individuals, is age correlated with arterial hypertension?
Steps
1. Model the data
2. Fit: estimate the model parameters (frequency, mean, correlation, regression coefficient)
3. Compute a test statistic from the model parameters.
4. Formulate the null hypothesis: What would be the (distribution of the) test statistic if the
observations are the result of pure chance.
5. Compute the probability (𝑝-value) to obtain a larger value for the test statistic by chance
(under the null hypothesis).
Biased coin? 2 heads have been found over 3 flips; is the coin biased?
1. Model the data: the number of heads follows a Binomial distribution.
2. Compute model parameters: N=3, P = the frequency of heads over the number of flips: 2/3.
3. Compute a test statistic, same as the frequency.
4. Under the null hypothesis, the distribution of the number of heads is:
flip 1   flip 2   flip 3   count #heads
-        -        -        0
H        -        -        1
-        H        -        1
-        -        H        1
H        H        -        2
H        -        H        2
-        H        H        2
H        H        H        3
There are 8 possible configurations; the probabilities of the different values are listed below, where $x$ measures the number of successes (heads).
• 𝑃 (𝑥 = 0) = 1/8
• 𝑃 (𝑥 = 1) = 3/8
• 𝑃 (𝑥 = 2) = 3/8
• 𝑃 (𝑥 = 3) = 1/8
Text(0.5, 0, 'Distribution of the number of head over 3 flip under the null hypothesis')
5. Compute the probability ($p$-value) of observing a value larger than or equal to 2 under the null hypothesis. This probability is the $p$-value.
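From the table above, $P(x \geq 2 | H_0) = P(x=2) + P(x=3) = 3/8 + 1/8 = 0.5$: observing 2 heads out of 3 flips is therefore perfectly compatible with an unbiased coin.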
Biased coin? 60 heads have been found over 100 flips; is the coin biased?
1. Model the data: the number of heads follows a Binomial distribution.
2. Compute model parameters: N=100, P=60/100.
3. Compute a test statistic, same as the frequency: 60/100.
4. Under the null hypothesis, the distribution of the number of heads ($k$) follows the binomial distribution with parameters N=100, P=0.5:
$$Pr(X = k | H_0) = Pr(X = k | n=100, p=0.5) = \binom{100}{k}\, 0.5^k (1 - 0.5)^{(100-k)}.$$

$$P(X = k \geq 60 | H_0) = \sum_{k=60}^{100} \binom{100}{k}\, 0.5^k (1 - 0.5)^{(100-k)}$$
$$= 1 - \sum_{k=1}^{60} \binom{100}{k}\, 0.5^k (1 - 0.5)^{(100-k)}, \text{ using the cumulative distribution function.}$$
import scipy.stats
import matplotlib.pyplot as plt
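The p-value printed below is one minus the binomial cumulative distribution function evaluated at 60; a sketch of the computation, consistent with the formula above:

pval = 1 - scipy.stats.binom.cdf(60, 100, 0.5)
print(pval)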
0.01760010010885238
The one-sample 𝑡-test is used to determine whether a sample comes from a population with a
specific mean. For example you want to test if the average height of a population is 1.75 𝑚.
1 Model the data
Assume that height is normally distributed: $X \sim \mathcal{N}(\mu, \sigma)$, i.e. each observation is the mean plus Gaussian noise: $x_i = \mu + \varepsilon_i$.
$$t = \frac{\text{difference of means}}{\text{std-dev of noise}} \sqrt{n} \quad (4.8)$$
$$t = \text{effect size} \cdot \sqrt{n} \quad (4.9)$$
$$t = \frac{\bar{x} - \mu_0}{s_x} \sqrt{n} \quad (4.10)$$
Remarks: Although the parent population does not need to be normally distributed, the dis-
tribution of the population of sample means, 𝑥, is assumed to be normal. By the central limit
theorem, if the sampling of the parent population is independent then the sample means will
be approximately normal.
4 Compute the probability of the test statistic under the null hypothesis. This requires the distribution of the $t$ statistic under $H_0$.
Example
Given the following samples, we will test whether its true mean is 1.75.
Warning, when computing the std or the variance, set ddof=1. The default value, ddof=0, leads
to the biased estimator of the variance.
import numpy as np
x = [1.83, 1.83, 1.73, 1.82, 1.83, 1.73, 1.99, 1.85, 1.68, 1.87]
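The statistics printed below can be computed as follows (a sketch consistent with the outputs 1.816 and 2.397):

x = np.array(x)
xbar, s, n = np.mean(x), np.std(x, ddof=1), len(x)
tobs = (xbar - 1.75) / (s / np.sqrt(n))   # the observed t statistic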
print(xbar)
1.816
2.3968766311585883
The $p$-value is the probability to observe a value $t$ more extreme than the observed one $t_{obs}$ under the null hypothesis $H_0$: $P(t > t_{obs} | H_0)$.
import numpy as np
import scipy.stats as stats
n = 50
x = np.random.normal(size=n)
y = 2 * x + np.random.normal(size=n)
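The correlation coefficient and its p-value printed below come from a call such as:

cor, pval = stats.pearsonr(x, y)
print(cor, pval)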
0.904453622242007 2.189729365511301e-19
The two-sample 𝑡-test (Snedecor and Cochran, 1989) is used to determine if two population
means are equal. There are several variations on this test. If data are paired (e.g. 2 measures,
before and after treatment for each individual) use the one-sample 𝑡-test of the difference. The
variances of the two samples may be assumed to be equal (a.k.a. homoscedasticity) or unequal
(a.k.a. heteroscedasticity).
Assume that the two random variables are normally distributed: 𝑦1 ∼ 𝒩 (𝜇1 , 𝜎1 ), 𝑦2 ∼
𝒩 (𝜇2 , 𝜎2 ).
3. $t$-test

$$t = \frac{\text{difference of means}}{\text{standard dev of error}} \quad (4.11)$$
$$= \frac{\text{difference of means}}{\text{its standard error}} \quad (4.12)$$
$$= \frac{\bar{y}_1 - \bar{y}_2}{\sqrt{\sum \varepsilon^2}} \sqrt{n - 2} \quad (4.13)$$
$$= \frac{\bar{y}_1 - \bar{y}_2}{s_{\bar{y}_1 - \bar{y}_2}} \quad (4.14)$$

with

$$s^2_{\bar{y}_1 - \bar{y}_2} = s^2_{\bar{y}_1} + s^2_{\bar{y}_2} = \frac{s^2_{y_1}}{n_1} + \frac{s^2_{y_2}}{n_2} \quad (4.15)$$

thus

$$s_{\bar{y}_1 - \bar{y}_2} = \sqrt{\frac{s^2_{y_1}}{n_1} + \frac{s^2_{y_2}}{n_2}} \quad (4.17)$$
To compute the $p$-value one needs the degrees of freedom associated with this variance estimate. It is approximated using the Welch–Satterthwaite equation:

$$\nu \approx \frac{\left(\frac{s^2_{y_1}}{n_1} + \frac{s^2_{y_2}}{n_2}\right)^2}{\frac{s^4_{y_1}}{n_1^2 (n_1 - 1)} + \frac{s^4_{y_2}}{n_2^2 (n_2 - 1)}}.$$
If we assume equal variance (i.e., $s^2_{y_1} = s^2_{y_2} = s^2$), where $s^2$ is an estimator of the common variance of the two samples, then

$$s_{\bar{y}_1 - \bar{y}_2} = \sqrt{\frac{s^2}{n_1} + \frac{s^2}{n_2}} = s \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}$$
Therefore, the $t$ statistic that is used to test whether the means are different is:

$$t = \frac{\bar{y}_1 - \bar{y}_2}{s \cdot \sqrt{\frac{1}{n_1} + \frac{1}{n_2}}},$$

With equal group sizes $n_1 = n_2 = n$, this simplifies to

$$t = \frac{\bar{y}_1 - \bar{y}_2}{s\sqrt{2}} \cdot \sqrt{n} \quad (4.20)$$
$$\approx \text{effect size} \cdot \sqrt{n} \quad (4.21)$$
$$\approx \frac{\text{difference of means}}{\text{standard deviation of the noise}} \cdot \sqrt{n} \quad (4.22)$$
Example
Given the following two samples, test whether their means are equal using the standard t-test,
assuming equal variance.
height = np.array([ 1.83, 1.83, 1.73, 1.82, 1.83, 1.73, 1.99, 1.85, 1.68, 1.87,
1.66, 1.71, 1.73, 1.64, 1.70, 1.60, 1.79, 1.73, 1.62, 1.77])
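The result printed below corresponds to comparing the first 10 values against the last 10 (a sketch; the grouping is an assumption consistent with the reported statistic):

import scipy.stats as stats
print(stats.ttest_ind(height[:10], height[10:], equal_var=True))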
Ttest_indResult(statistic=3.5511519888466885, pvalue=0.00228208937112721)
Analysis of variance (ANOVA) provides a statistical test of whether or not the means of several
groups are equal, and therefore generalizes the 𝑡-test to more than two groups. ANOVAs are
useful for comparing (testing) three or more means (groups or variables) for statistical signifi-
cance. It is conceptually similar to multiple two-sample 𝑡-tests, but is less conservative.
Here we will consider one-way ANOVA with one independent variable, ie one-way anova.
Wikipedia:
• Test if any group is on average superior, or inferior, to the others versus the null hypothesis
that all four strategies yield the same mean response
• Detect any of several possible differences.
• The advantage of the ANOVA 𝐹 -test is that we do not need to pre-specify which strategies
are to be compared, and we do not need to adjust for making multiple comparisons.
• The disadvantage of the ANOVA 𝐹 -test is that if we reject the null hypothesis, we do not
know which strategies can be said to be significantly different from the others.
A company has applied three marketing strategies to three samples of customers in order to increase their business volume. The marketing department is asking whether the strategies led to different increases of business volume. Let $y_1$, $y_2$ and $y_3$ be the three samples of business volume increase.
Here we assume that the three populations were sampled from three random variables that are
normally distributed. I.e., 𝑌1 ∼ 𝑁 (𝜇1 , 𝜎1 ), 𝑌2 ∼ 𝑁 (𝜇2 , 𝜎2 ) and 𝑌3 ∼ 𝑁 (𝜇3 , 𝜎3 ).
3. $F$-test

$$F = \frac{\text{Explained variance}}{\text{Unexplained variance}} = \frac{\text{Between-group variability}}{\text{Within-group variability}} = \frac{s_B^2}{s_W^2}.$$

The "explained variance", or "between-group variability" is

$$s_B^2 = \sum_i n_i (\bar{y}_{i\cdot} - \bar{y})^2 / (K - 1),$$
where $\bar{y}_{i\cdot}$ denotes the sample mean in the $i$th group, $n_i$ is the number of observations in the $i$th group, $\bar{y}$ denotes the overall mean of the data, and $K$ denotes the number of groups.

The "unexplained variance", or "within-group variability" is

$$s_W^2 = \sum_{ij} (y_{ij} - \bar{y}_{i\cdot})^2 / (N - K),$$

where $y_{ij}$ is the $j$th observation in the $i$th out of $K$ groups and $N$ is the overall sample size.
This 𝐹 -statistic follows the 𝐹 -distribution with 𝐾 − 1 and 𝑁 − 𝐾 degrees of freedom under the
null hypothesis. The statistic will be large if the between-group variability is large relative to
the within-group variability, which is unlikely to happen if the population means of the groups
all have the same value.
Note that when there are only two groups for the one-way ANOVA F-test, 𝐹 = 𝑡2 where 𝑡 is the
Student’s 𝑡 statistic.
Computes the chi-square, 𝜒2 , statistic and 𝑝-value for the hypothesis test of independence of
frequencies in the observed contingency table (cross-table). The observed frequencies are tested
against an expected contingency table obtained by computing expected frequencies based on
the marginal sums under the assumption of independence.
Example: 20 participants: 10 exposed to some chemical product and 10 non exposed (exposed
= 1 or 0). Among the 20 participants 10 had cancer 10 not (cancer = 1 or 0). 𝜒2 tests the
association between those two variables.
import numpy as np
import pandas as pd
import scipy.stats as stats
# Dataset:
# 15 samples:
# 10 first exposed
exposed = np.array([1] * 10 + [0] * 10)
# 8 first with cancer, 10 without, the last two with.
cancer = np.array([1] * 8 + [0] * 10 + [1] * 2)
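The tables and statistics printed below can be obtained with a crosstab and scipy's chi-square test of independence (a sketch consistent with the outputs):

crosstab = pd.crosstab(exposed, cancer, rownames=['exposed'], colnames=['cancer'])
chi2, pval, dof, expected = stats.chi2_contingency(crosstab)
print("Observed table:")
print(crosstab)
print("Chi2 = %f, pval = %f" % (chi2, pval))
print("Expected table:")
print(expected)
# marginal counts / frequencies used below to recompute the expected table (assumed)
exposed_freq = crosstab.sum(axis=1)
cancer_freq = crosstab.sum(axis=0) / crosstab.values.sum()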
Observed table:
---------------
cancer 0 1
exposed
0 8 2
1 2 8
Statistics:
-----------
Chi2 = 5.000000, pval = 0.025347
Expected table:
---------------
[[5. 5.]
[5. 5.]]
print('Expected frequencies:')
print(np.outer(exposed_freq, cancer_freq))
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt
x = np.array([44.4, 45.9, 41.9, 53.3, 44.7, 44.1, 50.7, 45.2, 46, 47, 48, 60.1])
y = np.array([2.6, 3.1, 2.5, 5.0, 3.6, 4.0, 5.2, 2.8, 4, 4.1, 4.5, 3.8])
plt.plot(x, y, "bo")
# Non-Parametric Spearman
cor, pval = stats.spearmanr(x, y)
print("Non-Parametric Spearman cor test, cor: %.4f, pval: %.4f" % (cor, pval))
Source: https://en.wikipedia.org/wiki/Wilcoxon_signed-rank_test
The Wilcoxon signed-rank test is a non-parametric statistical hypothesis test used when com-
paring two related samples, matched samples, or repeated measurements on a single sample
to assess whether their population mean ranks differ (i.e. it is a paired difference test). It is
equivalent to one-sample test of the difference of paired samples.
It can be used as an alternative to the paired Student’s 𝑡-test, 𝑡-test for matched pairs, or the 𝑡-
test for dependent samples when the population cannot be assumed to be normally distributed.
When to use it? Observe the data distribution:
• presence of outliers
• the distribution of the residuals is not Gaussian
It has a lower sensitivity compared to 𝑡-test. May be problematic to use when the sample size is
small.
Null hypothesis 𝐻0 : difference between the pairs follows a symmetric distribution around zero.
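The bv0 and bv1 arrays used below are assumed to be two paired samples (e.g. brain volumes measured twice on the same subjects); the true generating parameters were not preserved, so this is only a sketch:

import numpy as np
import scipy.stats as stats
np.random.seed(42)
n = 20
bv0 = np.random.normal(loc=3, scale=.1, size=n)                  # first measurement
bv1 = bv0 + 0.1 + np.random.normal(loc=0, scale=.1, size=n)      # second, paired measurement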
# create an outlier
bv1[0] -= 10
# Paired t-test
print(stats.ttest_rel(bv0, bv1))
# Wilcoxon
print(stats.wilcoxon(bv0, bv1))
Ttest_relResult(statistic=0.8167367438079456, pvalue=0.4242016933514212)
WilcoxonResult(statistic=40.0, pvalue=0.015240061183200121)
# create an outlier
bv1[0] -= 10
# Two-samples t-test
print(stats.ttest_ind(bv0, bv1))
# Wilcoxon
print(stats.mannwhitneyu(bv0, bv1))
Ttest_indResult(statistic=0.6227075213159515, pvalue=0.5371960369300763)
MannwhitneyuResult(statistic=43.0, pvalue=1.1512354940556314e-05)
Given $n$ random samples $(y_i, x_{1i}, \ldots, x_{pi})$, $i = 1, \ldots, n$, linear regression models the relation between the observations $y_i$ and the independent variables $x_{pi}$ as

$$y_i = \beta_0 + \beta_1 x_{1i} + \cdots + \beta_p x_{pi} + \varepsilon_i, \quad i = 1, \ldots, n$$

• The $\beta$'s are the model parameters, i.e., the regression coefficients.
• $\beta_0$ is the intercept or the bias.
• $\varepsilon_i$ are the residuals.
• An independent variable (IV). It is a variable that stands alone and isn’t changed by
the other variables you are trying to measure. For example, someone’s age might be an
independent variable. Other factors (such as what they eat, how much they go to school,
how much television they watch) aren’t going to change a person’s age. In fact, when
you are looking for some kind of relationship between variables you are trying to see if
the independent variable causes some kind of change in the other variables, or dependent
variables. In Machine Learning, these variables are also called the predictors.
• A dependent variable. It is something that depends on other factors. For example, a test
score could be a dependent variable because it could change depending on several factors
such as how much you studied, how much sleep you got the night before you took the
test, or even how hungry you were when you took it. Usually when you are looking for
a relationship between two things you are trying to find out what makes the dependent
variable change the way it does. In Machine Learning this variable is called a target
variable.
Using the dataset “salary”, explore the association between the dependant variable (e.g. Salary)
and the independent variable (e.g.: Experience is quantitative).
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/salary_table.csv'
salary = pd.read_csv(url)
Model the data on some hypothesis e.g.: salary is a linear function of the experience.
salary𝑖 = 𝛽 experience𝑖 + 𝛽0 + 𝜖𝑖 ,
more generally
𝑦𝑖 = 𝛽 𝑥𝑖 + 𝛽0 + 𝜖𝑖
Recall from calculus that an extreme point can be found by computing where the derivative is
zero, i.e. to find the intercept, we perform the steps:
$$\frac{\partial SSE}{\partial \beta_0} = \sum_i (y_i - \beta x_i - \beta_0) = 0$$
$$\sum_i y_i = \beta \sum_i x_i + n \beta_0$$
$$n \bar{y} = n \beta \bar{x} + n \beta_0$$
$$\beta_0 = \bar{y} - \beta \bar{x}$$

Plug in $\beta_0$:

$$\sum_i x_i (y_i - \beta x_i - \bar{y} + \beta \bar{x}) = 0$$
$$\sum_i x_i y_i - \bar{y} \sum_i x_i = \beta \sum_i x_i (x_i - \bar{x})$$
print("Using seaborn")
import seaborn as sns
sns.regplot(x="experience", y="salary", data=salary);
Using seaborn
3. 𝐹 -Test
3.1 Goodness of fit
The goodness of fit of a statistical model describes how well it fits a set of observations. Mea-
sures of goodness of fit typically summarize the discrepancy between observed values and the
values expected under the model in question. We will consider the explained variance also
known as the coefficient of determination, denoted 𝑅2 pronounced R-squared.
The total sum of squares, 𝑆𝑆tot is the sum of the sum of squares explained by the regression,
𝑆𝑆reg , plus the sum of squares of residuals unexplained by the regression, 𝑆𝑆res , also called the
SSE, i.e. such that

$$SS_{tot} = SS_{reg} + SS_{res}$$
The mean of $y$ is

$$\bar{y} = \frac{1}{n}\sum_i y_i.$$

The total sum of squares is the total squared sum of deviations from the mean of $y$, i.e.

$$SS_{tot} = \sum_i (y_i - \bar{y})^2$$

The regression sum of squares, also called the explained sum of squares:

$$SS_{reg} = \sum_i (\hat{y}_i - \bar{y})^2,$$

where $\hat{y}_i = \beta x_i + \beta_0$ is the estimated value of salary $\hat{y}_i$ given a value of experience $x_i$.

The sum of squares of the residuals, also called the residual sum of squares (RSS), is:

$$SS_{res} = \sum_i (y_i - \hat{y}_i)^2.$$

$R^2$ is the explained sum of squares: the variance explained by the regression divided by the total variance, i.e.

$$R^2 = \frac{SS_{reg}}{SS_{tot}} = 1 - \frac{SS_{res}}{SS_{tot}}.$$
3.2 Test
Using the 𝐹 -distribution, compute the probability of observing a value greater than 𝐹 under
𝐻0 , i.e.: 𝑃 (𝑥 > 𝐹 |𝐻0 ), i.e. the survival function (1 − Cumulative Distribution Function) at 𝑥 of
the given 𝐹 -distribution.
Multiple regression
Theory
In linear regression, we assume that the model that generates the data involves only a linear combination of the input variables, i.e.

$$y(x_i, \beta) = \beta_0 + \beta_1 x_{1i} + \cdots + \beta_{P-1} x_{(P-1)i},$$

or, simplified

$$y(x_i, \beta) = \beta_0 + \sum_{j=1}^{P-1} \beta_j x_{ji}.$$
Extending each sample with an intercept, $x_i := [1, x_i] \in \mathbb{R}^{P+1}$, allows us to use a more general notation based on linear algebra and write it as a simple dot product:

$$y(x_i, \beta) = x_i^T \beta,$$
where 𝛽 ∈ 𝑅𝑃 +1 is a vector of weights that define the 𝑃 + 1 parameters of the model. From
now we have 𝑃 regressors + the intercept.
Minimize the Mean Squared Error (MSE) loss:

$$MSE(\beta) = \frac{1}{N}\sum_{i=1}^{N} \big(y_i - y(x_i, \beta)\big)^2 = \frac{1}{N}\sum_{i=1}^{N} (y_i - x_i^T \beta)^2$$
Let $X = [x_0^T, ..., x_N^T]$ be an $N \times (P+1)$ matrix of $N$ samples of $P$ input features with one column of ones, and let $y = [y_1, ..., y_N]$ be the vector of the $N$ targets. Then, using linear algebra, the mean squared error (MSE) loss can be rewritten:

$$MSE(\beta) = \frac{1}{N} ||y - X\beta||_2^2.$$
The $\beta$ that minimises the MSE can be found by:

$$\nabla_\beta \left( \frac{1}{N} ||y - X\beta||_2^2 \right) = 0 \quad (4.25)$$
$$\frac{1}{N} \nabla_\beta (y - X\beta)^T (y - X\beta) = 0 \quad (4.26)$$
$$\frac{1}{N} \nabla_\beta (y^T y - 2\beta^T X^T y + \beta^T X^T X \beta) = 0 \quad (4.27)$$
$$-2 X^T y + 2 X^T X \beta = 0 \quad (4.28)$$
$$X^T X \beta = X^T y \quad (4.29)$$
$$\beta = (X^T X)^{-1} X^T y \quad (4.30)$$
import numpy as np
from scipy import linalg
np.random.seed(seed=42) # make the example reproducible
# Dataset
N, P = 50, 4
X = np.random.normal(size= N * P).reshape((N, P))
## Our model needs an intercept so we add a column of 1s:
X[:, 0] = 1
print(X[:5, :])
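Solving the normal equations on this simulated dataset could look like the following sketch (the true parameters and the noise model are assumptions, not the original values):

betastar = np.array([10, 1., .5, 0.1])   # assumed true parameters
e = np.random.normal(size=N)             # noise
y = np.dot(X, betastar) + e
# closed-form solution of the normal equations via the pseudo-inverse
betahat = np.dot(linalg.pinv(X), y)
print(betahat)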
Sources: http://statsmodels.sourceforge.net/devel/examples/
Multiple regression
import statsmodels.api as sm
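A typical statsmodels fit producing such a summary, using the simulated X and y above (a sketch):

model = sm.OLS(y, X).fit()
print(model.summary())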
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Use R language syntax for data.frame. For an additive model: 𝑦𝑖 = 𝛽 0 + 𝑥1𝑖 𝛽 1 + 𝑥2𝑖 𝛽 2 + 𝜖𝑖 ≡ y ~
x1 + x2.
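With the formula API this could look like the following sketch (it assumes a pandas DataFrame df with columns y, x1 and x2):

import statsmodels.formula.api as smf
model = smf.ols("y ~ x1 + x2", data=df).fit()
print(model.summary())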
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Analysis of covariance (ANCOVA) is a linear model that blends ANOVA and linear regression.
ANCOVA evaluates whether population means of a dependent variable (DV) are equal across
levels of a categorical independent variable (IV) often called a treatment, while statistically
controlling for the effects of other quantitative or continuous variables that are not of primary
interest, known as covariates (CV).
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
try:
df = pd.read_csv("../datasets/salary_table.csv")
except:
url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/salary_table.csv'
df = pd.read_csv(url)
One-way AN(C)OVA
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
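The ANOVA table below is typically obtained like this (a sketch using the salary data frame df loaded above):

import statsmodels.api as sm
import statsmodels.formula.api as smf
oneway = smf.ols('salary ~ management + experience', df).fit()
print(oneway.summary())
print(sm.stats.anova_lm(oneway, typ=2))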
sum_sq df F PR(>F)
management 5.755739e+08 1.0 183.593466 4.054116e-17
experience 3.334992e+08 1.0 106.377768 3.349662e-13
Residual 1.348070e+08 43.0 NaN NaN
Two-way AN(C)OVA
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
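Similarly for the two-way model (a sketch):

twoway = smf.ols('salary ~ education + management + experience', df).fit()
print(twoway.summary())
print(sm.stats.anova_lm(twoway, typ=2))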
sum_sq df F PR(>F)
education 9.152624e+07 2.0 43.351589 7.672450e-11
management 5.075724e+08 1.0 480.825394 2.901444e-24
experience 3.380979e+08 1.0 320.281524 5.546313e-21
Residual 4.328072e+07 41.0 NaN NaN
oneway is nested within twoway. Comparing two nested models tells us if the additional predic-
tors (i.e. education) of the full model significantly decrease the residuals. Such comparison can
be done using an 𝐹 -test on residuals:
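In statsmodels this comparison can be done with (a sketch):

print(twoway.compare_f_test(oneway))  # returns (F, p-value, df difference)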
Factor coding
See http://statsmodels.sourceforge.net/devel/contrasts.html
By default Pandas use “dummy coding”. Explore:
print(twoway.model.data.param_names)
print(twoway.model.data.exog[:10, :])
import numpy as np
import scipy.stats as stats
import matplotlib.pyplot as plt

np.random.seed(seed=42)  # make example reproducible

# Dataset
n_samples, n_features = 100, 1000
n_info = int(n_features / 10)  # number of features with information
n1, n2 = int(n_samples / 2), n_samples - int(n_samples / 2)
snr = .5
Y = np.random.randn(n_samples, n_features)
grp = np.array(["g1"] * n1 + ["g2"] * n2)

# Add an effect of size snr to the n_info first features of group g1
# (this step is assumed here; it is not shown in this excerpt)
Y[grp == "g1", :n_info] += snr

# Univariate two-sample t-tests, one per feature
tvals, pvals = np.full(n_features, np.nan), np.full(n_features, np.nan)
for j in range(n_features):
    tvals[j], pvals[j] = stats.ttest_ind(Y[grp == "g1", j], Y[grp == "g2", j],
                                         equal_var=True)

# p-value histogram (the first two panels of this figure are not shown in this excerpt)
fig, axis = plt.subplots(3, 1, figsize=(9, 9))
axis[2].hist([pvals[n_info:], pvals[:n_info]],
             stacked=True, bins=100, label=["Negatives", "Positives"])
axis[2].set_xlabel("p-value histogram")
axis[2].set_ylabel("density")
axis[2].legend()
plt.tight_layout()
Note that under the null hypothesis the distribution of the p-values is uniform.
Statistical measures:
• True Positive (TP) equivalent to a hit. The test correctly concludes the presence of an
effect.
• True Negative (TN). The test correctly concludes the absence of an effect.
• False Positive (FP) equivalent to a false alarm, Type I error. The test improperly con-
cludes the presence of an effect. Thresholding at 𝑝-value < 0.05 leads to 47 FP.
• False Negative (FN) equivalent to a miss, Type II error. The test improperly concludes the
absence of an effect.
The Bonferroni correction is based on the idea that if an experimenter is testing 𝑃 hypothe-
ses, then one way of maintaining the familywise error rate (FWER) is to test each individual
hypothesis at a statistical significance level of 1/𝑃 times the desired maximum overall level.
So, if the desired significance level for the whole family of tests is 𝛼 (usually 0.05), then the
Bonferroni correction would test each individual hypothesis at a significance level of 𝛼/𝑃 . For
example, if a trial is testing 𝑃 = 8 hypotheses with a desired 𝛼 = 0.05, then the Bonferroni
correction would test each individual hypothesis at 𝛼 = 0.05/8 = 0.00625.
FDR-controlling procedures are designed to control the expected proportion of rejected null
hypotheses that were incorrect rejections (“false discoveries”). FDR-controlling procedures pro-
vide less stringent control of Type I errors compared to the familywise error rate (FWER) con-
trolling procedures (such as the Bonferroni correction), which control the probability of at least
one Type I error. Thus, FDR-controlling procedures have greater power, at the cost of increased
rates of Type I errors.
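A sketch of how both corrections could be applied to the p-values computed above, using statsmodels (the 0.05 level is the conventional choice):

from statsmodels.stats.multitest import multipletests

# Bonferroni correction: controls the family-wise error rate (FWER)
reject_bonf, pvals_bonf, _, _ = multipletests(pvals, alpha=0.05, method='bonferroni')
# Benjamini-Hochberg procedure: controls the false discovery rate (FDR)
reject_fdr, pvals_fdr, _, _ = multipletests(pvals, alpha=0.05, method='fdr_bh')

print("Bonferroni rejections:", reject_bonf.sum())
print("FDR (BH) rejections:", reject_fdr.sum())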
4.1.9 Exercises
Load the dataset: birthwt Risk Factors Associated with Low Infant Birth Weight at https://raw.
github.com/neurospin/pystatsml/master/datasets/birthwt.csv
1. Test the association of mother’s age and birth weight (bwt) using the correlation test and
linear regression.
2. Test the association of mother’s weight (lwt) and birth weight using the correlation test
and linear regression.
3. Produce two scatter plots: (i) age by birth weight; (ii) mother’s weight by birth weight.
Conclusion?
Consider the salary and the experience variables of the salary table: https://raw.github.com/
neurospin/pystatsml/master/datasets/salary_table.csv
Compute:
• Estimate the model parameters 𝛽, 𝛽0 using scipy stats.linregress(x,y)
• Compute the predicted values 𝑦ˆ
Compute:
• 𝑦¯: y_mu
• 𝑆𝑆tot : ss_tot
• 𝑆𝑆reg : ss_reg
• 𝑆𝑆res : ss_res
• Check partition of variance formula based on sum of squares by using
assert np.allclose(val1, val2, atol=1e-05)
• Compute 𝑅2 and compare it with the r_value above
• Compute the 𝐹 score
• Compute the 𝑝-value:
• Plot the 𝐹 (1, 𝑛) distribution for 100 𝑓 values within [10, 25]. Draw 𝑃 (𝐹 (1, 𝑛) > 𝐹 ),
i.e. color the surface defined by the 𝑥 values larger than 𝐹 below the 𝐹 (1, 𝑛).
• 𝑃 (𝐹 (1, 𝑛) > 𝐹 ) is the 𝑝-value, compute it.
Multiple regression
import numpy as np
from scipy import linalg
np.random.seed(seed=42) # make the example reproducible
# Dataset
N, P = 50, 4
X = np.random.normal(size= N * P).reshape((N, P))
## Our model needs an intercept so we add a column of 1s:
X[:, 0] = 1
print(X[:5, :])
Given the following two samples, test whether their means are equal:

𝑦 = 𝑔 + 𝜀,

where the noise 𝜀 ∼ 𝑁 (1, 1) and 𝑔 ∈ {0, 1} is a group indicator variable with 50 ones and 50
zeros.
• Write a function tstat(y, g) that computes the two-sample t-test of y split into two
groups defined by g.
• Sample the t-statistic distribution under the null hypothesis using random permutations.
• Assess the p-value.
Write a function univar_stat(df, target, variables) that computes the parametric statistics
and 𝑝-values between the target variable (provided as a string) and all variables (provided
as a list of strings) of the pandas DataFrame df. The target is a quantitative variable but vari-
ables may be quantitative or qualitative. The function returns a DataFrame with four columns:
variable, test, value, p_value.
Apply it to the salary dataset available at https://raw.github.com/neurospin/pystatsml/master/
datasets/salary_table.csv, with target being S: salaries for IT staff in a corporation.
Multiple comparisons
This exercise has two goals: apply your knowledge of statistics, and use vectorized numpy operations.
Given the dataset provided for multiple comparisons, compute the two-sample 𝑡-test (assuming
equal variance) for each (column) feature of the Y array given the two groups defined by grp
variable. You should return two vectors of size n_features: one for the 𝑡-values and one for the
𝑝-values.
ANOVA
# dataset
mu_k = np.array([1, 2, 3]) # means of 3 samples
sd_k = np.array([1, 1, 1]) # sd of 3 samples
n_k = np.array([10, 20, 30]) # sizes of 3 samples
grp = [0, 1, 2] # group labels
n = np.sum(n_k)
label = np.hstack([[k] * n_k[k] for k in [0, 1, 2]])
y = np.zeros(n)
for k in grp:
y[label == k] = np.random.normal(mu_k[k], sd_k[k], n_k[k])
4.2 Lab 1: Brain volumes study

The study provides the brain volumes of grey matter (gm), white matter (wm) and cerebrospinal
fluid (csf) of 808 anatomical MRI scans.
import os
import os.path
import pandas as pd
import tempfile
import urllib.request
WD = os.path.join(tempfile.gettempdir(), "brainvol")
os.makedirs(WD, exist_ok=True)
#os.chdir(WD)
Fetch data
• Demographic data demo.csv (columns: participant_id, site, group, age, sex) and tissue
volume data: group is Control or Patient. site is the recruiting site.
• Gray matter volume gm.csv (columns: participant_id, session, gm_vol)
• White matter volume wm.csv (columns: participant_id, session, wm_vol)
• Cerebrospinal Fluid csf.csv (columns: participant_id, session, csf_vol)
base_url = 'https://raw.github.com/neurospin/pystatsml/master/datasets/brain_volumes/%s'
data = dict()
os.makedirs(os.path.join(WD, "data"), exist_ok=True)  # make sure the target directory exists
for file in ["demo.csv", "gm.csv", "wm.csv", "csf.csv"]:
    urllib.request.urlretrieve(base_url % file, os.path.join(WD, "data", file))
brain_vol = brain_vol.dropna()
assert brain_vol.shape == (766, 9)
import os
import pandas as pd
import seaborn as sns
import statsmodels.formula.api as smfrmla
import statsmodels.api as sm
Descriptive statistics
Most participants have several MRI sessions (column session). Select only the rows from
session one, “ses-01”.
desc_glob_num = brain_vol1.describe()
print(desc_glob_num)
Out:
                  gm_vol
          count   mean   std   min   25%   50%   75%   max
group
Control   86.00   0.72  0.09  0.48  0.66  0.71  0.78  1.03
Patient  157.00   0.70  0.08  0.53  0.65  0.70  0.76  0.90
4.2.3 Statistics
Objectives:
1. Site effect of gray matter atrophy
2. Test the association between the age and gray matter atrophy in the control and patient
population independently.
3. Test for differences of atrophy between the patients and the controls
4. Test for interaction between age and clinical status, i.e.: is the brain atrophy process in the
patient population faster than in the control population?
5. The effect of the medication in the patient population.
import statsmodels.api as sm
import statsmodels.formula.api as smfrmla
import scipy.stats
import seaborn as sns
Plot
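The model behind the anova_lm call below is not shown in this excerpt; a sketch consistent with objective 1 (site effect on the grey matter fraction gm_f) could be:

anova = smfrmla.ols("gm_f ~ site", data=brain_vol1).fit()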
print(sm.stats.anova_lm(anova, typ=2))
2. Test the association between the age and gray matter atrophy in the control and patient
population independently.
Plot
Out:
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Age explains 10.57% of the grey matter fraction variance
--- In patient population ---
OLS Regression Results
==============================================================================
Dep. Variable: gm_f R-squared: 0.280
Model: OLS Adj. R-squared: 0.275
Method: Least Squares F-statistic: 60.16
Date: jeu., 31 oct. 2019 Prob (F-statistic): 1.09e-12
Time: 16:09:40 Log-Likelihood: 289.38
No. Observations: 157 AIC: -574.8
Df Residuals: 155 BIC: -568.7
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [0.025 0.975]
------------------------------------------------------------------------------
Intercept 0.5569 0.009 60.817 0.000 0.539 0.575
age -0.0019 0.000 -7.756 0.000 -0.002 -0.001
==============================================================================
Omnibus: 2.310 Durbin-Watson: 1.325
Prob(Omnibus): 0.315 Jarque-Bera (JB): 1.854
Skew: 0.230 Prob(JB): 0.396
Kurtosis: 3.268 Cond. No. 111.
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Age explains 27.96% of the grey matter fraction variance
Before testing for differences of atrophy between the patients and the controls, run preliminary
tests for an age x group effect (patients could be older or younger than controls).
Plot
Out:
Ttest_indResult(statistic=-1.2155557697674162, pvalue=0.225343592508479)
Out:
OLS Regression Results
==============================================================================
Dep. Variable: age R-squared: 0.006
Model: OLS Adj. R-squared: 0.002
Method: Least Squares F-statistic: 1.478
Date: jeu., 31 oct. 2019 Prob (F-statistic): 0.225
Time: 16:09:40 Log-Likelihood: -949.69
No. Observations: 243 AIC: 1903.
Df Residuals: 241 BIC: 1910.
Df Model: 1
Covariance Type: nonrobust
====================================================================================
coef std err t P>|t| [0.025 0.975]
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
No significant difference in age between patients and controls
Preliminary tests for sex x group (more/less males in patients than in Controls)
3. Test for differences of atrophy between the patients and the controls
Out:
sum_sq df F PR(>F)
group 0.00 1.00 0.01 0.92
Residual 0.46 241.00 nan nan
No significant difference in grey matter fraction between patients and controls
print(sm.stats.anova_lm(smfrmla.ols(
"gm_f ~ group + age + site", data=brain_vol1).fit(), typ=2))
print("No significant difference in age between patients and controls")
Out:
sum_sq df F PR(>F)
group 0.00 1.00 1.82 0.18
site 0.11 5.00 19.79 0.00
age 0.09 1.00 86.86 0.00
Residual 0.25 235.00 nan nan
No significant difference in grey matter fraction between patients and controls
4. Test for interaction between age and clinical status, i.e.: is the brain atrophy process in the
patient population faster than in the control population?
print("%.3f%% of grey matter loss per year (almost %.1f%% per decade)" %\
(ancova.params.age * 100, ancova.params.age * 100 * 10))
Out:
sum_sq df F PR(>F)
site 0.11 5.00 20.28 0.00
age 0.10 1.00 89.37 0.00
group:age 0.00 1.00 3.28 0.07
Residual 0.25 235.00 nan nan
= Parameters =
Intercept 0.52
site[T.S3] 0.01
site[T.S4] 0.03
site[T.S5] 0.01
site[T.S7] 0.06
site[T.S8] 0.02
age -0.00
group[T.Patient]:age -0.00
dtype: float64
-0.148% of grey matter loss per year (almost -1.5% per decade)
grey matter loss in patients is accelerated by -0.232% per decade
4.3 Multivariate statistics

Multivariate statistics includes all statistical techniques for analyzing samples made of two or
more variables. The data set (a 𝑁 × 𝑃 matrix X) is a collection of 𝑁 independent samples,
each described by 𝑃 variables.
Source: Wikipedia
Algebraic definition
The dot product, denoted “·”, of two 𝑃 -dimensional vectors a = [𝑎1 , 𝑎2 , ..., 𝑎𝑃 ] and b =
[𝑏1 , 𝑏2 , ..., 𝑏𝑃 ] is defined as

\mathbf{a} \cdot \mathbf{b} = \mathbf{a}^T \mathbf{b} = \sum_i a_i b_i =
\begin{bmatrix} a_1 & \dots & a_P \end{bmatrix}
\begin{bmatrix} b_1 \\ \vdots \\ b_P \end{bmatrix} .
The Euclidean norm of a vector can be computed using the dot product, as

\|\mathbf{a}\|_2 = \sqrt{\mathbf{a} \cdot \mathbf{a}} .
If two vectors are orthogonal (the angle between them is 90°), then

\mathbf{a} \cdot \mathbf{b} = 0 .

At the other extreme, if they are codirectional, then the angle between them is 0° and

\mathbf{a} \cdot \mathbf{b} = \|\mathbf{a}\|_2 \|\mathbf{b}\|_2 .

In particular, the dot product of a vector with itself is

\mathbf{a} \cdot \mathbf{a} = \|\mathbf{a}\|_2^2 .
The scalar projection (or scalar component) of a Euclidean vector a in the direction of a Eu-
clidean vector b is given by
𝑎𝑏 = ‖a‖2 cos 𝜃,
Fig. 5: Projection.
import numpy as np
np.random.seed(42)
a = np.random.randn(10)
b = np.random.randn(10)
np.dot(a, b)
-4.085788532659924
• The covariance matrix ΣXX is a symmetric positive semi-definite matrix whose element
in the 𝑗, 𝑘 position is the covariance between the 𝑗 𝑡ℎ and 𝑘 𝑡ℎ elements of a random vector
i.e. the 𝑗 𝑡ℎ and 𝑘 𝑡ℎ columns of X.
• The covariance matrix generalizes the notion of covariance to multiple dimensions.
• The covariance matrix describes the shape of the sample distribution around the mean,
assuming an elliptical distribution:
where

s_{jk} = s_{kj} = \frac{1}{N-1} \mathbf{x}_j^T \mathbf{x}_k = \frac{1}{N-1} \sum_{i=1}^{N} x_{ij} x_{ik}
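A minimal numpy check of this formula on simulated data (the array below is an illustrative assumption):

import numpy as np

np.random.seed(0)
data = np.random.randn(100, 3)                 # 100 samples, 3 variables

# Center each column, then apply s_jk = x_j^T x_k / (N - 1)
data_c = data - data.mean(axis=0)
S = data_c.T @ data_c / (data.shape[0] - 1)

assert np.allclose(S, np.cov(data, rowvar=False))  # matches numpy's estimator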
# Generate dataset
for i in range(len(mean)):
X[i] = np.random.multivariate_normal(mean[i], Cov[i], n_samples)
# Plot
for i in range(len(mean)):
# Points
plt.scatter(X[i][:, 0], X[i][:, 1], color=colors[i], label="class %i" % i)
# Means
plt.scatter(mean[i][0], mean[i][1], marker="o", s=200, facecolors='w',
edgecolors=colors[i], linewidth=2)
# Ellipses representing the covariance matrices
pystatsml.plot_utils.plot_cov_ellipse(Cov[i], pos=mean[i], facecolor='none',
linewidth=2, edgecolor=colors[i])
plt.axis('equal')
_ = plt.legend(loc='upper left')
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
url = 'https://python-graph-gallery.com/wp-content/uploads/mtcars.csv'
df = pd.read_csv(url)
corr = df.corr()  # correlation matrix (this step is assumed; it is not shown in this excerpt)

f, ax = plt.subplots(figsize=(5.5, 4.5))
cmap = sns.color_palette("RdBu_r", 11)
# Draw the heatmap with the mask and correct aspect ratio
_ = sns.heatmap(corr, mask=None, cmap=cmap, vmax=1, center=0,
                square=True, linewidths=.5, cbar_kws={"shrink": .5})
reordered = np.concatenate(clusters)
R = corr.loc[reordered, reordered]
f, ax = plt.subplots(figsize=(5.5, 4.5))
# Draw the heatmap with the mask and correct aspect ratio
_ = sns.heatmap(R, mask=None, cmap=cmap, vmax=1, center=0,
square=True, linewidths=.5, cbar_kws={"shrink": .5})
[['mpg', 'cyl', 'disp', 'hp', 'wt', 'qsec', 'vs', 'carb'], ['am', 'gear'], ['drat']]
In statistics, precision is the reciprocal of the variance, and the precision matrix is the matrix
inverse of the covariance matrix.
It is related to partial correlations that measures the degree of association between two vari-
ables, while controlling the effect of other variables.
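The code that builds the Prec and Pcor matrices printed below is not reproduced in this excerpt; a minimal sketch of the computation they imply (inverting a covariance matrix Cov into a precision matrix, then rescaling it into partial correlations) could be:

import numpy as np
from scipy import linalg

def partial_correlations(Cov):
    """Precision matrix and partial correlations from a covariance matrix."""
    Prec = linalg.inv(Cov)                 # precision matrix
    d = np.sqrt(np.diag(Prec))
    Pcor = -Prec / np.outer(d, d)          # rho_jk = -p_jk / sqrt(p_jj * p_kk)
    np.fill_diagonal(Pcor, 1.0)
    return Prec, Pcor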
import numpy as np
print(Pcor.round(2))
# Precision matrix:
[[ 6.79 -3.21 -3.21 0. 0. 0. ]
[-3.21 6.79 -3.21 0. 0. 0. ]
[-3.21 -3.21 6.79 0. 0. 0. ]
[ 0. -0. -0. 5.26 -4.74 -0. ]
[ 0. 0. 0. -4.74 5.26 0. ]
[ 0. 0. 0. 0. 0. 1. ]]
# Partial correlations:
[[ nan 0.47 0.47 -0. -0. -0. ]
[ nan nan 0.47 -0. -0. -0. ]
[ nan nan nan -0. -0. -0. ]
[ nan nan nan nan 0.9 0. ]
[ nan nan nan nan nan -0. ]
[ nan nan nan nan nan nan]]
• The Mahalanobis distance is a measure of the distance between two points x and 𝜇 where
the dispersion (i.e. the covariance structure) of the samples is taken into account.
• The dispersion is considered through covariance matrix.
This is formally expressed as

D_M(\mathbf{x}, \boldsymbol{\mu}) = \sqrt{(\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu})} .
Intuitions
• Distances along the principal directions of dispersion are contracted since they correspond
to likely dispersion of points.
• Distances orthogonal to the principal directions of dispersion are dilated since they corre-
spond to unlikely dispersion of points.
For example,

D_M(\mathbf{1}) = \sqrt{\mathbf{1}^T \boldsymbol{\Sigma}^{-1} \mathbf{1}} .
ones = np.ones(Cov.shape[0])
d_euc = np.sqrt(np.dot(ones, ones))
d_mah = np.sqrt(np.dot(np.dot(ones, Prec), ones))
The first dot product shows that distances along the principal directions of dispersion are contracted:
print(np.dot(ones, Prec))
import numpy as np
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
import pystatsml.plot_utils
%matplotlib inline
np.random.seed(40)
colors = sns.color_palette()
Covi = scipy.linalg.inv(Cov)
dm_m_x1 = scipy.spatial.distance.mahalanobis(mean, x1, Covi)
dm_m_x2 = scipy.spatial.distance.mahalanobis(mean, x2, Covi)
# Plot distances
vm_x1 = (x1 - mean) / d2_m_x1
vm_x2 = (x2 - mean) / d2_m_x2
jitter = .1
plt.plot([mean[0] - jitter, d2_m_x1 * vm_x1[0] - jitter],
[mean[1], d2_m_x1 * vm_x1[1]], color='k')
plt.plot([mean[0] - jitter, d2_m_x2 * vm_x2[0] - jitter],
[mean[1], d2_m_x2 * vm_x2[1]], color='k')
plt.legend(loc='lower right')
plt.text(-6.1, 3,
'Euclidian: d(m, x1) = %.1f<d(m, x2) = %.1f' % (d2_m_x1, d2_m_x2), color='k')
plt.text(-6.1, 3.5,
'Mahalanobis: d(m, x1) = %.1f>d(m, x2) = %.1f' % (dm_m_x1, dm_m_x2), color='r')
plt.axis('equal')
print('Euclidian d(m, x1) = %.2f < d(m, x2) = %.2f' % (d2_m_x1, d2_m_x2))
print('Mahalanobis d(m, x1) = %.2f > d(m, x2) = %.2f' % (dm_m_x1, dm_m_x2))
If the covariance matrix is the identity matrix, the Mahalanobis distance reduces to the Eu-
clidean distance. If the covariance matrix is diagonal, then the resulting distance measure is
called a normalized Euclidean distance.
More generally, the Mahalanobis distance is a measure of the distance between a point x and a
distribution 𝒩 (x|𝜇, Σ). It is a multi-dimensional generalization of the idea of measuring how
many standard deviations away x is from the mean. This distance is zero if x is at the mean,
and grows as x moves away from the mean: along each principal component axis, it measures
the number of standard deviations from x to the mean of the distribution.
The distribution, or probability density function (PDF) (sometimes just density), of a continuous
random variable is a function that describes the relative likelihood for this random variable to
take on a given value.
The multivariate normal distribution, or multivariate Gaussian distribution, of a 𝑃 -dimensional
random vector x = [𝑥1 , 𝑥2 , . . . , 𝑥𝑃 ]𝑇 is
\mathcal{N}(\mathbf{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma}) = \frac{1}{(2\pi)^{P/2} |\boldsymbol{\Sigma}|^{1/2}} \exp\left\{ -\frac{1}{2} (\mathbf{x} - \boldsymbol{\mu})^T \boldsymbol{\Sigma}^{-1} (\mathbf{x} - \boldsymbol{\mu}) \right\} .
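The helper multivariate_normal_pdf used in the snippet below is not defined in this excerpt; a minimal sketch consistent with the formula above (evaluating the density at each row of X) could be:

import numpy as np

def multivariate_normal_pdf(X, mu, sigma):
    """Multivariate normal density evaluated at each row of X."""
    P = len(mu)
    det = np.linalg.det(sigma)
    norm_const = 1.0 / (np.power(2 * np.pi, P / 2) * np.sqrt(det))
    Xc = X - mu
    # Quadratic form (x - mu)^T Sigma^-1 (x - mu), one value per sample
    mahal2 = np.sum((Xc @ np.linalg.inv(sigma)) * Xc, axis=1)
    return norm_const * np.exp(-0.5 * mahal2)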
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats
from scipy.stats import multivariate_normal
from mpl_toolkits.mplot3d import Axes3D
# x, y grid
x, y = np.mgrid[-3:3:.1, -3:3:.1]
X = np.stack((x.ravel(), y.ravel())).T
norm = multivariate_normal_pdf(X, mean, sigma).reshape(x.shape)
# Do it with scipy
norm_scpy = multivariate_normal(mu, sigma).pdf(np.stack((x, y), axis=2))
assert np.allclose(norm, norm_scpy)
# Plot
fig = plt.figure(figsize=(10, 7))
ax = fig.gca(projection='3d')
surf = ax.plot_surface(x, y, norm, rstride=3,
cstride=3, cmap=plt.cm.coolwarm,
linewidth=1, antialiased=False
)
ax.set_zlim(0, 0.2)
ax.zaxis.set_major_locator(plt.LinearLocator(10))
ax.zaxis.set_major_formatter(plt.FormatStrFormatter('%.02f'))
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('p(x)')
4.3.8 Exercises
4. Compute S−1 (Sinv) the inverse of the covariance matrix by using scipy.linalg.inv(S).
5. Write a function mahalanobis(x, xbar, Sinv) that computes the Mahalanobis distance
of a vector x to the mean, x̄.
6. Compute the Mahalanobis and Euclidean distances of each sample x𝑖 to the mean x̄. Store
the results in a 100 × 2 dataframe.
4.4 Time Series in python

Two libraries:
• Pandas: https://pandas.pydata.org/pandas-docs/stable/timeseries.html
• statsmodels: http://www.statsmodels.org/devel/tsa.html
4.4.1 Stationarity
A TS is said to be stationary if its statistical properties, such as the mean and variance, remain
constant over time:
• constant mean
• constant variance
• an autocovariance that does not depend on time.
What makes a TS non-stationary? There are two major reasons behind the non-stationarity of
a TS:
1. Trend – varying mean over time. For example, in this case we saw that, on average, the
number of passengers was growing over time.
2. Seasonality – variations at specific time frames, e.g. people might have a tendency to buy
cars in a particular month because of a pay increment or festivals.
import pandas as pd
import numpy as np
# String as index
prices = {'apple': 4.99,
'banana': 1.99,
'orange': 3.99}
ser = pd.Series(prices)
0 1
1 3
dtype: int64
apple 4.99
banana 1.99
orange 3.99
dtype: float64
a 1
b 2
dtype: int64
2
source: https://www.datacamp.com/community/tutorials/time-series-analysis-tutorial
Get Google Trends data of keywords such as ‘diet’ and ‘gym’ and see how they vary over time
while learning about trends and seasonality in time series data.
In the Facebook Live code along session on the 4th of January, we checked out Google trends
data of keywords ‘diet’, ‘gym’ and ‘finance’ to see how they vary over time. We asked ourselves
if there could be more searches for these terms in January when we’re all trying to turn over a
new leaf?
In this tutorial, you’ll go through the code that we put together during the session step by step.
You’re not going to do much mathematics but you are going to do the following:
• Read data
• Recode data
• Exploratory Data Analysis
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
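# (Loading the Google Trends CSV is not shown in this excerpt; `df` is assumed
#  to already hold the data, with a 'month' column and one column per keyword.)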
print(df.head())
# Rename columns
df.columns = ['month', 'diet', 'gym', 'finance']
# Describe
print(df.describe())
Next, you’ll turn the ‘month’ column into a DateTime data type and make it the index of the
DataFrame.
Note that you do this because you saw in the result of the .info() method that the ‘Month’
column was actually of data type object. Now, that generic data type encapsulates everything
from strings to integers, etc. That’s not exactly what you want when you want to be looking
at time series data. That’s why you’ll use .to_datetime() to convert the ‘month’ column in your
DataFrame to a DateTime.
Be careful! Make sure to include the inplace argument when you’re setting the index of the
DataFrame df so that you actually alter the original index and set it to the ‘month’ column.
df.month = pd.to_datetime(df.month)
df.set_index('month', inplace=True)
print(df.head())
You can use a built-in pandas visualization method .plot() to plot your data as 3 line plots on a
single figure (one for each column, namely, ‘diet’, ‘gym’, and ‘finance’).
df.plot()
plt.xlabel('Year');
Note that this data is relative. As you can read on Google trends:
Numbers represent search interest relative to the highest point on the chart for the given region
and time. A value of 100 is the peak popularity for the term. A value of 50 means that the term
is half as popular. Likewise a score of 0 means the term was less than 1% as popular as the
peak.
Rolling average: for each time point, take the average of the points on either side of it, where
the number of points is specified by a window size.
diet = df['diet']
diet_resamp_yr = diet.resample('A').mean()
diet_roll_yr = diet.rolling(12).mean()
<matplotlib.legend.Legend at 0x7f0db4e0a2b0>
x = np.asarray(df[['diet']])
win = 12
win_half = int(win / 2)
# print([((idx-win_half), (idx+win_half)) for idx in np.arange(win_half, len(x))])
[<matplotlib.lines.Line2D at 0x7f0db4cfea90>]
gym = df['gym']
Detrending
df.diff().plot()
plt.xlabel('Year')
df.plot()
plt.xlabel('Year');
print(df.corr())
sns.heatmap(df.corr(), cmap="coolwarm")
<matplotlib.axes._subplots.AxesSubplot at 0x7f0db29f3ba8>
‘diet’ and ‘gym’ are negatively correlated! Remember that you have a seasonal and a trend
component. From the correlation coefficient, ‘diet’ and ‘gym’ are negatively correlated:
• the trend components are negatively correlated;
• the seasonal components would be positively correlated.
The actual correlation coefficient captures both of these.
Seasonal correlation: correlation of the first-order differences of these time series
df.diff().plot()
plt.xlabel('Year');
print(df.diff().corr())
<matplotlib.axes._subplots.AxesSubplot at 0x7f0db28aeb70>
x = gym
plt.subplot(411)
plt.plot(x, label='Original')
plt.legend(loc='best')
plt.subplot(412)
plt.plot(trend, label='Trend')
plt.legend(loc='best')
plt.subplot(413)
plt.plot(seasonal,label='Seasonality')
plt.legend(loc='best')
plt.subplot(414)
plt.plot(residual, label='Residuals')
plt.legend(loc='best')
plt.tight_layout()
4.4.10 Autocorrelation
A time series is periodic if it repeats itself at equally spaced intervals, say, every 12 months.
Autocorrelation Function (ACF): a measure of the correlation between the TS and a lagged
version of itself. For instance, at lag 5, the ACF would compare the series at time instants
t1 . . . t2 with the same series shifted by 5 instants, i.e. t1-5 . . . t2-5.
Plot

from pandas.plotting import autocorrelation_plot

x = df["diet"].astype(float)
autocorrelation_plot(x)
<matplotlib.axes._subplots.AxesSubplot at 0x7f0db25b2dd8>
ACF peaks every 12 months: Time series is correlated with itself shifted by 12 months.
4.4.11 Time Series Forecasting with Python using Autoregressive Moving Average
(ARMA) models
Source:
• https://www.packtpub.com/mapt/book/big_data_and_business_intelligence/
9781783553358/7/ch07lvl1sec77/arma-models
• http://en.wikipedia.org/wiki/Autoregressive%E2%80%93moving-average_model
• ARIMA: https://www.analyticsvidhya.com/blog/2016/02/
time-series-forecasting-codes-python/
ARMA models are often used to forecast a time series. These models combine autoregressive
and moving average models. In moving average models, we assume that a variable is the sum
of the mean of the time series and a linear combination of noise components.
The autoregressive and moving average models can have different orders. In general, we can
define an ARMA model with p autoregressive terms and q moving average terms as follows:
x_t = \sum_{i=1}^{p} a_i x_{t-i} + \sum_{i=1}^{q} b_i \varepsilon_{t-i} + \varepsilon_t
Choosing p and q
Plot the partial autocorrelation function for an estimate of p, and likewise use the autocorrelation
function for an estimate of q.
Partial Autocorrelation Function (PACF): this measures the correlation between the TS and a
lagged version of itself, but after eliminating the variations already explained by the intervening
comparisons. E.g. at lag 5, it will check the correlation but remove the effects already explained
by lags 1 to 4.
x = df["gym"].astype(float)
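# x_diff, lag_acf and lag_pacf used below are not defined in this excerpt;
# a sketch (assuming statsmodels, with nlags chosen for illustration):
from statsmodels.tsa.stattools import acf, pacf
x_diff = x.diff().dropna()                        # first-order differencing
lag_acf = acf(x_diff, nlags=36)                   # autocorrelation function
lag_pacf = pacf(x_diff, nlags=36, method='ols')   # partial autocorrelation function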
#Plot ACF:
plt.subplot(121)
plt.plot(lag_acf)
plt.axhline(y=0,linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(x_diff)),linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(x_diff)),linestyle='--',color='gray')
plt.title('Autocorrelation Function (q=1)')
#Plot PACF:
plt.subplot(122)
plt.plot(lag_pacf)
plt.axhline(y=0,linestyle='--',color='gray')
plt.axhline(y=-1.96/np.sqrt(len(x_diff)),linestyle='--',color='gray')
plt.axhline(y=1.96/np.sqrt(len(x_diff)),linestyle='--',color='gray')
plt.title('Partial Autocorrelation Function (p=1)')
plt.tight_layout()
In this plot, the two dotted lines on either side of 0 are the confidence intervals. These can be
used to determine the p and q values as:
• p: the lag value where the PACF chart crosses the upper confidence interval for the first
time.
• q: the lag value where the ACF chart crosses the upper confidence interval for the first
time.
1. Define the model by calling ARMA() and passing in the p and q parameters (see the sketch
below).
2. The model is prepared on the training data by calling the fit() function.
3. Predictions can be made by calling the predict() function and specifying the index of the
time or times to be predicted.
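The model construction itself is not shown in this excerpt; a minimal sketch, assuming the older statsmodels ARMA API used throughout this document and an ARMA(1, 1) model on the series x:

from statsmodels.tsa.arima_model import ARMA

model = ARMA(x, order=(1, 1)).fit(disp=False)  # p=1 autoregressive term, q=1 moving average term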
print(model.summary())
plt.plot(x)
plt.plot(model.predict(), color='red')
plt.title('RSS: %.4f'% sum((model.fittedvalues-x)**2))
CHAPTER
FIVE

MACHINE LEARNING
5.1.1 Introduction
In machine learning and statistics, dimensionality reduction or dimension reduction is the pro-
cess of reducing the number of features under consideration, and can be divided into feature
selection (not addressed here) and feature extraction.
Feature extraction starts from an initial set of measured data and builds derived values (fea-
tures) intended to be informative and non-redundant, facilitating the subsequent learning and
generalization steps, and in some cases leading to better human interpretations. Feature extrac-
tion is related to dimensionality reduction.
The input matrix X, of dimension 𝑁 × 𝑃 , is

\mathbf{X} = \begin{bmatrix}
x_{11} & \dots & x_{1P} \\
\vdots & \ddots & \vdots \\
x_{N1} & \dots & x_{NP}
\end{bmatrix}
where the rows represent the samples and columns represent the variables.
The goal is to learn a transformation that extracts a few relevant features. This is generally
done by exploiting the covariance ΣXX between the input features.
Decompose the data matrix X𝑁 ×𝑃 into a product of a mixing matrix U𝑁 ×𝐾 and a dictionary
matrix V𝑃 ×𝐾 :

X = UV^T .

If only 𝐾 < 𝑃 components are kept, the reconstruction is approximate:

X ≈ X̂ = UV^T .
The singular value decomposition (SVD) provides such a factorization:

X = UDV^T ,

where

\begin{bmatrix} x_{11} & & x_{1P} \\ & \mathbf{X} & \\ x_{N1} & & x_{NP} \end{bmatrix}
=
\begin{bmatrix} u_{11} & & u_{1K} \\ & \mathbf{U} & \\ u_{N1} & & u_{NK} \end{bmatrix}
\begin{bmatrix} d_1 & & 0 \\ & \mathbf{D} & \\ 0 & & d_K \end{bmatrix}
\begin{bmatrix} v_{11} & & v_{1P} \\ & \mathbf{V}^T & \\ v_{K1} & & v_{KP} \end{bmatrix} .
V: right-singular vectors
• V = [v1 , · · · , v𝐾 ] is a 𝑃 × 𝐾 orthogonal matrix.
• It is a dictionary of patterns to be combined (according to the mixing coefficients) to
reconstruct the original samples.
• V performs the initial rotations (projection) along the 𝐾 = min(𝑁, 𝑃 ) principal compo-
nent directions, also called loadings.
• Each v𝑗 performs the linear combination of the variables that has maximum sample vari-
ance, subject to being uncorrelated with the previous v𝑗−1 .
D: singular values
• D is a 𝐾 × 𝐾 diagonal matrix made of the singular values of X with 𝑑1 ≥ 𝑑2 ≥ · · · ≥
𝑑𝐾 ≥ 0.
• D scales the projection along the coordinate axes by 𝑑1 , 𝑑2 , · · · , 𝑑𝐾 .
V transforms correlated variables (X) into a set of uncorrelated ones (UD) that better expose
the various relationships among the original data items.
X = UDV^T          (5.1)
XV = UDV^T V       (5.2)
XV = UD I          (5.3)
XV = UD            (5.4)
At the same time, SVD is a method for identifying and ordering the dimensions along which
data points exhibit the most variation.
import numpy as np
import scipy
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
np.random.seed(42)
# dataset
n_samples = 100
experience = np.random.normal(size=n_samples)
salary = 1500 + experience + np.random.normal(size=n_samples, scale=.5)
X = np.column_stack([experience, salary])

# Compute the SVD of the centered data
# (this step is assumed; it is not shown in this excerpt)
X = X - X.mean(axis=0)
U, s, Vh = scipy.linalg.svd(X, full_matrices=False)
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.scatter(U[:, 0], U[:, 1], s=50)
plt.axis('equal')
plt.title("U: Rotated and scaled data")
plt.subplot(132)
# Project data
PC = np.dot(X, Vh.T)
plt.scatter(PC[:, 0], PC[:, 1], s=50)
plt.axis('equal')
plt.title("XV: Rotated data")
plt.xlabel("PC1")
plt.ylabel("PC2")
plt.subplot(133)
plt.scatter(X[:, 0], X[:, 1], s=50)
for i in range(Vh.shape[0]):
plt.arrow(x=0, y=0, dx=Vh[i, 0], dy=Vh[i, 1], head_width=0.2,
head_length=0.2, linewidth=2, fc='r', ec='r')
plt.text(Vh[i, 0], Vh[i, 1],'v%i' % (i+1), color="r", fontsize=15,
horizontalalignment='right', verticalalignment='top')
plt.axis('equal')
plt.ylim(-4, 4)
plt.tight_layout()
Sources:
• C. M. Bishop Pattern Recognition and Machine Learning, Springer, 2006
• Everything you did and didn’t know about PCA
• Principal Component Analysis in 3 Simple Steps
Principles
• Principal components analysis is the main method used for linear dimension reduction.
• The idea of principal component analysis is to find the 𝐾 principal components di-
rections (called the loadings) V𝐾×𝑃 that capture the variation in the data as much as
possible.
• It converts a set of 𝑁 𝑃 -dimensional observations X𝑁 ×𝑃 of possibly correlated variables
into a set of 𝑁 𝐾-dimensional samples C𝑁 ×𝐾 , where 𝐾 < 𝑃 . The new variables are
linearly uncorrelated. The columns of C𝑁 ×𝐾 are called the principal components.
• The dimension reduction is obtained by using only 𝐾 < 𝑃 components that exploit corre-
lation (covariance) among the original variables.
• PCA is mathematically defined as an orthogonal linear transformation V𝐾×𝑃 that trans-
forms the data to a new coordinate system such that the greatest variance by some projec-
tion of the data comes to lie on the first coordinate (called the first principal component),
the second greatest variance on the second coordinate, and so on.
C𝑁 ×𝐾 = X𝑁 ×𝑃 V𝑃 ×𝐾
• PCA can be thought of as fitting a 𝑃 -dimensional ellipsoid to the data, where each axis of
the ellipsoid represents a principal component. If some axis of the ellipse is small, then the
variance along that axis is also small, and by omitting that axis and its corresponding prin-
cipal component from our representation of the dataset, we lose only a commensurately
small amount of information.
• Finding the 𝐾 largest axes of the ellipse will permit to project the data onto a space having
dimensionality 𝐾 < 𝑃 while maximizing the variance of the projected data.
Dataset preprocessing
Centering
Consider a data matrix, X , with column-wise zero empirical mean (the sample mean of each
column has been shifted to zero), ie. X is replaced by X − 1x̄𝑇 .
Standardizing
Optionally, standardize the columns, i.e., scale them by their standard deviation. Without stan-
dardization, a variable with a high variance will capture most of the effect of the PCA. The
principal direction will be aligned with this variable. Standardization will, however, raise noise
variables to the same level as informative variables.
The covariance matrix of centered standardized data is the correlation matrix.
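A minimal numpy sketch of these two preprocessing steps (the data matrix below is an illustrative assumption):

import numpy as np

X_demo = np.random.randn(20, 3)           # any N x P data matrix (illustrative)
Xc = X_demo - X_demo.mean(axis=0)         # centering: column-wise zero mean
Xcs = Xc / Xc.std(axis=0, ddof=1)         # standardizing: unit standard deviation per column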
To begin with, consider the projection onto a one-dimensional space (𝐾 = 1). We can define
the direction of this space using a 𝑃 -dimensional vector v, which for convenience (and without
loss of generality) we shall choose to be a unit vector so that ‖v‖2 = 1 (note that we are only
interested in the direction defined by v, not in the magnitude of v itself). PCA consists of two
main steps:
Projection in the directions that capture the greatest variance
Each 𝑃 -dimensional data point x𝑖 is then projected onto v, where the coordinate (in the co-
ordinate system of v) is a scalar value, namely x𝑇𝑖 v. I.e., we want to find the vector v that
maximizes these coordinates along v, which we will see corresponds to maximizing the vari-
ance of the projected data. This is equivalently expressed as
\mathbf{v} = \arg\max_{\|\mathbf{v}\|=1} \frac{1}{N} \sum_i \left( \mathbf{x}_i^T \mathbf{v} \right)^2 .
This quantity can be written \mathbf{v}^T S_{XX} \mathbf{v}, where S_{XX} is a biased estimate of the covariance matrix
of the data, i.e.

S_{XX} = \frac{1}{N} \mathbf{X}^T \mathbf{X} .
We now maximize the projected variance \mathbf{v}^T S_{XX} \mathbf{v} with respect to \mathbf{v}. Clearly, this has to be a
constrained maximization to prevent \|\mathbf{v}\|_2 → ∞. The appropriate constraint comes from the
normalization condition \|\mathbf{v}\|_2^2 = \mathbf{v}^T \mathbf{v} = 1. To enforce this constraint, we introduce a
Lagrange multiplier that we shall denote by 𝜆, and then make an unconstrained maximization
of

\mathbf{v}^T S_{XX} \mathbf{v} + \lambda (1 - \mathbf{v}^T \mathbf{v}) .
By setting the gradient with respect to v equal to zero, we see that this quantity has a stationary
point when
SXX v = 𝜆v.
If we left-multiply by \mathbf{v}^T and make use of \mathbf{v}^T \mathbf{v} = 1, we see that the projected variance is

\mathbf{v}^T S_{XX} \mathbf{v} = \lambda ,
and so the variance will be at a maximum when v is equal to the eigenvector corresponding to
the largest eigenvalue, 𝜆. This eigenvector is known as the first principal component.
We can define additional principal components in an incremental fashion by choosing each new
direction to be that which maximizes the projected variance amongst all possible directions that
are orthogonal to those already considered. If we consider the general case of a 𝐾-dimensional
projection space, the optimal linear projection for which the variance of the projected data is
maximized is now defined by the 𝐾 eigenvectors, v1 , . . . , vK , of the data covariance matrix
SXX that corresponds to the 𝐾 largest eigenvalues, 𝜆1 ≥ 𝜆2 ≥ · · · ≥ 𝜆𝐾 .
Back to SVD
X^T X = (UDV^T)^T (UDV^T)
      = V D^T U^T U D V^T
      = V D^2 V^T

V^T X^T X V = D^2

\frac{1}{N-1} V^T X^T X V = \frac{1}{N-1} D^2

V^T S_{XX} V = \frac{1}{N-1} D^2 .
Considering only the 𝑘 𝑡ℎ right-singular vector v𝑘 associated with the singular value 𝑑𝑘 ,

\mathbf{v}_k^T S_{XX} \mathbf{v}_k = \frac{1}{N-1} d_k^2 ,
It turns out that if you have done the singular value decomposition then you already have the
eigenvalue decomposition of X^T X, where:
• the eigenvectors of S_{XX} are equivalent to the right-singular vectors, V, of X;
• the eigenvalues, 𝜆𝑘 , of S_{XX} , i.e. the variances of the components, are equal to 1/(𝑁 − 1)
times the squared singular values, d_k^2.
Moreover, computing PCA with SVD does not require forming the matrix X^T X, so computing
the SVD is now the standard way to calculate a principal components analysis from a data
matrix, unless only a handful of components are required.
PCA outputs
The SVD or the eigendecomposition of the data covariance matrix provides three main quanti-
ties:
1. Principal component directions or loadings are the eigenvectors of X𝑇 X. The V𝐾×𝑃
or the right-singular vectors of an SVD of X are called principal component directions of
X. They are generally computed using the SVD of X.
2. Principal components is the 𝑁 × 𝐾 matrix C which is obtained by projecting X onto the
principal components directions, i.e.
C𝑁 ×𝐾 = X𝑁 ×𝑃 V𝑃 ×𝐾 .
Since X = UDV𝑇 and V is orthogonal (V𝑇 V = I):
C_{N \times K} = U D V^T V          (5.5)
C_{N \times K} = U D I_{K \times K} (5.6)
C_{N \times K} = U D                (5.7)
Thus c𝑗 = Xv𝑗 = u𝑗 𝑑𝑗 , for 𝑗 = 1, . . . 𝐾. Hence u𝑗 is simply the projection of the row vectors of
X, i.e., the input predictor vectors, on the direction v𝑗 , scaled by 𝑑𝑗 .
\mathbf{c}_1 = \begin{bmatrix}
x_{1,1} v_{1,1} + \dots + x_{1,P} v_{1,P} \\
x_{2,1} v_{1,1} + \dots + x_{2,P} v_{1,P} \\
\vdots \\
x_{N,1} v_{1,1} + \dots + x_{N,P} v_{1,P}
\end{bmatrix}
var(\mathbf{c}_k) = \frac{1}{N-1} (\mathbf{X} \mathbf{v}_k)^2     (5.9)
                  = \frac{1}{N-1} (\mathbf{u}_k d_k)^2            (5.10)
                  = \frac{1}{N-1} d_k^2                           (5.11)
We must choose 𝐾 * ∈ [1, . . . , 𝐾], the number of required components. This can be done by
calculating the explained variance ratio of the 𝐾 * first components and by choosing 𝐾 * such
that the cumulative explained variance ratio is greater than some given threshold (e.g., ≈
90%). This is expressed as
\text{cumulative explained variance}(\mathbf{c}_k) = \frac{\sum_{j=1}^{K^*} var(\mathbf{c}_j)}{\sum_{j=1}^{K} var(\mathbf{c}_j)} .
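A sketch of this selection rule with scikit-learn (the data matrix and the 90% threshold are illustrative assumptions):

import numpy as np
from sklearn.decomposition import PCA

X_demo = np.random.randn(100, 10)                  # illustrative data matrix
pca = PCA().fit(X_demo - X_demo.mean(axis=0))
cumvar = np.cumsum(pca.explained_variance_ratio_)  # cumulative explained variance ratio
K_star = int(np.searchsorted(cumvar, 0.90) + 1)    # smallest K* reaching the 90% threshold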
PCs
Plot the samples projected on the first principal components, e.g. PC1 against PC2.
PC directions
Exploring the loadings associated with a component provides the contribution of each original
variable in the component.
Remark: The loadings (PC directions) are the coefficients of multiple regression of PC on origi-
nal variables:
\mathbf{c} = \mathbf{X}\mathbf{v}                                    (5.12)
\mathbf{X}^T \mathbf{c} = \mathbf{X}^T \mathbf{X} \mathbf{v}          (5.13)
(\mathbf{X}^T \mathbf{X})^{-1} \mathbf{X}^T \mathbf{c} = \mathbf{v}   (5.14)
Another way to evaluate the contribution of the original variables in each PC can be obtained
by computing the correlation between the PCs and the original variables, i.e. columns of X,
denoted x𝑗 , for 𝑗 = 1, . . . , 𝑃 . For the 𝑘 𝑡ℎ PC, compute and plot the correlations with all original
variables
cor(\mathbf{c}_k, \mathbf{x}_j), \quad j = 1, \dots, P .
import numpy as np
from sklearn.decomposition import PCA
import matplotlib.pyplot as plt
np.random.seed(42)
# dataset
n_samples = 100
experience = np.random.normal(size=n_samples)
salary = 1500 + experience + np.random.normal(size=n_samples, scale=.5)
X = np.column_stack([experience, salary])

# Fit the PCA and print the explained variance ratios
# (this step is assumed; it is not shown in this excerpt)
pca = PCA(n_components=2)
pca.fit(X)
print(pca.explained_variance_ratio_)

PC = pca.transform(X)
plt.subplot(121)
plt.scatter(X[:, 0], X[:, 1])
plt.xlabel("x1"); plt.ylabel("x2")
plt.subplot(122)
plt.scatter(PC[:, 0], PC[:, 1])
plt.xlabel("PC1 (var=%.2f)" % pca.explained_variance_ratio_[0])
plt.ylabel("PC2 (var=%.2f)" % pca.explained_variance_ratio_[1])
plt.axis('equal')
plt.tight_layout()
[0.93646607 0.06353393]
Resources:
• http://www.stat.pitt.edu/sungkyu/course/2221Fall13/lec8_mds_combined.pdf
• https://en.wikipedia.org/wiki/Multidimensional_scaling
• Hastie, Tibshirani and Friedman (2009). The Elements of Statistical Learning: Data Mining,
Inference, and Prediction. New York: Springer, Second Edition.
The purpose of MDS is to find a low-dimensional projection of the data in which the pairwise
distances between data points is preserved, as closely as possible (in a least-squares sense).
• Let D be the (𝑁 × 𝑁 ) pairwise distance matrix where 𝑑𝑖𝑗 is a distance between points 𝑖
and 𝑗.
• The MDS concept can be extended to a wide variety of data types specified in terms of a
similarity matrix.
Given the dissimilarity (distance) matrix D𝑁 ×𝑁 = [𝑑𝑖𝑗 ], MDS attempts to find 𝐾-dimensional
projections of the 𝑁 points x1 , . . . , x𝑁 ∈ R𝐾 , concatenated in an X𝑁 ×𝐾 matrix, so that 𝑑𝑖𝑗 ≈
‖x𝑖 − x𝑗 ‖ are as close as possible. This can be obtained by the minimization of a loss function
called the stress function
stress(\mathbf{X}) = \sum_{i \neq j} \left( d_{ij} - \|\mathbf{x}_i - \mathbf{x}_j\| \right)^2 .
The Sammon mapping performs better at preserving small distances compared to the least-
squares scaling.
Example
The eurodist dataset provides the road distances (in kilometers) between 21 cities in Europe.
Given this matrix of pairwise (non-Euclidean) distances D = [𝑑𝑖𝑗 ], MDS can be used to recover
the coordinates of the cities in some Euclidean referential whose orientation is arbitrary.
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
print(df.iloc[:5, :5])
city = df["city"]
D = np.array(df.iloc[:, 1:]) # Distance matrix
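# The MDS fit itself is not shown in this excerpt; a sketch using scikit-learn
# on the precomputed distance matrix D:
from sklearn.manifold import MDS
mds = MDS(dissimilarity='precomputed', n_components=2, random_state=40)
Xr = mds.fit_transform(D)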
for i in range(len(city)):
plt.text(Xr[i, 0], Xr[i, 1], city[i])
plt.axis('equal')
(-1894.1017744377398,
2914.3652937179477,
-1712.9885463201906,
2145.4522453884565)
We must choose 𝐾 * ∈ {1, . . . , 𝐾}, the number of required components. Plot the values of the
stress function obtained using 𝑘 ≤ 𝑁 − 1 components. In general, start with 1, . . . , 𝐾 ≤ 4.
Choose 𝐾 * where you can clearly distinguish an elbow in the stress curve.
Thus, in the plot below, we choose to retain the information accounted for by the first two
components, since this is where the elbow is in the stress curve.
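The loop that fills k_range and stress is not shown in this excerpt; a sketch consistent with the plot below (refitting the MDS for a small range of dimensions and recording the final value of the stress function):

from sklearn.manifold import MDS

k_range = range(1, min(5, D.shape[0] - 1))
stress = [MDS(dissimilarity='precomputed', n_components=k, random_state=42).fit(D).stress_
          for k in k_range]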
print(stress)
plt.plot(k_range, stress)
plt.xlabel("k")
plt.ylabel("stress")
Sources:
• Scikit-learn documentation
• Wikipedia
Nonlinear dimensionality reduction or manifold learning cover unsupervised methods that
attempt to identify low-dimensional manifolds within the original 𝑃 -dimensional space that
represent high data density. Then those methods provide a mapping from the high-dimensional
space to the low-dimensional embedding.
Isomap
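The dataset generation and figure creation are not shown in this excerpt; a sketch using scikit-learn's S-curve dataset (the sample size and figure size are illustrative assumptions):

import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: enables the 3D projection on older matplotlib
from sklearn import manifold, datasets

X, color = datasets.make_s_curve(1000, random_state=42)  # 3D "S shape" manifold
fig = plt.figure(figsize=(10, 5))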
ax = fig.add_subplot(121, projection='3d')
ax.scatter(X[:, 0], X[:, 1], X[:, 2], c=color, cmap=plt.cm.Spectral)
ax.view_init(4, -72)
plt.title('2D "S shape" manifold in 3D')
Y = manifold.Isomap(n_neighbors=10, n_components=2).fit_transform(X)
ax = fig.add_subplot(122)
plt.scatter(Y[:, 0], Y[:, 1], c=color, cmap=plt.cm.Spectral)
plt.title("Isomap")
plt.xlabel("First component")
plt.ylabel("Second component")
plt.axis('tight')
5.1.6 Exercises
PCA
Write a basic PCA class (e.g. BasicPCA, as used below) with two methods:
• fit(X) that estimates the data mean, principal components directions V and the explained
variance of each component.
• transform(X) that projects the data onto the principal components.
Check that your BasicPCA gave similar results, compared to the results from sklearn.
MDS
5.2 Clustering
Wikipedia: Cluster analysis or clustering is the task of grouping a set of objects in such a way
that objects in the same group (called a cluster) are more similar (in some sense or another)
to each other than to those in other groups (clusters). Clustering is one of the main tasks of
exploratory data mining, and a common technique for statistical data analysis, used in many
fields, including machine learning, pattern recognition, image analysis, information retrieval,
and bioinformatics.
Sources: http://scikit-learn.org/stable/modules/clustering.html