Scipy Lecture Notes, 2017 Edition
Edited by Gaël Varoquaux
4 Matplotlib: plotting 97
4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 97
4.2 Simple plot . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 98
4.3 Figures, Subplots, Axes and Ticks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 106
4.4 Other Types of Plots: examples and exercises . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 108
4.5 Beyond this tutorial . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 115
4.6 Quick references . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116
4.7 Full code examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 118
5.4 Interpolation: scipy.interpolate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.5 Optimization and fit: scipy.optimize . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 188
5.6 Statistics and random numbers: scipy.stats . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 194
5.7 Numerical integration: scipy.integrate . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 196
5.8 Fast Fourier transforms: scipy.fftpack . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 198
5.9 Signal processing: scipy.signal . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 201
5.10 Image manipulation: scipy.ndimage . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 203
5.11 Summary exercises on scientific computing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 208
5.12 Full code examples for the scipy chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 221
13.5 Practical guide to optimization with scipy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 432
13.6 Special case: non-linear least-squares . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 435
13.7 Optimization with constraints . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 437
13.8 Full code examples . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 438
13.9 Examples for the mathematical optimization chapter . . . . . . . . . . . . . . . . . . . . . . . . . . 438
20.1 Introduction: problem settings . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 573
20.2 Basic principles of machine learning with scikit-learn . . . . . . . . . . . . . . . . . . . . . . . . . . 577
20.3 Supervised Learning: Classification of Handwritten Digits . . . . . . . . . . . . . . . . . . . . . . . 582
20.4 Supervised Learning: Regression of Housing Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . 586
20.5 Measuring prediction performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 589
20.6 Unsupervised Learning: Dimensionality Reduction and Visualization . . . . . . . . . . . . . . . . 594
20.7 The eigenfaces example: chaining PCA and SVMs . . . . . . . . . . . . . . . . . . . . . . . . . . . . 597
20.8 Parameter selection, Validation, and Testing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 603
20.9 Examples for the scikit-learn chapter . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 610
Index 652
Part I
This part of the Scipy lecture notes is a self-contained introduction to everything that is needed to use Python
for science, from the language itself, to numerical computing or plotting.
CHAPTER 1
Python scientific computing ecosystem
Batteries included Rich collection of already existing bricks of classic numerical methods, plotting or
data processing tools. We don't want to re-program the plotting of a curve, a Fourier transform or a fitting
algorithm. Don't reinvent the wheel!
Easy to learn Most scientists are not paid to be programmers, nor have they been trained as such. They
need to be able to draw a curve, smooth a signal, do a Fourier transform in a few minutes.
Easy communication To keep code alive within a lab or a company it should be as readable as a book
by collaborators, students, or maybe customers. Python syntax is simple, avoiding strange symbols or
lengthy routine specifications that would divert the reader from mathematical or scientific understand-
ing of the code.
Efficient code Python numerical modules are computationally efficient. But needless to say, very fast
code becomes useless if too much time is spent writing it. Python aims for quick development times
and quick execution times.
Universal Python is a language used for many different problems. Learning Python avoids having to learn
new software for each new problem.
Compiled languages: C, C++, Fortran. . .
Pros
Very fast. For heavy computations, it's difficult to outperform these languages.
Cons
Painful usage: no interactivity during development, mandatory compilation steps, verbose
syntax, manual memory management. These are difficult languages for non-programmers.
Matlab scripting language
Pros
Very rich collection of libraries with numerous algorithms, for many different domains.
Fast execution because these libraries are often written in a compiled language.
Pleasant development environment: comprehensive and well-organized help, integrated editor, etc.
Commercial support is available.
Cons
Base language is quite poor and can become restrictive for advanced users.
Not free.
Julia
Pros
Fast code, yet interactive and simple.
Easily connects to Python or C.
Cons
Ecosystem limited to numerical computing.
Still young.
Other scripting languages: Scilab, Octave, R, IDL, etc.
Pros
Open-source, free, or at least cheaper than Matlab.
Some features can be very advanced (statistics in R, etc.)
Cons
Fewer available algorithms than in Matlab, and the language is not more advanced.
Some software is dedicated to one domain, e.g. Gnuplot to draw curves. These programs are very
powerful, but they are restricted to a single type of usage, such as plotting.
Python
Pros
Very rich scientific computing libraries
Well thought out language, allowing one to write very readable and well-structured code: we
"code what we think".
Many libraries beyond scientific computing (web server, serial port access, etc.)
Free and open-source software, widely spread, with a vibrant community.
A variety of powerful environments to work in, such as IPython, Spyder, Jupyter note-
books, Pycharm
Cons
Not all of the algorithms found in more specialized software or toolboxes are available.
Unlike Matlab, or R, Python does not come with a pre-bundled set of modules for scientific computing. Below
are the basic building blocks that can be combined to obtain a scientific computing environment:
Numpy: numerical computing with powerful numerical arrays objects, and routines to manipulate
them. http://www.numpy.org/
See also:
chapter on numpy
Scipy : high-level numerical routines. Optimization, regression, interpolation, etc http://www.scipy.org/
See also:
chapter on scipy
Matplotlib : 2-D visualization, publication-ready plots http://matplotlib.org/
See also:
chapter on matplotlib
Domain-specific packages,
Mayavi for 3-D visualization
pandas, statsmodels, seaborn for statistics
sympy for symbolic computing
scikit-image for image processing
scikit-learn for machine learning
and many more packages not documented in the Scipy lectures.
See also:
chapters on advanced topics
chapters on packages and applications
Python comes in many flavors, and there are many ways to install it. However, we recommend installing a
scientific-computing distribution, which comes with optimized versions of the scientific modules.
Under Linux
If you have a recent distribution, most of the tools are probably packaged, and it is recommended to use your
package manager.
Other systems
There are several fully-featured Scientific Python distributions:
Anaconda
EPD
WinPython
Python 3 or Python 2?
In 2008, Python 3 was released. It is a major evolution of the language that made a few changes. Some old
scientific code does not yet run under Python 3. However, this is infrequent and Python 3 comes with many
benefits. We advise that you install Python 3.
Interactive work to test and understand algorithms: In this section, we describe a workflow combining inter-
active work and consolidation.
Python is a general-purpose language. As such, there is not one blessed environment to work in, and not only
one way of using it. Although this makes it harder for beginners to find their way, it makes it possible for Python
to be used for programs, in web servers, or embedded devices.
We recommend working interactively with the IPython console, or its offspring, the Jupyter notebook. They
are handy to explore and understand algorithms.
Start ipython:
In [2]: print?
Type: builtin_function_or_method
Base Class: <type 'builtin_function_or_method'>
String Form: <built-in function print>
Namespace: Python builtin
Docstring:
print(value, ..., sep=' ', end='\n', file=sys.stdout)
See also:
IPython user manual: http://ipython.org/ipython-doc/dev/index.html
Jupyter Notebook QuickStart: http://jupyter.readthedocs.io/en/latest/content-quickstart.html
As you move forward, it will be important to not only work interactively, but also to create and reuse Python
files. For this, a powerful code editor will get you far. Here are several good easy-to-use editors:
Spyder: integrates an IPython console, a debugger, a profiler. . .
PyCharm: integrates an IPython console, notebooks, a debugger. . . (freely available, but commercial)
Atom
Some of these are shipped by the various scientific Python distributions, and you can find them in the menus.
As an exercise, create a file my_file.py in a code editor, and add the following lines:
s = 'Hello world'
print(s)
Now, you can run it in IPython console or a notebook and explore the resulting variables:
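A sketch of such a session, using the %run magic (the file just prints 'Hello world'):
In [1]: %run my_file.py
Hello world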
In [2]: s
Out[2]: 'Hello world'
In [3]: %whos
Variable Type Data/Info
----------------------------
s str Hello world
While it is tempting to work only with scripts, that is, a file full of instructions following each other, do plan
to progressively evolve the script into a set of functions:
A script is not reusable, functions are.
Thinking in terms of functions helps breaking the problem in small blocks.
The user manuals contain a wealth of information. Here we give a quick introduction to four useful features:
history, tab completion, magic functions, and aliases.
Command history Like a UNIX shell, the IPython console supports command history. Type up and down to
navigate previously typed commands:
In [1]: x = 10
In [2]: <UP>
In [2]: x = 10
Tab completion Tab completion is a convenient way to explore the structure of any object you're dealing
with. Simply type object_name.<TAB> to view the object's attributes. Besides Python objects and keywords,
tab completion also works on file and directory names.
In [1]: x = 10
In [2]: x.<TAB>
x.bit_length x.denominator x.imag x.real
x.conjugate x.from_bytes x.numerator x.to_bytes
Magic functions The console and the notebooks support so-called magic functions by prefixing a command
with the % character. For example, the run and whos functions from the previous section are magic functions.
Note that the setting automagic, which is enabled by default, allows you to omit the preceding % sign. Thus,
you can just type the magic function and it will work.
Other useful magic functions are:
%cd to change the current directory.
In [1]: cd /tmp
/tmp
%cpaste allows you to paste code, especially code from websites which has been prefixed with the standard
Python prompt (e.g. >>>) or with an ipython prompt (e.g. In [3]):
In [2]: %cpaste
Pasting code; enter '--' alone on the line to stop or use Ctrl-D.
:>>> for i in range(3):
:... print(i)
:--
0
1
2
%timeit allows you to time the execution of short snippets using the timeit module from the standard
library:
In [3]: %timeit x = 10
10000000 loops, best of 3: 39 ns per loop
See also:
Chapter on optimizing code
%debug allows you to enter post-mortem debugging. That is to say, if the code you try to execute raises
an exception, using %debug will enter the debugger at the point where the exception was thrown.
In [4]: x === 10
File "<ipython-input-6-12fd421b5f28>", line 1
x === 10
^
SyntaxError: invalid syntax
In [5]: %debug
> /.../IPython/core/compilerop.py (87)ast_parse()
86 and are passed to the built-in compile function."""
---> 87 return compile(source, filename, symbol, self.flags | PyCF_ONLY_AST, 1)
88
ipdb>locals()
{'source': u'x === 10\n', 'symbol': 'exec', 'self':
<IPython.core.compilerop.CachingCompiler instance at 0x2ad8ef0>,
'filename': '<ipython-input-6-12fd421b5f28>'}
See also:
Chapter on debugging
Aliases Furthermore, IPython ships with various aliases which emulate common UNIX command-line tools
such as ls to list files, cp to copy files and rm to remove files (a full list of aliases is shown when typing alias).
Getting help
We introduce here the Python language. Only the bare minimum necessary for getting started with Numpy
and Scipy is addressed here. To learn more about the language, consider going through the excellent tutorial
https://docs.python.org/tutorial. Dedicated books are also available, such as http://www.diveintopython.net/.
Tip: Python is a programming language, as are C, Fortran, BASIC, PHP, etc. Some specific features of Python
are as follows:
an interpreted (as opposed to compiled) language. Contrary to e.g. C or Fortran, one does not compile
Python code before executing it. In addition, Python can be used interactively: many Python interpreters
are available, from which commands and scripts can be executed.
a free software released under an open-source license: Python can be used and distributed free of
charge, even for building commercial software.
multi-platform: Python is available for all major operating systems, Windows, Linux/Unix, MacOS X,
most likely your mobile phone OS, etc.
a very readable language with clear non-verbose syntax
a language for which a large variety of high-quality packages are available for various applications, from
web frameworks to scientific computing.
a language very easy to interface with other languages, in particular C and C++.
Some other features of the language are illustrated just below. For example, Python is an object-oriented
language, with dynamic typing (the same variable can contain objects of different types during the
course of a program).
See https://www.python.org/about/ for more information about distinguishing features of Python.
Tip: If you don't have IPython installed on your computer, other Python shells are available, such as the plain
Python shell started by typing python in a terminal, or the Idle interpreter. However, we advise using the
IPython shell because of its enhanced features, especially for interactive scientific computing.
Tip: The message Hello, world! is then displayed. You just executed your first Python instruction, congratu-
lations!
>>> a = 3
>>> b = 2*a
>>> type(b)
<type 'int'>
>>> print(b)
6
>>> a*b
18
>>> b = 'hello'
>>> type(b)
<type 'str'>
>>> b + b
'hellohello'
>>> 2*b
'hellohello'
Tip: Two variables a and b have been defined above. Note that one does not declare the type of a variable
before assigning its value. In C, conversely, one should write:
int a = 3;
In addition, the type of a variable may change, in the sense that at one point in time it can be equal to a value
of a certain type, and at a second point in time, it can be equal to a value of a different type. b was first equal to
an integer, but it became equal to a string when it was assigned the value 'hello'. Operations on integers (b=2*a)
are coded natively in Python, and so are some operations on strings such as additions and multiplications,
which amount respectively to concatenation and repetition.
Integer
>>> 1 + 1
2
>>> a = 4
>>> type(a)
<type 'int'>
Floats
>>> c = 2.1
>>> type(c)
<type 'float'>
Complex
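The complex-number example was presumably along these lines (a sketch):
>>> a = 1.5 + 0.5j
>>> a.real
1.5
>>> a.imag
0.5
>>> type(1. + 0j)
<type 'complex'>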
Booleans
>>> 3 > 4
False
>>> test = (3 > 4)
>>> test
False
>>> type(test)
<type 'bool'>
Tip: A Python shell can therefore replace your pocket calculator, with the basic arithmetic operations +, -, *,
/, % (modulo) natively implemented
>>> 7 * 3.
21.0
>>> 2**10
1024
>>> 8 % 3
2
>>> float(1)
1.0
In Python 3:
>>> 3 / 2
1.5
>>> a = 3
>>> b = 2
>>> a / b # In Python 2
1
>>> a / float(b)
1.5
2.2.2 Containers
Tip: Python provides many efficient types of containers, in which collections of objects can be stored.
Lists
Tip: A list is an ordered collection of objects that may have different types. For example:
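The list used in the following examples was presumably created as:
>>> colors = ['red', 'blue', 'green', 'black', 'white']
>>> type(colors)
<type 'list'>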
>>> colors[2]
'green'
>>> colors[-1]
'white'
>>> colors[-2]
'black'
>>> colors
['red', 'blue', 'green', 'black', 'white']
>>> colors[2:4]
['green', 'black']
Warning: Note that colors[start:stop] contains the elements with indices i such that start <= i <
stop (i ranging from start to stop-1). Therefore, colors[start:stop] has (stop - start) elements.
>>> colors
['red', 'blue', 'green', 'black', 'white']
>>> colors[3:]
['black', 'white']
>>> colors[:3]
['red', 'blue', 'green']
>>> colors[::2]
['red', 'green', 'white']
Tip: For collections of numerical data that all have the same type, it is often more efficient to use the array
type provided by the numpy module. A NumPy array is a chunk of memory containing fixed-sized items. With
NumPy arrays, operations on elements can be faster because elements are regularly spaced in memory and
more operations are performed through specialized C functions instead of Python loops.
Tip: Python offers a large panel of functions to modify lists, or query them. Here are a few examples; for more
details, see https://docs.python.org/tutorial/datastructures.html#more-on-lists
Reverse:
Tip: Sort:
The notation rcolors.method() (e.g. rcolors.append(3) and colors.pop()) is our first example of
object-oriented programming (OOP). Being a list, the object rcolors owns methods, which are called using
the dot notation. No further knowledge of OOP than understanding the dot notation is necessary for going
through this tutorial.
Discovering methods:
Strings
Tip: Strings are collections like lists. Hence they can be indexed and sliced, using the same syntax and rules.
Indexing:
>>> a = "hello"
>>> a[0]
'h'
>>> a[1]
'e'
>>> a[-1]
'o'
Tip: (Remember that negative indices correspond to counting from the right end.)
Slicing:
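A sketch of string slicing, using the same [start:stop:step] syntax as for lists:
>>> a = "hello, world!"
>>> a[3:6]   # 3rd to 6th (excluded) elements: elements 3, 4, 5
'lo,'
>>> a[2:10:2]   # Syntax: a[start:stop:step]
'lo o'
>>> a[::3]   # every third character, from beginning to end
'hl r!'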
Tip: Accents and special characters can also be handled in Unicode strings (see
https://docs.python.org/tutorial/introduction.html#unicode-strings).
A string is an immutable object and it is not possible to modify its contents. One may however create new
strings from the original one.
Tip: Strings have many useful methods, such as a.replace as seen above. Remember the a. object-oriented
notation and use tab completion or help(str) to search for new methods.
See also:
Python offers advanced possibilities for manipulating strings, looking for patterns or formatting. The
interested reader is referred to https://docs.python.org/library/stdtypes.html#string-methods and
https://docs.python.org/library/string.html#new-string-formatting
String formatting:
>>> 'An integer: %i; a float: %f; another string: %s' % (1, 0.1, 'string')
'An integer: 1; a float: 0.100000; another string: string'
>>> i = 102
>>> filename = 'processing_of_dataset_%d.txt' % i
>>> filename
'processing_of_dataset_102.txt'
Dictionaries
Tip: A dictionary is basically an efficient table that maps keys to values. It is an unordered container
Tip: It can be used to conveniently store and retrieve values associated with a name (a string for a date, a
name, etc.). See https://docs.python.org/tutorial/datastructures.html#dictionaries for more information.
A dictionary can have keys (resp. values) with different types:
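A minimal sketch of a dictionary in use (the names and numbers are purely illustrative):
>>> tel = {'emmanuelle': 5752, 'sebastian': 5578}
>>> tel['francis'] = 5915       # add an entry
>>> tel['sebastian']
5578
>>> tel.keys()
['sebastian', 'francis', 'emmanuelle']
>>> 'francis' in tel
True
>>> d = {'a': 1, 'b': 2, 3: 'hello'}   # keys and values of mixed types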
Tuples
Tuples are basically immutable lists. The elements of a tuple are written between parentheses, or just separated
by commas:
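For example (a sketch):
>>> t = 12345, 54321, 'hello!'
>>> t[0]
12345
>>> t
(12345, 54321, 'hello!')
>>> u = (0, 2)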
Things to note:
a single object can have several names bound to it:
In [1]: a = [1, 2, 3]
In [2]: b = a
In [3]: a
Out[3]: [1, 2, 3]
In [4]: b
Out[4]: [1, 2, 3]
In [5]: a is b
Out[5]: True
In [1]: a = [1, 2, 3]
In [3]: a
Out[3]: [1, 2, 3]
In [4]: a = ['a', 'b', 'c'] # Creates another object.
In [5]: a
Out[5]: ['a', 'b', 'c']
In [6]: id(a)
Out[6]: 138641676
In [7]: a[:] = [1, 2, 3] # Modifies object in place.
In [8]: a
Out[8]: [1, 2, 3]
In [9]: id(a)
Out[9]: 138641676 # Same as in Out[6], yours will differ...
2.3.1 if/elif/else
>>> if 2**2 == 4:
... print('Obvious!')
...
Obvious!
Tip: Type the following lines in your Python interpreter, and be careful to respect the indentation depth. The
Ipython shell automatically increases the indentation depth after a colon : sign; to decrease the indentation
depth, go four spaces to the left with the Backspace key. Press the Enter key twice to leave the logical block.
>>> a = 10
>>> if a == 1:
... print(1)
... elif a == 2:
... print(2)
... else:
... print('A lot')
A lot
Indentation is compulsory in scripts as well. As an exercise, re-type the previous lines with the same indenta-
tion in a script condition.py, and execute the script with run condition.py in Ipython.
2.3.2 for/range
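Iterating with an index, for instance (a sketch):
>>> for i in range(4):
...     print(i)
0
1
2
3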
2.3.3 while/break/continue
>>> z = 1 + 1j
>>> while abs(z) < 100:
... z = z**2 + 1
>>> z
(-134+352j)
>>> z = 1 + 1j
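The break statement exits the enclosing loop; a sketch consistent with the snippet above:
>>> while abs(z) < 100:
...     if z.imag == 0:
...         break
...     z = z**2 + 1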
>>> a = [1, 0, 2, 4]
>>> for element in a:
... if element == 0:
... continue
... print(1. / element)
1.0
0.5
0.25
if <OBJECT>
Evaluates to False: any number equal to zero (0, 0.0, 0+0j), an empty container (list, tuple, set, dictionary, . . . ),
False, None. Evaluates to True: everything else.
>>> 1 == 1.
True
>>> 1 is 1.
False
>>> a = 1
>>> b = 1
>>> a is b
True
>>> b = [1, 2, 3]
>>> 2 in b
True
>>> 5 in b
False
You can iterate over any sequence (string, list, keys in a dictionary, lines in a file, . . . ):
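For example (a sketch):
>>> vowels = 'aeiouy'
>>> for i in 'powerful':
...     if i in vowels:
...         print(i)
o
e
u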
Tip: Few languages (in particular, languages for scientific computing) allow looping over anything but
integers/indices. With Python it is possible to loop exactly over the objects of interest without bothering with
indices you often don't care about. This feature can often be used to make code more readable.
Warning: Not safe to modify the sequence you are iterating over.
A common task is to iterate over a sequence while keeping track of the item number.
One could use a while loop with a counter as above, or a for loop over indices; Python's builtin enumerate
function is made for this (see the sketch below):
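A sketch of both approaches:
>>> words = ('cool', 'powerful', 'readable')
>>> for i in range(0, len(words)):
...     print((i, words[i]))
(0, 'cool')
(1, 'powerful')
(2, 'readable')
>>> for index, item in enumerate(words):
...     print((index, item))
(0, 'cool')
(1, 'powerful')
(2, 'readable')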
Use items:
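Looping over the key/value pairs of a dictionary with items(), for example (a sketch):
>>> d = {'a': 1, 'b': 1.2, 'c': 1j}
>>> for key, val in sorted(d.items()):
...     print('Key: %s has value: %s' % (key, val))
Key: a has value: 1
Key: b has value: 1.2
Key: c has value: 1j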
Note: The ordering of a dictionary is random, thus we use sorted(), which will sort on the keys.
Exercise
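The function called below was presumably defined along these lines (a sketch):
In [56]: def test():
   ....:     print('in test function')
   ....: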
In [57]: test()
in test function
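Similarly, the function producing the output below presumably returned a value (a sketch, approximating pi by 3.14):
In [7]: def disk_area(radius):
   ...:     return 3.14 * radius * radius
   ...: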
In [8]: disk_area(1.5)
Out[8]: 7.0649999999999995
2.4.3 Parameters
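Mandatory (positional) parameters: the double_it function used below was presumably of this form (a sketch):
In [81]: def double_it(x):
   ....:     return x * 2
   ....: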
In [82]: double_it(3)
Out[82]: 6
In [83]: double_it()
---------------------------------------------------------------------------
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: double_it() takes exactly 1 argument (0 given)
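Optional parameters (keyword or named arguments): a re-definition with a default value, consistent with the calls below (a sketch):
In [84]: def double_it(x=2):
   ....:     return x * 2
   ....: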
In [85]: double_it()
Out[85]: 4
In [86]: double_it(3)
Out[86]: 6
Warning: Default values are evaluated when the function is defined, not when it is called. This can be
problematic when using mutable types (e.g. dictionary or list) and modifying them in the function body,
since the modifications will be persistent across invocations of the function.
Using an immutable type in a keyword argument:
In [124]: bigx = 10
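The definition producing the output below was presumably of this form (a sketch); note that re-assigning bigx afterwards has no effect:
In [125]: def double_it(x=bigx):
   .....:     return x * 2
   .....:
In [126]: bigx = 1e9   # no effect: the default was evaluated at definition time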
In [128]: double_it()
Out[128]: 20
Using an mutable type in a keyword argument (and modifying it inside the function body):
In [2]: def add_to_dict(args={'a': 1, 'b': 2}):
...: for i in args.keys():
...: args[i] += 1
...:     print(args)
...:
In [3]: add_to_dict
Out[3]: <function __main__.add_to_dict>
In [4]: add_to_dict()
{'a': 2, 'b': 3}
In [5]: add_to_dict()
{'a': 3, 'b': 4}
In [6]: add_to_dict()
{'a': 4, 'b': 5}
In [101]: rhyme = 'one fish, two fish, red fish, blue fish'.split()
In [102]: rhyme
Out[102]: ['one', 'fish,', 'two', 'fish,', 'red', 'fish,', 'blue', 'fish']
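The slicer function called below presumably implemented basic slicing with keyword arguments (a sketch):
def slicer(seq, start=None, stop=None, step=None):
    """Implement basic python slicing."""
    return seq[start:stop:step]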
In [103]: slicer(rhyme)
Out[103]: ['one', 'fish,', 'two', 'fish,', 'red', 'fish,', 'blue', 'fish']
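Keyword arguments can be given in any order (a sketch):
In [104]: slicer(rhyme, step=2)
Out[104]: ['one', 'two', 'red', 'blue']
In [105]: slicer(rhyme, start=1, stop=4, step=2)
Out[105]: ['fish,', 'fish,']
In [106]: slicer(rhyme, step=2, start=1, stop=4)
Out[106]: ['fish,', 'fish,']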
but it is good practice to use the same ordering as the function's definition.
Keyword arguments are a very convenient feature for defining functions with a variable number of arguments,
especially when default values are to be used in most calls to the function.
Tip: Can you modify the value of a variable inside a function? Most languages (C, Java, . . . ) distinguish
passing by value and passing by reference. In Python, such a distinction is somewhat artificial, and it is a
bit subtle whether your variables are going to be modified or not. Fortunately, there exist clear rules.
Parameters to functions are references to objects, which are passed by value. When you pass a variable to a
function, python passes the reference to the object to which the variable refers (the value). Not the variable
itself.
If the value passed in a function is immutable, the function does not modify the caller's variable. If the value
is mutable, the function may modify the caller's variable in-place:
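A sketch illustrating this (the function and variable names are illustrative):
>>> def try_to_modify(x, y, z):
...     x = 23         # rebinding a local name: invisible to the caller
...     y.append(42)   # in-place modification of a mutable object
...     z = [99]       # reference to a new object
>>> a = 77             # immutable
>>> b = [99]           # mutable
>>> c = [28]
>>> try_to_modify(a, b, c)
>>> a
77
>>> b
[99, 42]
>>> c
[28]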
Variables declared outside the function can be referenced within the function:
In [114]: x = 5
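For instance, a function reading the global x defined above (a sketch):
In [115]: def addx(y):
   .....:     return x + y
   .....: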
In [116]: addx(10)
Out[116]: 15
But these global variables cannot be modified within the function, unless declared global in the function.
This doesn't work:
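A sketch of the definition that produces the behavior below (the assignment creates a local x):
In [117]: def setx(y):
   .....:     x = y
   .....:     print('x is %d' % x)
   .....: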
In [118]: setx(10)
x is 10
In [120]: x
Out[120]: 5
This works:
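A sketch with the global declaration:
In [121]: def setx(y):
   .....:     global x
   .....:     x = y
   .....:     print('x is %d' % x)
   .....: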
In [122]: setx(10)
x is 10
In [123]: x
Out[123]: 10
2.4.7 Docstrings
Documentation about what the function does and its parameters. General convention:
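A sketch of such a docstring convention, consistent with the help output shown below:
In [67]: def funcname(params):
   ....:     """Concise one-line sentence describing the function.
   ....:
   ....:     Extended summary which can contain multiple paragraphs.
   ....:     """
   ....:     # function body
   ....:     pass
   ....: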
In [68]: funcname?
Type: function
Base Class: type 'function'>
String Form: <function funcname at 0xeaa0f0>
Namespace: Interactive
File: <ipython console>
Definition: funcname(params)
Docstring:
Concise one-line sentence describing the function.
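Variable numbers of parameters can be captured with *args and **kwargs; the variable_args function aliased below was presumably of this form (a sketch):
In [35]: def variable_args(*args, **kwargs):
   ....:     print('args is', args)
   ....:     print('kwargs is', kwargs)
   ....:
In [36]: variable_args('one', 'two', x=1, y=2, z=3)
args is ('one', 'two')
kwargs is {'x': 1, 'y': 2, 'z': 3}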
In [38]: va = variable_args
2.4.9 Methods
Methods are functions attached to objects. You've seen these in our examples on lists, dictionaries, strings,
etc.
2.4.10 Exercises
Write a function that displays the n first terms of the Fibonacci sequence, defined by:
U_0 = 0
U_1 = 1
U_(n+2) = U_(n+1) + U_n
Exercise: Quicksort
function quicksort(array)
var list less, greater
if length(array) < 2
return array
select and remove a pivot value pivot from array
for each x in array
if x < pivot + 1 then append x to less
else append x to greater
return concatenate(quicksort(less), pivot, quicksort(greater))
For now, we have typed all instructions in the interpreter. For longer sets of instructions we need to change
track and write the code in text files (using a text editor), that we will call either scripts or modules. Use your fa-
vorite text editor (provided it offers syntax highlighting for Python), or the editor that comes with the Scientific
Python Suite you may be using.
2.5.1 Scripts
Tip: Let us first write a script, that is a file with a sequence of instructions that are executed each time the script
is called. Instructions may be e.g. copied-and-pasted from the interpreter (but take care to respect indentation
rules!).
The extension for Python files is .py. Write or copy-and-paste the following lines in a file called test.py
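The content of test.py is not reproduced here; given the output shown further below, it was presumably along these lines (a sketch):
message = "Hello how are you?"
for word in message.split():
    print(word)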
Tip: Let us now execute the script interactively, that is inside the Ipython interpreter. This is maybe the most
common use of scripts in scientific computing.
Note: in Ipython, the syntax to execute a script is %run script.py. For example,
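A sketch of the session:
In [1]: %run test.py
Hello
how
are
you?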
In [2]: message
Out[2]: 'Hello how are you?'
The script has been executed. Moreover the variables defined in the script (such as message) are now available
inside the interpreter's namespace.
Tip: Other interpreters also offer the possibility to execute scripts (e.g., execfile in the plain Python inter-
preter, etc.).
It is also possible to execute this script as a standalone program, by executing the script inside a shell
terminal (Linux/Mac console or cmd Windows console). For example, if we are in the same directory as the
test.py file, we can execute this in a console:
$ python test.py
Hello
how
are
you?
import sys
print(sys.argv)
Warning: Don't implement option parsing yourself. Use modules such as optparse, argparse or docopt.
In [1]: import os
In [2]: os
Out[2]: <module 'os' from '/usr/lib/python2.6/os.pyc'>
In [3]: os.listdir('.')
Out[3]:
['conf.py',
'basic_types.rst',
'control_flow.rst',
'functions.rst',
'python_language.rst',
'reusing.rst',
'file_io.rst',
'exceptions.rst',
'workflow.rst',
'index.rst']
And also:
Importing shorthands:
Warning:
from os import *
Such "star imports" make it harder to tell where names come from; use them with caution.
Tip: Modules are thus a good way to organize code in a hierarchical way. Actually, all the scientific computing
tools we are going to use are modules:
Tip: If we want to write larger and better organized programs (compared to simple scripts), where some
objects are defined (variables, functions, classes) and that we want to reuse several times, we have to create
our own modules. For instance, let us create a module demo contained in the file demo.py:
def print_b():
    "Prints b."
    print('b')

def print_a():
    "Prints a."
    print('a')

c = 2
d = 2
Tip: In this file, we defined two functions print_a and print_b. Suppose we want to call the print_a
function from the interpreter. We could execute the file as a script, but since we just want to have access to the
function print_a, we are rather going to import it as a module. The syntax is as follows.
In [2]: demo.print_a()
a
In [3]: demo.print_b()
b
Importing the module gives access to its objects, using the module.object syntax. Don't forget to put the
module's name before the object's name, otherwise Python won't recognize the instruction.
Introspection
In [4]: demo?
Type: module
Base Class: <type 'module'>
String Form: <module 'demo' from 'demo.py'>
Namespace: Interactive
File: /home/varoquau/Projects/Python_talks/scipy_2009_tutorial/source/demo.py
Docstring:
A demo module.
In [5]: who
demo
In [6]: whos
Variable Type Data/Info
------------------------------
demo module <module 'demo' from 'demo.py'>
In [7]: dir(demo)
Out[7]:
['__builtins__',
'__doc__',
'__file__',
'__name__',
'__package__',
'c',
'd',
'print_a',
'print_b']
In [8]: demo.
demo.c demo.print_a demo.py
demo.d demo.print_b demo.pyc
In [10]: whos
Variable Type Data/Info
--------------------------------
demo module <module 'demo' from 'demo.py'>
print_a function <function print_a at 0xb7421534>
print_b function <function print_b at 0xb74214c4>
In [11]: print_a()
a
In Python 3, reload is not a builtin; you have to import the importlib module first and then do:
In [10]: importlib.reload(demo)
Tip: Sometimes we want code to be executed when a module is run directly, but not when it is imported by
another module. if __name__ == '__main__' allows us to check whether the module is being run directly.
File demo2.py:
def print_b():
    "Prints b."
    print('b')

def print_a():
    "Prints a."
    print('a')
if __name__ == '__main__':
# print_a() is only executed when the module is run directly.
print_a()
Importing it:
Running it:
When the import mymodule statement is executed, the module mymodule is searched in a given list of direc-
tories. This list includes a list of installation-dependent default path (e.g., /usr/lib/python) as well as the
list of directories specified by the environment variable PYTHONPATH.
The list of directories searched by Python is given by the sys.path variable
In [2]: sys.path
Out[2]:
['',
'/home/varoquau/.local/bin',
'/usr/lib/python2.7',
'/home/varoquau/.local/lib/python2.7/site-packages',
'/usr/lib/python2.7/dist-packages',
'/usr/local/lib/python2.7/dist-packages',
...]
Tip: On Linux/Unix, add the following line to a file read by the shell at startup (e.g. /etc/profile, .profile)
export PYTHONPATH=$PYTHONPATH:/home/emma/user_defined_modules
Tip:
import sys
new_path = '/home/emma/user_defined_modules'
if new_path not in sys.path:
sys.path.append(new_path)
This method is not very robust, however, because it makes the code less portable (user-dependent path)
and because you have to add the directory to your sys.path each time you want to import from a module
in this directory.
See also:
See https://docs.python.org/tutorial/modules.html for more information about modules.
2.5.6 Packages
A directory that contains many modules is called a package. A package is a module with submodules (which
can have submodules themselves, etc.). A special file called __init__.py (which may be empty) tells Python
that the directory is a Python package, from which modules can be imported.
$ ls
cluster/ io/ README.txt@ stsci/
__config__.py@ LATEST.txt@ setup.py@ __svn_version__.py@
__config__.pyc lib/ setup.pyc __svn_version__.pyc
constants/ linalg/ setupscons.py@ THANKS.txt@
fftpack/ linsolve/ setupscons.pyc TOCHANGE.txt@
__init__.py@ maxentropy/ signal/ version.py@
__init__.pyc misc/ sparse/ version.pyc
INSTALL.txt@ ndimage/ spatial/ weave/
integrate/ odr/ special/
interpolate/ optimize/ stats/
$ cd ndimage
$ ls
doccer.py@ fourier.pyc interpolation.py@ morphology.pyc setup.pyc
doccer.pyc info.py@ interpolation.pyc _nd_image.so
setupscons.py@
filters.py@ info.pyc measurements.py@ _ni_support.py@
setupscons.pyc
filters.pyc __init__.py@ measurements.pyc _ni_support.pyc tests/
fourier.py@ __init__.pyc morphology.py@ setup.py@
From Ipython:
In [2]: scipy.__file__
Out[2]: '/usr/lib/python2.6/dist-packages/scipy/__init__.pyc'
In [4]: scipy.version.version
Out[4]: '0.7.0'
In [17]: morphology.binary_dilation?
Type: function
Base Class: <type 'function'>
String Form: <function binary_dilation at 0x9bedd84>
Namespace: Interactive
File: /usr/lib/python2.6/dist-packages/scipy/ndimage/morphology.py
Definition: morphology.binary_dilation(input, structure=None,
iterations=1, mask=None, output=None, border_value=0, origin=0,
brute_force=False)
Docstring:
Multi-dimensional binary dilation with the given structure.
Tip: Indenting is compulsory in Python! Every command block following a colon bears an additional
indentation level with respect to the previous line with a colon. One must therefore indent after def
f(): or while:. At the end of such logical blocks, one decreases the indentation depth (and re-increases
it if a new block is entered, etc.)
Strict respect of indentation is the price to pay for getting rid of { or ; characters that delineate logical
blocks in other languages. Improper indentation leads to errors such as
------------------------------------------------------------
IndentationError: unexpected indent (test.py, line 2)
All this indentation business can be a bit confusing in the beginning. However, with the clear indenta-
tion, and in the absence of extra characters, the resulting code is very nice to read compared to other
languages.
Indentation depth: Inside your text editor, you may choose to indent with any positive number of spaces
(1, 2, 3, 4, . . . ). However, it is considered good practice to indent with 4 spaces. You may configure your
editor to map the Tab key to a 4-space indentation.
Style guidelines
Long lines: you should not write very long lines that span over more than (e.g.) 80 characters. Long lines
can be broken with the \ character
Spaces
Write well-spaced code: put whitespaces after commas, around arithmetic operators, etc.:
>>> a = 1 # yes
>>> a=1 # too cramped
A certain number of rules for writing beautiful code (and more importantly using the same conven-
tions as anybody else!) are given in the Style Guide for Python Code.
Quick read
If you want to do a first quick pass through the Scipy lectures to learn the ecosystem, you can directly skip
to the next chapter: NumPy: creating and manipulating numerical data.
The remainder of this chapter is not necessary to follow the rest of the intro part. But be sure to come back
and finish this chapter later.
To be exhaustive, here is some information about input and output in Python. Since we will use the Numpy
methods to read and write files, you may skip this chapter at first reading.
We write or read strings to/from files (other types must be converted to strings). To write in a file:
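A sketch, assuming a file named workfile:
>>> f = open('workfile', 'w')   # opens the workfile file in write mode
>>> type(f)
<type 'file'>
>>> f.write('This is a test \nand another test')
>>> f.close()
To read from the file, re-open it in read mode:
In [1]: f = open('workfile', 'r')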
In [2]: s = f.read()
In [3]: print(s)
This is a test
and another test
In [4]: f.close()
See also:
For more details: https://docs.python.org/tutorial/inputoutput.html
In [8]: f.close()
File modes
Read-only: r
Write-only: w
Note: Create a new file or overwrite existing file.
Append a file: a
Read and Write: r+
Binary mode: b
Note: Use for binary files, especially on Windows.
Current directory:
In [17]: os.getcwd()
Out[17]: '/Users/cburns/src/scipy2009/scipy_2009_tutorial/source'
List a directory:
In [31]: os.listdir(os.curdir)
Out[31]:
['.index.rst.swo',
'.python_language.rst.swp',
'.view_array.py.swp',
'_static',
'_templates',
'basic_types.rst',
'conf.py',
'control_flow.rst',
'debugging.rst',
...
Make a directory:
In [32]: os.mkdir('junkdir')
In [41]: os.rmdir('foodir')
Delete a file:
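The file being deleted below was presumably created first, e.g.:
In [44]: fp = open('junk.txt', 'w')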
In [45]: fp.close()
In [47]: os.remove('junk.txt')
In [71]: fp.close()
In [72]: a = os.path.abspath('junk.txt')
In [73]: a
Out[73]: '/Users/cburns/src/scipy2009/scipy_2009_tutorial/source/junk.txt'
In [74]: os.path.split(a)
Out[74]: ('/Users/cburns/src/scipy2009/scipy_2009_tutorial/source',
'junk.txt')
In [78]: os.path.dirname(a)
Out[78]: '/Users/cburns/src/scipy2009/scipy_2009_tutorial/source'
In [79]: os.path.basename(a)
Out[79]: 'junk.txt'
In [80]: os.path.splitext(os.path.basename(a))
Out[80]: ('junk', '.txt')
In [84]: os.path.exists('junk.txt')
Out[84]: True
In [86]: os.path.isfile('junk.txt')
Out[86]: True
In [87]: os.path.isdir('junk.txt')
Out[87]: False
In [88]: os.path.expanduser('~/local')
Out[88]: '/Users/cburns/local'
In [8]: os.system('ls')
basic_types.rst demo.py functions.rst python_language.rst standard_library.rst
control_flow.rst exceptions.rst io.rst python-logo.png
demo2.py first_steps.rst oop.rst reusing_code.rst
In [20]: import sh
In [20]: com = sh.ls()
Walking a directory
Environment variables:
In [9]: import os
In [11]: os.environ.keys()
Out[11]:
['_',
'FSLDIR',
'TERM_PROGRAM_VERSION',
'FSLREMOTECALL',
'USER',
'HOME',
'PATH',
'PS1',
'SHELL',
'EDITOR',
'WORKON_HOME',
'PYTHONPATH',
...
In [12]: os.environ['PYTHONPATH']
Out[12]: '.:/Users/cburns/src/utils:/Users/cburns/src/nitools:
/Users/cburns/local/lib/python2.5/site-packages/:
/usr/local/lib/python2.5/site-packages/:
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5'
In [16]: os.getenv('PYTHONPATH')
Out[16]: '.:/Users/cburns/src/utils:/Users/cburns/src/nitools:
/Users/cburns/local/lib/python2.5/site-packages/:
/usr/local/lib/python2.5/site-packages/:
/Library/Frameworks/Python.framework/Versions/2.5/lib/python2.5'
In [19]: glob.glob('*.txt')
Out[19]: ['holy_grail.txt', 'junk.txt', 'newfile.txt']
In [117]: sys.platform
Out[117]: 'darwin'
In [118]: sys.version
Out[118]: '2.5.2 (r252:60911, Feb 22 2008, 07:57:53) \n
[GCC 4.0.1 (Apple Computer, Inc. build 5363)]'
In [119]: sys.prefix
Out[119]: '/Library/Frameworks/Python.framework/Versions/2.5'
In [100]: sys.argv
Out[100]: ['/Users/cburns/local/bin/ipython']
sys.path is a list of strings that specifies the search path for modules. Initialized from PYTHONPATH:
In [121]: sys.path
Out[121]:
['',
'/Users/cburns/local/bin',
'/Users/cburns/local/lib/python2.5/site-packages/grin-1.1-py2.5.egg',
'/Users/cburns/local/lib/python2.5/site-packages/argparse-0.8.0-py2.5.egg',
'/Users/cburns/local/lib/python2.5/site-packages/urwid-0.9.7.1-py2.5.egg',
'/Users/cburns/local/lib/python2.5/site-packages/yolk-0.4.1-py2.5.egg',
'/Users/cburns/local/lib/python2.5/site-packages/virtualenv-1.2-py2.5.egg',
...
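The pickle module stores arbitrary Python objects persistently; the load call below was presumably preceded by a dump along these lines (a sketch using open, which also works on Python 3):
In [1]: import pickle
In [2]: l = [1, None, 'Stan']
In [3]: pickle.dump(l, open('test.pkl', 'wb'))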
In [4]: pickle.load(open('test.pkl', 'rb'))
Out[4]: [1, None, 'Stan']
Exercise
path_site
It is likely that you have raised Exceptions if you have typed all the previous commands of the tutorial. For
example, you may have raised an exception if you entered a command with a typo.
Exceptions are raised by different kinds of errors arising when executing Python code. In your own code, you
may also catch errors, or define custom error types. You may want to look at the descriptions of the built-in
Exceptions when looking for the right exception type.
2.8.1 Exceptions
In [1]: 1/0
---------------------------------------------------------------------------
ZeroDivisionError: integer division or modulo by zero
In [2]: 1 + 'e'
---------------------------------------------------------------------------
TypeError: unsupported operand type(s) for +: 'int' and 'str'
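The dictionary queried below was presumably defined without the key 3, e.g.:
In [3]: d = {1: 1, 2: 2}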
In [4]: d[3]
---------------------------------------------------------------------------
KeyError: 3
In [5]: l = [1, 2, 3]
In [6]: l[4]
---------------------------------------------------------------------------
IndexError: list index out of range
In [7]: l.foobar
---------------------------------------------------------------------------
AttributeError: 'list' object has no attribute 'foobar'
As you can see, there are different types of exceptions for different errors.
try/except
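A sketch of a try/except loop consistent with x being 1 afterwards (raw_input is Python 2; use input() in Python 3):
In [8]: while True:
   ...:     try:
   ...:         x = int(raw_input('Please enter a number: '))
   ...:         break
   ...:     except ValueError:
   ...:         print('That was no valid number.  Try again...')
   ...:
Please enter a number: a
That was no valid number.  Try again...
Please enter a number: 1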
In [9]: x
Out[9]: 1
try/finally
In [10]: try:
....: x = int(raw_input('Please enter a number: '))
....: finally:
....: print('Thank you for your input')
....:
....:
Please enter a number: a
Thank you for your input
---------------------------------------------------------------------------
ValueError: invalid literal for int() with base 10: 'a'
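The print_sorted function called below presumably sorted when possible and ignored the failure otherwise (a sketch):
In [12]: def print_sorted(collection):
   ....:     try:
   ....:         collection.sort()
   ....:     except AttributeError:
   ....:         pass   # e.g. strings do not have a sort method
   ....:     print(collection)
   ....:
In [13]: print_sorted([1, 3, 2])
[1, 2, 3]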
In [14]: print_sorted('132')
132
In [16]: filter_name('Gaël')
OK, Gaël
Out[16]: 'Ga\xc3\xabl'
In [17]: filter_name('Stéfan')
---------------------------------------------------------------------------
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 2: ordinal not in
range(128)
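Exceptions can also be raised deliberately to pass messages between parts of the code; a sketch consistent with the final value of x below:
In [17]: def achilles_arrow(x):
   ....:     if abs(x - 1) < 1e-3:
   ....:         raise StopIteration
   ....:     x = 1 - (1 - x)/2.
   ....:     return x
   ....: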
In [18]: x = 0
In [19]: while True:
   ....:     try:
   ....:         x = achilles_arrow(x)
   ....:     except StopIteration:
   ....:         break
   ....:
In [20]: x
Out[20]: 0.9990234375
Use exceptions to notify certain conditions are met (e.g. StopIteration) or not (e.g. custom error raising)
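The Student class example that the next paragraph refers to is not reproduced above; a sketch consistent with the description:
>>> class Student(object):
...     def __init__(self, name):
...         self.name = name
...     def set_age(self, age):
...         self.age = age
...     def set_major(self, major):
...         self.major = major
...
>>> anna = Student('anna')
>>> anna.set_age(21)
>>> anna.set_major('physics')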
In the previous example, the Student class has __init__, set_age and set_major methods. Its attributes
are name, age and major. We can call these methods and attributes with the following notation:
classinstance.method or classinstance.attribute. The __init__ constructor is a special method we
call with: MyClass(init parameters if any).
Now, suppose we want to create a new class MasterStudent with the same methods and attributes as the
previous one, but with an additional internship attribute. We won't copy the previous class, but inherit from
it:
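A sketch of such an inheriting class:
>>> class MasterStudent(Student):
...     internship = 'mandatory, from March to June'
...
>>> james = MasterStudent('james')
>>> james.internship
'mandatory, from March to June'
>>> james.set_age(23)
>>> james.age
23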
The MasterStudent class inherited the Student class's attributes and methods.
Thanks to classes and object-oriented programming, we can organize code with different classes correspond-
ing to different objects we encounter (an Experiment class, an Image class, a Flow class, etc.), with their own
methods and attributes. Then we can use inheritance to consider variations around a base class and re-use
code. Ex : from a Flow base class, we can create derived StokesFlow, TurbulentFlow, PotentialFlow, etc.
Authors: Emmanuelle Gouillart, Didrik Pinte, Gaël Varoquaux, and Pauli Virtanen
This chapter gives an overview of NumPy, the core tool for performant numerical computing with Python.
Section contents
NumPy arrays
Python objects
high-level number objects: integers, floating point
containers: lists (costless insertion and append), dictionaries (fast lookup)
NumPy provides
extension package to Python for multi-dimensional arrays
closer to hardware (efficiency)
designed for scientific computation (convenience)
Also known as array oriented computing
In [1]: L = range(1000)
In [3]: a = np.arange(1000)
In [5]: np.array?
String Form:<built-in function array>
Docstring:
array(object, dtype=None, copy=True, order=None, subok=False, ndmin=0, ...
In [6]: np.con*?
np.concatenate
np.conj
np.conjugate
np.convolve
Import conventions
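The recommended convention to import NumPy is:
>>> import numpy as np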
1-D:
2-D, 3-D, . . . :
>>> c = np.array([[[1], [2]], [[3], [4]]])
>>> c
array([[[1],
        [2]],

       [[3],
        [4]]])
>>> c.shape
(2, 2, 1)
Create a simple two dimensional array. First, redo the examples from above. And then create your
own: how about odd numbers counting backwards on the first row, and even numbers on the second?
Use the functions len(), numpy.shape() on these arrays. How do they relate to each other? And to
the ndim attribute of the arrays?
Evenly spaced:
or by number of points:
Common arrays:
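Sketches for the three headings above (evenly spaced values, values by number of points, and common constructors):
>>> a = np.arange(10)          # evenly spaced: 0 .. n-1
>>> b = np.arange(1, 9, 2)     # start, end (exclusive), step
>>> b
array([1, 3, 5, 7])
>>> c = np.linspace(0, 1, 6)   # by number of points: start, end, num-points
>>> c
array([ 0. ,  0.2,  0.4,  0.6,  0.8,  1. ])
>>> np.ones((3, 3))            # common arrays; see also np.zeros, np.eye, np.diag
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.]])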
You may have noticed that, in some instances, array elements are displayed with a trailing dot (e.g. 2. vs 2).
This is due to a difference in the data-type used:
Tip: Different data-types allow us to store data more compactly in memory, but most of the time we simply
work with floating point numbers. Note that, in the example above, NumPy auto-detects the data-type from
the input.
Other data types exist as well: Bool, Strings, and much more: int32, int64, uint32, uint64, . . .
Now that we have our first data arrays, we are going to visualize them.
Start by launching IPython:
$ ipython
Or the notebook:
$ ipython notebook
>>> %matplotlib
Or, from the notebook, enable plots in the notebook:
>>> %matplotlib inline
The inline is important for the notebook, so that plots are displayed in the notebook and not in a new window.
Matplotlib is a 2D plotting package. We can import its functions as below:
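A common convention is:
>>> import matplotlib.pyplot as plt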
And then use (note that you have to use show explicitly if you have not enabled interactive plots with
%matplotlib):
1D plotting:
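A sketch of a simple line and dot plot:
>>> x = np.linspace(0, 3, 20)
>>> y = np.linspace(0, 9, 20)
>>> plt.plot(x, y)        # line plot
>>> plt.plot(x, y, 'o')   # dot plot
>>> plt.show()            # <-- shows the plot (not needed with interactive plots)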
See also:
More in the: matplotlib chapter
The items of an array can be accessed and assigned to in the same way as other Python sequences (e.g. lists):
>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
Warning: Indices begin at 0, like other Python sequences (and C/C++). In contrast, in Fortran or Matlab,
indices begin at 1.
>>> a[::-1]
array([9, 8, 7, 6, 5, 4, 3, 2, 1, 0])
>>> a = np.diag(np.arange(3))
>>> a
array([[0, 0, 0],
[0, 1, 0],
[0, 0, 2]])
>>> a[1, 1]
1
>>> a[2, 1] = 10 # third line, second column
>>> a
array([[ 0, 0, 0],
[ 0, 1, 0],
[ 0, 10, 2]])
>>> a[1]
array([0, 1, 0])
Note:
In 2D, the first dimension corresponds to rows, the second to columns.
for multidimensional a, a[0] is interpreted by taking all elements in the unspecified dimensions.
>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a[2:9:3] # [start:end:step]
array([2, 5, 8])
>>> a[:4]
array([0, 1, 2, 3])
Not all three slice components are required: by default, start is 0, end is the last element and step is 1:
>>> a[1:3]
array([1, 2])
>>> a[::2]
array([0, 2, 4, 6, 8])
>>> a[3:]
array([3, 4, 5, 6, 7, 8, 9])
>>> a = np.arange(10)
>>> a[5:] = 10
>>> a
array([ 0, 1, 2, 3, 4, 10, 10, 10, 10, 10])
>>> b = np.arange(5)
>>> a[5:] = b[::-1]
>>> a
array([0, 1, 2, 3, 4, 4, 3, 2, 1, 0])
Try the different flavours of slicing, using start, end and step: starting from a linspace, try to obtain
odd numbers counting backwards, and even numbers counting forwards.
Reproduce the slices in the diagram above. You may use the following expression to create the array:
>>> np.arange(6) + np.arange(0, 51, 10)[:, np.newaxis]
array([[ 0, 1, 2, 3, 4, 5],
[10, 11, 12, 13, 14, 15],
[20, 21, 22, 23, 24, 25],
[30, 31, 32, 33, 34, 35],
[40, 41, 42, 43, 44, 45],
[50, 51, 52, 53, 54, 55]])
Skim through the documentation for np.tile, and use this function to construct the array:
[[4, 3, 4, 3, 4, 3],
[2, 1, 2, 1, 2, 1],
[4, 3, 4, 3, 4, 3],
[2, 1, 2, 1, 2, 1]]
A slicing operation creates a view on the original array, which is just a way of accessing array data. Thus the
original array is not copied in memory. You can use np.may_share_memory() to check if two arrays share the
same memory block. Note however, that this uses heuristics and may give you false positives.
When modifying the view, the original array is modified as well:
>>> a = np.arange(10)
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> b = a[::2]
>>> b
array([0, 2, 4, 6, 8])
>>> np.may_share_memory(a, b)
True
>>> b[0] = 12
>>> b
array([12, 2, 4, 6, 8])
>>> a # (!)
array([12, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> a = np.arange(10)
>>> c = a[::2].copy() # force a copy
>>> c[0] = 12
>>> a
array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])
>>> np.may_share_memory(a, c)
False
This behavior can be surprising at first sight. . . but it allows saving both memory and time.
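Worked example: a prime-number sieve. The boolean array used below was presumably initialized as (a sketch):
>>> is_prime = np.ones((100,), dtype=bool)
>>> is_prime[:2] = 0   # 0 and 1 are not primes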
For each integer j starting from 2, cross out its higher multiples:
>>> N_max = int(np.sqrt(len(is_prime) - 1))
>>> for j in range(2, N_max + 1):
... is_prime[2*j::j] = False
Tip: NumPy arrays can be indexed with slices, but also with boolean or integer arrays (masks). This method
is called fancy indexing. It creates copies, not views.
>>> np.random.seed(3)
>>> a = np.random.randint(0, 21, 15)
>>> a
array([10, 3, 8, 0, 19, 10, 11, 9, 10, 6, 0, 20, 12, 7, 14])
>>> (a % 3 == 0)
array([False, True, False, True, False, False, False, True, False,
True, True, False, True, False, False], dtype=bool)
>>> mask = (a % 3 == 0)
>>> extract_from_a = a[mask] # or, a[a%3==0]
>>> extract_from_a # extract a sub-array with the mask
array([ 3, 0, 9, 6, 0, 12])
Indexing with a mask can be very useful to assign a new value to a sub-array:
>>> a[a % 3 == 0] = -1
>>> a
array([10, -1, 8, -1, 19, 10, 11, -1, 10, -1, -1, 20, -1, 7, 14])
Indexing can be done with an array of integers, where the same index is repeated several times:
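For example (a sketch):
>>> a = np.arange(0, 100, 10)
>>> a[[2, 3, 2, 4, 2]]   # note: [2, 3, 2, 4, 2] is a Python list
array([20, 30, 20, 40, 20])
>>> a[[9, 7]] = -100     # new values can be assigned with this kind of indexing
>>> a
array([   0,   10,   20,   30,   40,   50,   60, -100,   80, -100])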
Tip: When a new array is created by indexing with an array of integers, the new array has the same shape as
the array of integers:
>>> a = np.arange(10)
>>> idx = np.array([[3, 4], [9, 7]])
>>> idx.shape
(2, 2)
>>> a[idx]
array([[3, 4],
[9, 7]])
Section contents
Elementwise operations
Basic reductions
Broadcasting
Array shape manipulation
Sorting data
Summary
Basic operations
With scalars:
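A sketch of scalar operations, consistent with the array a used just below:
>>> a = np.array([1, 2, 3, 4])
>>> a + 1
array([2, 3, 4, 5])
>>> 2**a
array([ 2,  4,  8, 16])
All arithmetic operates elementwise: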
>>> b = np.ones(4) + 1
>>> a - b
array([-1., 0., 1., 2.])
>>> a * b
array([ 2., 4., 6., 8.])
>>> j = np.arange(5)
>>> 2**(j + 1) - j
array([ 2, 3, 6, 13, 28])
These operations are of course much faster than if you did them in pure python:
>>> a = np.arange(10000)
>>> %timeit a + 1
10000 loops, best of 3: 24.3 us per loop
>>> l = range(10000)
>>> %timeit [i+1 for i in l]
1000 loops, best of 3: 861 us per loop
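Array multiplication is not matrix multiplication; a sketch consistent with the result below:
>>> c = np.ones((3, 3))
>>> c * c          # elementwise, NOT matrix multiplication
array([[ 1.,  1.,  1.],
       [ 1.,  1.,  1.],
       [ 1.,  1.,  1.]])
Matrix multiplication: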
>>> c.dot(c)
array([[ 3., 3., 3.],
[ 3., 3., 3.],
[ 3., 3., 3.]])
Try simple arithmetic elementwise operations: add even elements with odd elements
Time them against their pure python counterparts using %timeit.
Generate:
[2**0, 2**1, 2**2, 2**3, 2**4]
a_j = 2^(3*j) - j
Other operations
Comparisons:
Logical operations:
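Sketches for the two headings above (elementwise comparisons and logical operations):
>>> a = np.array([1, 2, 3, 4])
>>> b = np.array([4, 2, 2, 4])
>>> a == b
array([False,  True, False,  True], dtype=bool)
>>> a > b
array([False, False,  True, False], dtype=bool)
>>> a = np.array([1, 1, 0, 0], dtype=bool)
>>> b = np.array([1, 0, 1, 0], dtype=bool)
>>> np.logical_or(a, b)
array([ True,  True,  True, False], dtype=bool)
>>> np.logical_and(a, b)
array([ True, False, False, False], dtype=bool)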
Transcendental functions:
>>> a = np.arange(5)
>>> np.sin(a)
array([ 0. , 0.84147098, 0.90929743, 0.14112001, -0.7568025 ])
>>> np.log(a)
array([ -inf, 0. , 0.69314718, 1.09861229, 1.38629436])
>>> np.exp(a)
array([ 1. , 2.71828183, 7.3890561 , 20.08553692, 54.59815003])
Shape mismatches
>>> a = np.arange(4)
>>> a + np.array([1, 2])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: operands could not be broadcast together with shapes (4) (2)
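The caveat below refers to in-place operations mixing an array and a view of itself, e.g. transposition (a sketch):
>>> a = np.triu(np.ones((3, 3)), 1)   # upper triangle; see help(np.triu)
>>> a += a.T   # wrong: a.T is a view of a, so the result is unreliable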
It will work for small arrays (because of buffering) but fail for large ones, in unpredictable ways.
The sub-module numpy.linalg implements basic linear algebra, such as solving linear systems, singular
value decomposition, etc. However, it is not guaranteed to be compiled using efficient routines, and thus
we recommend the use of scipy.linalg, as detailed in section Linear algebra operations: scipy.linalg
Computing sums
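A sketch of basic sums, including sums by rows and by columns:
>>> x = np.array([1, 2, 3, 4])
>>> np.sum(x)
10
>>> x.sum()
10
>>> x = np.array([[1, 1], [2, 2]])
>>> x.sum(axis=0)   # columns (first dimension)
array([3, 3])
>>> x.sum(axis=1)   # rows (second dimension)
array([2, 4])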
>>> x = np.random.rand(2, 2, 2)
>>> x.sum(axis=2)[0, 1]
1.14764...
Other reductions
Logical operations:
Statistics:
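Sketches for the headings above (extrema, logical reductions, basic statistics):
>>> x = np.array([1, 3, 2])
>>> x.min(), x.max(), x.argmin(), x.argmax()
(1, 3, 0, 1)
>>> np.all([True, True, False])
False
>>> np.any([True, True, False])
True
>>> x = np.array([1, 2, 3, 1])
>>> x.mean(), np.median(x), x.std()
(1.75, 1.5, 0.82915619758884995)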
Exercise: Reductions
Given there is a sum, what other function might you expect to see?
What is the difference between sum and cumsum?
Data in populations.txt describes the populations of hares and lynxes (and carrots) in northern Canada
during 20 years.
You can view the data in an editor, or alternatively in IPython (both shell and notebook):
In [1]: !cat data/populations.txt
Tip: Let us consider a simple 1D random walk process: at each time step a walker jumps right or left with
equal probability.
We are interested in finding the typical distance from the origin of a random walker after t left or right
jumps. We are going to simulate many walkers to find this law, and we are going to do so using array
computing tricks: we are going to create a 2D array with the stories (each walker has a story) in one direction,
and the time in the other:
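The simulation parameters used below were presumably defined as (illustrative values):
>>> n_stories = 1000   # number of walkers
>>> t_max = 200        # time during which we follow the walker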
>>> t = np.arange(t_max)
>>> steps = 2 * np.random.randint(0, 1 + 1, (n_stories, t_max)) - 1  # +1 because the high value is exclusive
>>> np.unique(steps) # Verification: all steps are 1 or -1
array([-1, 1])
3.2.3 Broadcasting
Let's verify:
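A sketch of broadcasting a 1D array against a 2D array:
>>> a = np.tile(np.arange(0, 40, 10), (3, 1)).T
>>> a
array([[ 0,  0,  0],
       [10, 10, 10],
       [20, 20, 20],
       [30, 30, 30]])
>>> b = np.array([0, 1, 2])
>>> a + b
array([[ 0,  1,  2],
       [10, 11, 12],
       [20, 21, 22],
       [30, 31, 32]])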
A useful trick:
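Using np.newaxis to add a dimension and let broadcasting do the tiling (a sketch):
>>> a = np.arange(0, 40, 10)
>>> a.shape
(4,)
>>> a = a[:, np.newaxis]   # adds a new axis -> 2D array
>>> a.shape
(4, 1)
>>> a + b
array([[ 0,  1,  2],
       [10, 11, 12],
       [20, 21, 22],
       [30, 31, 32]])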
Tip: Broadcasting seems a bit magical, but it is actually quite natural to use it when we want to solve a problem
whose output data is an array with more dimensions than input data.
Lets construct an array of distances (in miles) between cities of Route 66: Chicago, Springfield, Saint-Louis,
Tulsa, Oklahoma City, Amarillo, Santa Fe, Albuquerque, Flagstaff and Los Angeles.
>>> mileposts = np.array([0, 198, 303, 736, 871, 1175, 1475, 1544,
... 1913, 2448])
>>> distance_array = np.abs(mileposts - mileposts[:, np.newaxis])
>>> distance_array
array([[ 0, 198, 303, 736, 871, 1175, 1475, 1544, 1913, 2448],
[ 198, 0, 105, 538, 673, 977, 1277, 1346, 1715, 2250],
[ 303, 105, 0, 433, 568, 872, 1172, 1241, 1610, 2145],
[ 736, 538, 433, 0, 135, 439, 739, 808, 1177, 1712],
[ 871, 673, 568, 135, 0, 304, 604, 673, 1042, 1577],
[1175, 977, 872, 439, 304, 0, 300, 369, 738, 1273],
[1475, 1277, 1172, 739, 604, 300, 0, 69, 438, 973],
[1544, 1346, 1241, 808, 673, 369, 69, 0, 369, 904],
       [1913, 1715, 1610, 1177, 1042,  738,  438,  369,    0,  535],
       [2448, 2250, 2145, 1712, 1577, 1273,  973,  904,  535,    0]])
A lot of grid-based or network-based problems can also use broadcasting. For instance, if we want to compute
the distance from the origin of points on a 10x10 grid, we can do
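A sketch of this computation with broadcasting, on a 10x10 grid:
>>> x, y = np.arange(10), np.arange(10)[:, np.newaxis]
>>> distance = np.sqrt(x ** 2 + y ** 2)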
Or in color:
>>> plt.pcolor(distance)
>>> plt.colorbar()
Tip: So, np.ogrid is very useful as soon as we have to handle computations on a grid. On the other hand,
np.mgrid directly provides matrices full of indices for cases where we can't (or don't want to) benefit from
broadcasting:
See also:
Broadcasting: discussion of broadcasting in the Advanced NumPy chapter.
Flattening
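A sketch of flattening with ravel, consistent with the array a used in the reshaping example below:
>>> a = np.array([[1, 2, 3], [4, 5, 6]])
>>> a.ravel()
array([1, 2, 3, 4, 5, 6])
>>> a.T.ravel()
array([1, 4, 2, 5, 3, 6])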
Reshaping
>>> a.shape
(2, 3)
>>> b = a.ravel()
>>> b = b.reshape((2, 3))
>>> b
array([[1, 2, 3],
[4, 5, 6]])
Or,
Tip:
>>> b[0, 0] = 99
>>> a
array([[99, 2, 3],
[ 4, 5, 6]])
To understand this you need to learn more about the memory layout of a numpy array.
Adding a dimension
Indexing with the np.newaxis object allows us to add an axis to an array (you have seen this already above in
the broadcasting section):
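For example, with z a 1D array (a sketch):
>>> z = np.array([1, 2, 3])
>>> z
array([1, 2, 3])
>>> z[:, np.newaxis]
array([[1],
       [2],
       [3]])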
>>> z[np.newaxis, :]
array([[1, 2, 3]])
Dimension shuffling
>>> a = np.arange(4*3*2).reshape(4, 3, 2)
>>> a.shape
(4, 3, 2)
>>> a[0, 2, 1]
5
>>> b = a.transpose(1, 2, 0)
>>> b.shape
(3, 2, 4)
>>> b[2, 1, 0]
5
>>> b[2, 1, 0] = -1
>>> a[0, 2, 1]
-1
Resizing
>>> a = np.arange(4)
>>> a.resize((8,))
>>> a
array([0, 1, 2, 3, 0, 0, 0, 0])
>>> b = a
>>> a.resize((4,))
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
ValueError: cannot resize an array that has been referenced or is
referencing another array in this way. Use the resize function
Look at the docstring for reshape, especially the notes section which has some more information
about copies and views.
Use flatten as an alternative to ravel. What is the difference? (Hint: check which one returns a view
and which a copy)
Experiment with transpose for dimension shuffling.
In-place sort:
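The array sorted below was presumably (a sketch):
>>> a = np.array([[4, 3, 5], [1, 2, 1]])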
>>> a.sort(axis=1)
>>> a
array([[3, 4, 5],
[1, 1, 2]])
Exercise: Sorting
3.2.6 Summary
Know miscellaneous operations on arrays, such as finding the mean or max (array.max(), array.
mean()). No need to retain everything, but have the reflex to search in the documentation (online docs,
help(), lookfor())!!
For advanced use: master the indexing with arrays of integers, as well as broadcasting. Know more
NumPy functions to handle various array operations.
Quick read
If you want to do a first quick pass through the Scipy lectures to learn the ecosystem, you can directly skip
to the next chapter: Matplotlib: plotting.
The remainder of this chapter is not necessary to follow the rest of the intro part. But be sure to come back
and finish this chapter, as well as to do some more exercises.
Section contents
Casting
Forced casts:
Rounding:
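Sketches for the headings above:
>>> np.array([1, 2, 3]) + 1.5   # mixed operations: the "bigger" type wins
array([ 2.5,  3.5,  4.5])
>>> a = np.array([1.7, 1.2, 1.6])
>>> a.astype(int)               # forced cast: truncates to integer
array([1, 1, 1])
>>> b = np.around(a)            # rounding; the result is still floating-point
>>> b
array([ 2.,  1.,  2.])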
Integers (signed):
int8 8 bits
int16 16 bits
int32 32 bits (same as int on 32-bit platform)
int64 64 bits (same as int on 64-bit platform)
Unsigned integers:
uint8 8 bits
uint16 16 bits
uint32 32 bits
uint64 64 bits
Long integers
Python 2 has a specific type for long integers, that cannot overflow, represented with an L at the end. In
Python 3, all integers are long, and thus cannot overflow.
>>> np.iinfo(np.int64).max, 2**63 - 1
(9223372036854775807, 9223372036854775807L)
Floating-point numbers:
float16 16 bits
float32 32 bits
float64 64 bits (same as float)
float96 96 bits, platform-dependent (same as np.longdouble)
float128 128 bits, platform-dependent (same as np.longdouble)
>>> np.finfo(np.float32).eps
1.1920929e-07
>>> np.finfo(np.float64).eps
2.2204460492503131e-16
If you don't know you need special data types, then you probably don't.
Comparison on using float32 instead of float64:
Half the size in memory and on disk
Half the memory bandwidth required (may be a bit faster in some operations)
In [1]: a = np.zeros((int(1e6),), dtype=np.float64)
But: bigger rounding errors sometimes in surprising places (i.e., don't use them unless you really
need them)
>>> samples['sensor_code']
array(['ALFA', 'BETA', 'TAU', 'ALFA', 'ALFA', 'TAU'],
dtype='|S4')
>>> samples['value']
array([ 0.37, 0.11, 0.13, 0.37, 0.11, 0.13])
>>> samples[0]
('ALFA', 1.0, 0.37)
Note: There are a bunch of other syntaxes for constructing structured arrays, see here and here.
For floats one could use NaNs, but masks work for all types:
masked_array(data = [1 -- 3 --],
mask = [False True False True],
fill_value = 999999)
While it is off topic in a chapter on numpy, let's take a moment to recall good coding practices, which really do
pay off in the long run:
Good practices
Explicit variable names (no need of a comment to explain what is in the variable)
Style: spaces after commas, around =, etc.
A certain number of rules for writing beautiful code (and, more importantly, using the same conven-
tions as everybody else!) are given in the Style Guide for Python Code and the Docstring Conventions
page (to manage help strings).
Except in some rare cases, write variable names and comments in English.
Section contents
Polynomials
Loading data files
3.4.1 Polynomials
array([-1. , 0.33333333])
>>> p.order
2
See http://docs.scipy.org/doc/numpy/reference/routines.polynomials.poly1d.html for more.
NumPy also has a more sophisticated polynomial interface, which supports e.g. the Chebyshev basis.
3x² + 2x − 1:
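A minimal sketch with the np.polynomial.Polynomial interface (note that the coefficients are given in increasing order of powers):
>>> p = np.polynomial.Polynomial([-1, 2, 3])   # coefficients in increasing order of degree
>>> p(0)
-1.0
>>> p.roots()
array([-1.        ,  0.33333333])
>>> p.degree()
2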
Example using polynomials in Chebyshev basis, for polynomials in range [-1, 1]:
Text files
Example: populations.txt:
# year hare lynx carrot
1900 30e3 4e3 48300
1901 47.2e3 6.1e3 48200
1902 70.2e3 9.8e3 41500
1903 77.4e3 35.2e3 38200
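The file can be loaded with np.loadtxt (the path is assumed relative to the working directory):
>>> data = np.loadtxt('data/populations.txt')
>>> year, hares, lynxes, carrots = data.T    # columns to variables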
Note: If you have a complicated text file, you can try:
np.genfromtxt
Using Python's I/O functions and e.g. regexps for parsing (Python is quite well suited for this)
Images
Using Matplotlib:
>>> plt.imshow(plt.imread('red_elephant.png'))
<matplotlib.image.AxesImage object at ...>
Other libraries:
NumPy has its own binary format, not portable but with efficient I/O:
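For instance (file name illustrative):
>>> data = np.ones((3, 3))
>>> np.save('pop.npy', data)       # writes pop.npy
>>> data3 = np.load('pop.npy')     # reads it back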
Write a Python script that loads data from populations.txt: drop the last column and the first 5 rows.
Save the smaller dataset to pop2.txt.
NumPy internals
If you are interested in the NumPy internals, there is a good discussion in Advanced NumPy.
[[1, 6, 11],
[2, 7, 12],
[3, 8, 13],
[4, 9, 14],
[5, 10, 15]]
and generate a new array containing its 2nd and 4th rows.
2. Divide each column of the array:
elementwise with the array b = np.array([1., 5, 10, 15, 20]). (Hint: np.newaxis).
3. Harder one: Generate a 10 x 3 array of random numbers (in range [0,1]). For each row, pick the number
closest to 0.5.
Use abs and argsort to find the column j closest for each row.
Use fancy indexing to extract the numbers. (Hint: a[i,j] the array i must contain the row num-
bers corresponding to stuff in j.)
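One possible solution sketch for item 3 (the array name a is illustrative):
>>> a = np.random.rand(10, 3)
>>> j = np.abs(a - 0.5).argsort(axis=1)[:, 0]   # column of the value closest to 0.5, per row
>>> i = np.arange(10)                           # matching row numbers
>>> closest = a[i, j]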
Let's do some manipulations on numpy arrays by starting with an image of a raccoon. scipy provides a 2D
array of this image with the scipy.misc.face function:
Here are a few images we will be able to obtain with our manipulations: use different colormaps, crop the
image, change some parts of the image.
The face is displayed in false colors. A colormap must be specified for it to be displayed in grey.
Create an array of the image with a narrower centering: for example, remove 100 pixels from all the
borders of the image. To check the result, display this new array with imshow.
We will now frame the face with a black locket. For this, we need to create a mask corresponding to
the pixels we want to be black. The center of the face is around (660, 330), so we defined the mask
by this condition (y-300)**2 + (x-660)**2
then we assign the value 0 to the pixels of the image corresponding to the mask. The syntax is
extremely simple and intuitive:
>>> face[mask] = 0
>>> plt.imshow(face)
<matplotlib.image.AxesImage object at 0x...>
Follow-up: copy all instructions of this exercise in a script called face_locket.py then execute this
script in IPython with %run face_locket.py.
Change the circle to an ellipse.
The data in populations.txt describes the populations of hares and lynxes (and carrots) in northern Canada
during 20 years:
6. Compare (plot) the change in hare population (see help(np.gradient)) and the number of lynxes.
Check correlation (see help(np.corrcoef)).
. . . all without for-loops.
Solution: Python source file
Write a function f(a, b, c) that returns a**b - c. Form a 24x12x6 array containing its values in parameter
ranges [0,1] x [0,1] x [0,1].
Approximate the 3-d integral
∫₀¹ ∫₀¹ ∫₀¹ (a^b − c) da db dc
over this volume with the mean. The exact result is ln 2 − 1/2 ≈ 0.1931 . . . what is your relative error?
(Hints: use elementwise operations and broadcasting. You can make np.ogrid give a number of points in
given range with np.ogrid[0:1:20j].)
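Following those hints, one possible sketch:
>>> a, b, c = np.ogrid[0:1:24j, 0:1:12j, 0:1:6j]
>>> samples = a ** b - c              # broadcasting yields a (24, 12, 6) array
>>> approx = samples.mean()
>>> exact = np.log(2) - 0.5
>>> abs(approx - exact) / exact       # relative error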
Reminder Python functions:
N_max = 50
some_threshold = 50

c = x + 1j*y
z = 0
for j in range(N_max):
    z = z**2 + c
2. Do the iteration
3. Form the 2-d boolean mask indicating which points are in the set
4. Save the result to an image with:
2D plotting
import numpy as np
import matplotlib.pyplot as plt
1D plotting
import numpy as np
import matplotlib.pyplot as plt
x = np.linspace(0, 3, 20)
y = np.linspace(0, 9, 20)
plt.plot(x, y)
plt.plot(x, y, 'o')
plt.show()
Distances exercise
import numpy as np
import matplotlib.pyplot as plt
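The body of this example was lost in extraction; a minimal sketch consistent with the broadcasting example earlier in this chapter:
x, y = np.arange(10), np.arange(10)[:, np.newaxis]
distance = np.sqrt(x ** 2 + y ** 2)

plt.pcolor(distance)
plt.colorbar()
plt.show()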
Fitting to polynomial
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(12)
x = np.linspace(0, 1, 20)
y = np.cos(x) + 0.3*np.random.rand(20)
p = np.poly1d(np.polyfit(x, y, 3))
t = np.linspace(0, 1, 200)
plt.plot(x, y, 'o', t, p(t), '-')
plt.show()
import numpy as np
import matplotlib.pyplot as plt
np.random.seed(0)
x = np.linspace(-1, 1, 2000)
y = np.cos(x) + 0.3*np.random.rand(2000)
p = np.polynomial.Chebyshev.fit(x, y, 90)
t = np.linspace(-1, 1, 200)
plt.plot(x, y, 'r.')
plt.plot(t, p(t), 'k-', lw=3)
plt.show()
Population exercise
import numpy as np
import matplotlib.pyplot as plt
data = np.loadtxt('../data/populations.txt')
year, hares, lynxes, carrots = data.T
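The plotting commands were lost in extraction; a minimal sketch:
plt.plot(year, hares, year, lynxes, year, carrots)
plt.legend(('Hare', 'Lynx', 'Carrot'))
plt.show()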
import numpy as np
import matplotlib.pyplot as plt
# original figure
plt.figure()
img = plt.imread('../data/elephant.png')
plt.imshow(img)
plt.figure()
img_red = img[:, :, 0]
plt.imshow(img_red, cmap=plt.cm.gray)
# lower resolution
plt.figure()
img_tiny = img[::6, ::6]
plt.imshow(img_tiny, interpolation='nearest')
plt.show()
Mandelbrot set
import numpy as np
import matplotlib.pyplot as plt
from numpy import newaxis

def compute_mandelbrot(N_max, some_threshold, nx, ny):
    # A grid of c-values covering [-2, 1] x [-1.5, 1.5]
    x = np.linspace(-2, 1, nx)
    y = np.linspace(-1.5, 1.5, ny)
    c = x[:, newaxis] + 1j*y[newaxis, :]

    # Mandelbrot iteration
    z = c
    for j in range(N_max):
        z = z**2 + c

    mandelbrot_set = (abs(z) < some_threshold)
    return mandelbrot_set
Plot distance as a function of time for a random walk together with the theoretical result
import numpy as np
import matplotlib.pyplot as plt

n_stories = 1000  # number of independent walkers (value assumed)
t_max = 200       # duration of the walk (value assumed)

t = np.arange(t_max)
# Steps can be -1 or 1 (note that randint excludes the upper limit)
steps = 2 * np.random.randint(0, 1 + 1, (n_stories, t_max)) - 1
Thanks
Many thanks to Bill Wing and Christoph Deil for review and corrections.
Chapter contents
Introduction
Simple plot
Figures, Subplots, Axes and Ticks
Other Types of Plots: examples and exercises
Beyond this tutorial
Quick references
Full code examples
4.1 Introduction
Tip: Matplotlib is probably the most used Python package for 2D-graphics. It provides both a quick way to
visualize data from Python and publication-quality figures in many formats. We are going to explore matplotlib
in interactive mode covering most common cases.
Tip: The Jupyter notebook and the IPython enhanced interactive Python, are tuned for the scientific-
computing workflow in Python, in combination with Matplotlib:
In [1]: %matplotlib
Jupyter notebook: In the notebook, insert the following magic at the beginning of the notebook:
%matplotlib inline
4.1.2 pyplot
Tip: pyplot provides a procedural interface to the matplotlib object-oriented plotting library. It is modeled
closely after Matlab. Therefore, the majority of plotting commands in pyplot have Matlab analogs with
similar arguments. Important commands are explained with interactive examples.
Tip: In this section, we want to draw the cosine and sine functions on the same plot. Starting from the default
settings, we'll enrich the figure step by step to make it nicer.
First step is to get the data for the sine and cosine functions:
import numpy as np
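The data themselves (the same two lines appear in the Exercise 1 listing at the end of this chapter):
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)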
X is now a numpy array with 256 values ranging from −π to +π (included). C is the cosine (256 values) and S is
the sine (256 values).
To run the example, you can type them in an IPython interactive session:
$ ipython --pylab
Tip: You can also download each of the examples and run it using regular python, but you will lose interactive
data manipulation:
$ python plot_exercise_1.py
You can get source for each step by clicking on the corresponding figure.
Hint: Documentation
plot tutorial
plot() command
Tip: Matplotlib comes with a set of default settings that allow customizing all kinds of properties. You can
control the defaults of almost every property in matplotlib: figure size and dpi, line width, color and style,
axes, axis and grid properties, text and font properties and so on.
import numpy as np
import matplotlib.pyplot as plt
plt.plot(X, C)
plt.plot(X, S)
plt.show()
Hint: Documentation
Customizing matplotlib
In the script below, we've instantiated (and commented) all the figure settings that influence the appearance
of the plot.
Tip: The settings have been explicitly set to their default values, but now you can interactively play with the
values to explore their effect (see Line properties and Line styles below).
import numpy as np
import matplotlib.pyplot as plt
# Set x limits
plt.xlim(-4.0, 4.0)
# Set x ticks
plt.xticks(np.linspace(-4, 4, 9, endpoint=True))
# Set y limits
plt.ylim(-1.0, 1.0)
# Set y ticks
plt.yticks(np.linspace(-1, 1, 5, endpoint=True))
Hint: Documentation
Controlling line properties
Line API
Tip: First step, we want to have the cosine in blue and the sine in red and a slightly thicker line for both of
them. We'll also slightly alter the figure size to make it more horizontal.
...
plt.figure(figsize=(10, 6), dpi=80)
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-")
...
Hint: Documentation
xlim() command
ylim() command
Tip: Current limits of the figure are a bit too tight and we want to make some space in order to clearly see all
data points.
...
plt.xlim(X.min() * 1.1, X.max() * 1.1)
plt.ylim(C.min() * 1.1, C.max() * 1.1)
...
Hint: Documentation
xticks() command
yticks() command
Tick container
Tick locating and formatting
Tip: Current ticks are not ideal because they do not show the interesting values (±π, ±π/2) for sine and
cosine. We'll change them such that they show only these values.
...
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi])
plt.yticks([-1, 0, +1])
...
Hint: Documentation
Working with text
xticks() command
yticks() command
set_xticklabels()
set_yticklabels()
Tip: Ticks are now properly placed but their label is not very explicit. We could guess that 3.142 is π but it
would be better to make it explicit. When we set tick values, we can also provide a corresponding label in the
second argument list. Note that we'll use LaTeX to allow for nice rendering of the label.
...
plt.xticks([-np.pi, -np.pi/2, 0, np.pi/2, np.pi],
[r'$-\pi$', r'$-\pi/2$', r'$0$', r'$+\pi/2$', r'$+\pi$'])
plt.yticks([-1, 0, +1],
[r'$-1$', r'$0$', r'$+1$'])
...
Hint: Documentation
Spines
Axis container
Transformations tutorial
Tip: Spines are the lines connecting the axis tick marks and noting the boundaries of the data area. They can
be placed at arbitrary positions and until now, they were on the border of the axis. We'll change that since we
want to have them in the middle. Since there are four of them (top/bottom/left/right), we'll discard the top
and right by setting their color to none and we'll move the bottom and left ones to coordinate 0 in data space
coordinates.
...
ax = plt.gca() # gca stands for 'get current axis'
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
...
Hint: Documentation
Legend guide
legend() command
Legend API
Tip: Let's add a legend in the upper left corner. This only requires adding the keyword argument label (that
will be used in the legend box) to the plot commands.
...
plt.plot(X, C, color="blue", linewidth=2.5, linestyle="-", label="cosine")
plt.plot(X, S, color="red", linewidth=2.5, linestyle="-", label="sine")
plt.legend(loc='upper left')
...
Hint: Documentation
Annotating axis
annotate() command
Tip: Let's annotate some interesting points using the annotate command. We chose the 2π/3 value and we
want to annotate both the sine and the cosine. We'll first draw a marker on the curve as well as a straight dotted
line. Then, we'll use the annotate command to display some text with an arrow.
...
t = 2 * np.pi / 3
plt.plot([t, t], [0, np.cos(t)], color='blue', linewidth=2.5, linestyle="--")
plt.scatter([t, ], [np.cos(t), ], 50, color='blue')
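The annotation call itself (identical to the one in the Exercise 9 listing later in this chapter):
plt.annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
             xy=(t, np.sin(t)), xycoords='data',
             xytext=(+10, +30), textcoords='offset points', fontsize=16,
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
...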
Hint: Documentation
Artists
BBox
Tip: The tick labels are now hardly visible because of the blue and red lines. We can make them bigger and we
can also adjust their properties such that they'll be rendered on a semi-transparent white background. This
will allow us to see both the data and the labels.
...
for label in ax.get_xticklabels() + ax.get_yticklabels():
    label.set_fontsize(16)
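The semi-transparent background itself is set on each label with set_bbox; the styling values below are illustrative:
    label.set_bbox(dict(facecolor='white', edgecolor='None', alpha=0.65))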
A figure in matplotlib means the whole window in the user interface. Within this figure there can be subplots.
Tip: So far we have used implicit figure and axes creation. This is handy for fast plots. We can have more
control over the display using figure, subplot, and axes explicitly. While subplot positions the plots in a regular
grid, axes allows free placement within the figure. Both can be useful depending on your intention. We've
already worked with figures and subplots without explicitly calling them. When we call plot, matplotlib calls
gca() to get the current axes and gca in turn calls gcf() to get the current figure. If there is none it calls
figure() to make one, strictly speaking, to make a subplot(111). Let's look at the details.
4.3.1 Figures
Tip: A figure is the window in the GUI that has Figure # as title. Figures are numbered starting from 1 as
opposed to the normal Python way starting from 0. This is clearly MATLAB-style. There are several parameters
that determine what the figure looks like:
Tip: The defaults can be specified in the resource file and will be used most of the time. Only the number of
the figure is frequently changed.
As with other objects, you can set figure properties with setp or with the set_something methods.
When you work with the GUI you can close a figure by clicking on the x in the upper right corner. But you can
close a figure programmatically by calling close. Depending on the argument it closes (1) the current figure
(no argument), (2) a specific figure (figure number or figure instance as argument), or (3) all figures ("all" as
argument).
4.3.2 Subplots
Tip: With subplot you can arrange plots in a regular grid. You need to specify the number of rows and columns
and the number of the plot. Note that the gridspec command is a more powerful alternative.
4.3.3 Axes
Axes are very similar to subplots but allow placement of plots at any location in the fig-
ure. So if we want to put a smaller plot inside a bigger one we do so with axes.
4.3.4 Ticks
Well formatted ticks are an important part of publishing-ready figures. Matplotlib provides a totally config-
urable system for ticks. There are tick locators to specify where ticks should appear and tick formatters to
give ticks the appearance you want. Major and minor ticks can be located and formatted independently from
each other. Per default minor ticks are not shown, i.e. there is only an empty list for them because it is a
NullLocator (see below).
Tick Locators
Tick locators control the positions of the ticks. They are set as follows:
ax = plt.gca()
ax.xaxis.set_major_locator(eval(locator))
All of these locators derive from the base class matplotlib.ticker.Locator. You can make your own
locator deriving from it. Handling dates as ticks can be especially tricky. Therefore, matplotlib provides special
locators in matplotlib.dates.
n = 256
X = np.linspace(-np.pi, np.pi, n, endpoint=True)
Y = np.sin(2 * X)
n = 1024
X = np.random.normal(0,1,n)
Y = np.random.normal(0,1,n)
plt.scatter(X,Y)
n = 12
X = np.arange(n)
Y1 = (1 - X / float(n)) * np.random.uniform(0.5, 1.0, n)
Y2 = (1 - X / float(n)) * np.random.uniform(0.5, 1.0, n)
plt.ylim(-1.25, +1.25)
n = 256
x = np.linspace(-3, 3, n)
y = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, y)
4.4.5 Imshow
Hint: You need to take care of the origin of the image in the imshow command and use a colorbar
n = 10
x = np.linspace(-3, 3, 4 * n)
y = np.linspace(-3, 3, 3 * n)
X, Y = np.meshgrid(x, y)
plt.imshow(f(X, Y))
Z = np.random.uniform(0, 1, 20)
plt.pie(Z)
n = 8
X, Y = np.mgrid[0:n, 0:n]
plt.quiver(X, Y)
4.4.8 Grids
axes = plt.gca()
axes.set_xlim(0, 4)
axes.set_ylim(0, 3)
axes.set_xticklabels([])
axes.set_yticklabels([])
plt.subplot(2, 2, 1)
plt.subplot(2, 2, 3)
plt.subplot(2, 2, 4)
plt.axes([0, 0, 1, 1])
N = 20
theta = np.arange(0., 2 * np.pi, 2 * np.pi / N)
radii = 10 * np.random.rand(N)
width = np.pi / 4 * np.random.rand(N)
bars = plt.bar(theta, radii, width=width, bottom=0.0)
4.4.11 3D Plots
fig = plt.figure()
ax = Axes3D(fig)
X = np.arange(-4, 4, 0.25)
Y = np.arange(-4, 4, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X**2 + Y**2)
Z = np.sin(R)
4.4.12 Text
Quick read
If you want to do a first quick pass through the Scipy lectures to learn the ecosystem, you can directly skip
to the next chapter: Scipy : high-level scientific computing.
The remainder of this chapter is not necessary to follow the rest of the intro part. But be sure to come back
and finish this chapter later.
Matplotlib benefits from extensive documentation as well as a large community of users and developers. Here
are some links of interest:
4.5.1 Tutorials
Pyplot tutorial
Introduction
Controlling line properties
Working with multiple figures and axes
Working with text
Image tutorial
Startup commands
Importing image data into Numpy arrays
Plotting numpy arrays as images
Text tutorial
Text introduction
Basic text commands
Text properties and layout
Writing mathematical expressions
Text rendering With LaTeX
Annotating text
Artist tutorial
Introduction
Customizing your objects
Object containers
Figure container
Axes container
Axis containers
Tick containers
Path tutorial
Introduction
Bézier example
Compound paths
Transforms tutorial
Introduction
Data coordinates
Axes coordinates
Blended transformations
Using offset transforms to create a shadow effect
The transformation pipeline
User guide
FAQ
Installation
Usage
How-To
Troubleshooting
Environment Variables
Screenshots
The code is well documented and you can quickly access a specific command from within a python session:
plot(*args, **kwargs)
Plot lines and/or markers to the
:class:`~matplotlib.axes.Axes`. *args* is a variable length
argument, allowing for multiple *x*, *y* pairs with an
optional format string. For example, each of the following is
legal::
4.5.4 Galleries
The matplotlib gallery is also incredibly useful when you search how to render a given graphic. Each example
comes with its source.
Finally, there is a user mailing list where you can ask for help and a developers mailing list that is more techni-
cal.
4.6.3 Markers
4.6.4 Colormaps
All colormaps can be reversed by appending _r. For instance, gray_r is the reverse of gray.
If you want to know more about colormaps, check Documenting the matplotlib colormaps.
The examples here are only examples relevant to the points raised in this chapter. The matplotlib documenta-
tion comes with a much more exhaustive gallery.
Pie chart
import numpy as np
import matplotlib.pyplot as plt
n = 20
Z = np.ones(n)
Z[-1] *= 2

plt.pie(Z)   # the styling arguments of the original call were lost; a plain pie keeps it runnable
plt.show()
import numpy as np
import matplotlib.pyplot as plt
n = 1024
X = np.random.normal(0, 1, n)
Y = np.random.normal(0, 1, n)
T = np.arctan2(Y, X)   # color by angle

# the scatter call was lost in extraction; marker size and alpha are illustrative
plt.scatter(X, Y, s=75, c=T, alpha=.5)

plt.xlim(-1.5, 1.5)
plt.xticks(())
plt.ylim(-1.5, 1.5)
plt.yticks(())
plt.show()
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
plt.show()
Subplots
fig = plt.figure()
fig.subplots_adjust(bottom=0.025, left=0.025, top = 0.975, right=0.975)
plt.subplot(2, 1, 1)
plt.xticks(()), plt.yticks(())
plt.subplot(2, 3, 4)
plt.xticks(())
plt.yticks(())
plt.subplot(2, 3, 5)
plt.xticks(())
plt.yticks(())
plt.subplot(2, 3, 6)
plt.xticks(())
plt.yticks(())
plt.show()
plt.show()
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
plt.figure(figsize=(6, 4))
plt.subplot(2, 1, 1)
plt.xticks(())
plt.yticks(())
plt.text(0.5, 0.5, 'subplot(2,1,1)', ha='center', va='center',
size=24, alpha=.5)
plt.subplot(2, 1, 2)
plt.xticks(())
plt.yticks(())
plt.text(0.5, 0.5, 'subplot(2,1,2)', ha='center', va='center',
size=24, alpha=.5)
plt.tight_layout()
plt.show()
plt.figure(figsize=(6, 4))
plt.subplot(1, 2, 1)
plt.xticks(())
plt.yticks(())
plt.text(0.5, 0.5, 'subplot(1,2,1)', ha='center', va='center',
size=24, alpha=.5)
plt.subplot(1, 2, 2)
plt.xticks(())
plt.yticks(())
plt.text(0.5, 0.5, 'subplot(1,2,2)', ha='center', va='center',
size=24, alpha=.5)
plt.tight_layout()
plt.show()
3D plotting
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
X = np.arange(-4, 4, 0.25)
Y = np.arange(-4, 4, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X ** 2 + Y ** 2)
Z = np.sin(R)

# draw the surface (the original styling arguments were lost; these are illustrative)
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, cmap='hot')

plt.show()
Imshow elaborate
import numpy as np
import matplotlib.pyplot as plt
def f(x, y):
    return (1 - x / 2 + x ** 5 + y ** 3) * np.exp(-x ** 2 - y ** 2)

n = 10
x = np.linspace(-3, 3, int(3.5 * n))
y = np.linspace(-3, 3, int(3.0 * n))
X, Y = np.meshgrid(x, y)
Z = f(X, Y)

plt.imshow(Z, interpolation='nearest', cmap='bone', origin='lower')
plt.xticks(())
plt.yticks(())
plt.show()
A simple example showing how to plot a vector field (quiver) with matplotlib.
import numpy as np
import matplotlib.pyplot as plt
n = 8
X, Y = np.mgrid[0:n, 0:n]
T = np.arctan2(Y - n / 2., X - n / 2.)
R = 10 + np.sqrt((Y - n / 2.0) ** 2 + (X - n / 2.0) ** 2)
U, V = R * np.cos(T), R * np.sin(T)

# the quiver calls were lost in extraction; these match the full example later in this chapter
plt.quiver(X, Y, U, V, R, alpha=.5)
plt.quiver(X, Y, U, V, edgecolor='k', facecolor='None', linewidth=.5)

plt.xlim(-1, n)
plt.xticks(())
plt.ylim(-1, n)
plt.yticks(())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
ax = plt.subplot(111, polar=True)   # polar axes, as in the full example later in this chapter

N = 20
theta = np.arange(0.0, 2 * np.pi, 2 * np.pi / N)
radii = 10 * np.random.rand(N)
width = np.pi / 4 * np.random.rand(N)
bars = plt.bar(theta, radii, width=width, bottom=0.0)

ax.set_xticklabels([])
ax.set_yticklabels([])
plt.show()
import numpy as np
import matplotlib.pyplot as plt
def f(x, y):
    return (1 - x / 2 + x**5 + y**3) * np.exp(-x**2 - y**2)

n = 256
x = np.linspace(-3, 3, n)
y = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, y)

# filled contours and contour lines (the original drawing commands were lost; styling is illustrative)
plt.contourf(X, Y, f(X, Y), 8, alpha=.75, cmap='hot')
plt.contour(X, Y, f(X, Y), 8, colors='black', linewidths=.5)

plt.xticks(())
plt.yticks(())
plt.show()
import numpy as np
import matplotlib
matplotlib.use('Agg')
import matplotlib.pyplot as plt
fig = plt.figure(figsize=(5,4),dpi=72)
axes = fig.add_axes([0.01, 0.01, .98, 0.98], axisbg='.75')
X = np.linspace(0, 2, 40, endpoint=True)
Y = np.sin(2 * np.pi * X)
plt.plot(X, Y, lw=.05, c='b', antialiased=False)
plt.xticks(())
plt.yticks(np.arange(-1., 1., 0.2))
plt.grid()
ax = plt.gca()
plt.show()
import numpy as np
import matplotlib.pyplot as plt
n = 256
X = np.linspace(-np.pi, np.pi, n, endpoint=True)
Y = np.sin(2 * X)
plt.xlim(-np.pi, np.pi)
plt.xticks(())
plt.ylim(-2.5, 2.5)
plt.yticks(())
plt.show()
Bar plots
import numpy as np
import matplotlib.pyplot as plt
n = 12
X = np.arange(n)
Y1 = (1 - X / float(n)) * np.random.uniform(0.5, 1.0, n)
Y2 = (1 - X / float(n)) * np.random.uniform(0.5, 1.0, n)

# the bar calls were lost in extraction; these match the full example later in this chapter
plt.bar(X, +Y1, facecolor='#9999ff', edgecolor='white')
plt.bar(X, -Y2, facecolor='#ff9999', edgecolor='white')

plt.xlim(-.5, n)
plt.xticks(())
plt.ylim(-1.25, 1.25)
plt.yticks(())
plt.show()
Subplot grid
plt.figure(figsize=(6, 4))
plt.subplot(2, 2, 1)
plt.xticks(())
plt.yticks(())
plt.text(0.5, 0.5, 'subplot(2,2,1)', ha='center', va='center',
size=20, alpha=.5)
plt.subplot(2, 2, 2)
plt.xticks(())
plt.yticks(())
plt.text(0.5, 0.5, 'subplot(2,2,2)', ha='center', va='center',
size=20, alpha=.5)
plt.subplot(2, 2, 3)
plt.xticks(())
plt.yticks(())
plt.subplot(2, 2, 4)
plt.xticks(())
plt.yticks(())
plt.text(0.5, 0.5, 'subplot(2,2,4)', ha='center', va='center',
size=20, alpha=.5)
plt.tight_layout()
plt.show()
Axes
plt.xticks(())
plt.yticks(())
plt.text(0.1, 0.1, 'axes([0.3, 0.3, .5, .5])', ha='left', va='center',
size=16, alpha=.5)
plt.show()
Grid
ax.set_xlim(0,4)
ax.set_ylim(0,3)
ax.xaxis.set_major_locator(plt.MultipleLocator(1.0))
ax.xaxis.set_minor_locator(plt.MultipleLocator(0.1))
ax.yaxis.set_major_locator(plt.MultipleLocator(1.0))
ax.yaxis.set_minor_locator(plt.MultipleLocator(0.1))
ax.grid(which='major', axis='x', linewidth=0.75, linestyle='-', color='0.75')
ax.grid(which='minor', axis='x', linewidth=0.25, linestyle='-', color='0.75')
ax.grid(which='major', axis='y', linewidth=0.75, linestyle='-', color='0.75')
ax.grid(which='minor', axis='y', linewidth=0.25, linestyle='-', color='0.75')
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.show()
3D plotting
from mpl_toolkits.mplot3d import axes3d

ax = plt.gca(projection='3d')
X, Y, Z = axes3d.get_test_data(0.05)
cset = ax.contourf(X, Y, Z)
plt.xticks(())
plt.yticks(())
ax.set_zticks(())
plt.show()
GridSpec
import matplotlib.pyplot as plt
from matplotlib import gridspec

plt.figure(figsize=(6, 4))
G = gridspec.GridSpec(3, 3)
plt.tight_layout()
plt.show()
import numpy as np
import matplotlib.pyplot as plt
eqs = []
eqs.append((r"$W^{3\beta}_{\delta_1 \rho_1 \sigma_2} = U^{3\beta}_{\delta_1 \rho_1} + \frac{1}
,{8 \pi 2} \int^{\alpha_2}_{\alpha_2} d \alpha^\prime_2 \left[\frac{ U^{2\beta}_{\delta_1
,\rho_1} - \alpha^\prime_2U^{1\beta}_{\rho_1 \sigma_2} }{U^{0\beta}_{\rho_1 \sigma_2}}\right]$
,"))
eqs.append((r"$\frac{d\rho}{d t} + \rho \vec{v}\cdot\nabla\vec{v} = -\nabla p + \mu\nabla^2
,\vec{v} + \rho \vec{g}$"))
eqs.append((r"$\int_{-\infty}^\infty e^{-x^2}dx=\sqrt{\pi}$"))
eqs.append((r"$E = mc^2 = \sqrt{{m_0}^2c^4 + p^2c^2}$"))
eqs.append((r"$F_G = G\frac{m_1m_2}{r^2}$"))
for i in range(24):
    index = np.random.randint(0, len(eqs))
    eq = eqs[index]
    size = np.random.uniform(12, 32)
    x, y = np.random.uniform(0, 1, 2)
    alpha = np.random.uniform(0.25, .75)
    plt.text(x, y, eq, ha='center', va='center', color="#11557c", alpha=alpha,
             transform=plt.gca().transAxes, fontsize=size, clip_on=True)
plt.xticks(())
plt.yticks(())
plt.show()
Exercise 1
import numpy as np
import matplotlib.pyplot as plt
n = 256
X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C,S = np.cos(X), np.sin(X)
plt.plot(X, C)
plt.plot(X,S)
plt.show()
Exercise 4
import numpy as np
import matplotlib.pyplot as plt
plt.show()
Exercise 3
import numpy as np
import matplotlib.pyplot as plt
plt.xlim(-4.0, 4.0)
plt.xticks(np.linspace(-4, 4, 9, endpoint=True))
plt.ylim(-1.0, 1.0)
plt.yticks(np.linspace(-1, 1, 5, endpoint=True))
plt.show()
Exercise 5
import numpy as np
import matplotlib.pyplot as plt
plt.show()
Exercise 6
import numpy as np
import matplotlib.pyplot as plt
plt.show()
Exercise 2
import numpy as np
import matplotlib.pyplot as plt

X = np.linspace(-np.pi, np.pi, 256, endpoint=True)
C, S = np.cos(X), np.sin(X)

# Create a new figure of size 8x6 inches, using 80 dots per inch
plt.figure(figsize=(8, 6), dpi=80)
# Plot cosine using blue color with a continuous line of width 1 (pixels)
plt.plot(X, C, color="blue", linewidth=1.0, linestyle="-")
# Plot sine using green color with a continuous line of width 1 (pixels)
plt.plot(X, S, color="green", linewidth=1.0, linestyle="-")
# Set x limits
plt.xlim(-4., 4.)
# Set x ticks
plt.xticks(np.linspace(-4, 4, 9, endpoint=True))
# Set y limits
plt.ylim(-1.0, 1.0)
# Set y ticks
plt.yticks(np.linspace(-1, 1, 5, endpoint=True))
plt.show()
Exercise 7
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(8,5), dpi=80)
plt.subplot(111)
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.show()
Exercise 8
import numpy as np
import matplotlib.pyplot as plt
plt.figure(figsize=(8,5), dpi=80)
plt.subplot(111)
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
plt.legend(loc='upper left')
plt.show()
Exercise 9
import numpy as np
import matplotlib.pyplot as plt
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data',0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data',0))
t = 2*np.pi/3
plt.plot([t, t], [0, np.cos(t)],
color='blue', linewidth=1.5, linestyle="--")
plt.scatter([t, ], [np.cos(t), ], 50, color='blue')
plt.annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
xy=(t, np.sin(t)), xycoords='data',
xytext=(+10, +30), textcoords='offset points', fontsize=16,
arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plt.legend(loc='upper left')
plt.show()
Exercise
import numpy as np
import matplotlib.pyplot as plt
ax = plt.gca()
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
ax.xaxis.set_ticks_position('bottom')
ax.spines['bottom'].set_position(('data', 0))
ax.yaxis.set_ticks_position('left')
ax.spines['left'].set_position(('data', 0))
plt.legend(loc='upper left')
t = 2*np.pi/3
plt.plot([t, t], [0, np.cos(t)],
color='blue', linewidth=1.5, linestyle="--")
plt.scatter([t, ], [np.cos(t), ], 50, color='blue')
plt.annotate(r'$sin(\frac{2\pi}{3})=\frac{\sqrt{3}}{2}$',
             xy=(t, np.sin(t)), xycoords='data',
             xytext=(+10, +30), textcoords='offset points', fontsize=16,
             arrowprops=dict(arrowstyle="->", connectionstyle="arc3,rad=.2"))
plt.show()
size = 256, 16
dpi = 72.0
figsize = size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0.1, 1, .8], frameon=False)
for i in range(1, 11):
    plt.plot([i, i], [0, 1], lw=1.5)
plt.xlim(0, 11)
plt.xticks(())
plt.yticks(())
plt.show()
Linewidth
size = 256, 16
dpi = 72.0
figsize = size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, .1, 1, .8], frameon=False)
plt.xlim(0, 11)
plt.ylim(0, 1)
plt.xticks(())
plt.yticks(())
plt.show()
Alpha: transparency
size = 256,16
dpi = 72.0
figsize= size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0.1, 1, .8], frameon=False)
plt.xlim(0, 11)
plt.xticks(())
plt.yticks(())
plt.show()
size = 128, 16
dpi = 72.0
figsize= size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.rcParams['text.antialiased'] = False
plt.text(0.5, 0.5, "Aliased", ha='center', va='center')
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.xticks(())
plt.yticks(())
plt.show()
size = 128, 16
dpi = 72.0
figsize= size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0, 1, 1], frameon=False)
plt.rcParams['text.antialiased'] = True
plt.text(0.5, 0.5, "Anti-aliased", ha='center', va='center')
plt.xlim(0, 1)
plt.ylim(0, 1)
plt.xticks(())
plt.yticks(())
plt.show()
Marker size
size = 256, 16
dpi = 72.0
figsize = size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0, 1, 1], frameon=False)
plt.xlim(0, 11)
plt.xticks(())
plt.yticks(())
plt.show()
size = 256, 16
dpi = 72.0
figsize= size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0, 1, 1], frameon=False)
for i in range(1, 11):
    plt.plot([i, ], [1, ], 's', markersize=5,
             markeredgewidth=1 + i/10., markeredgecolor='k', markerfacecolor='w')
plt.xlim(0, 11)
plt.xticks(())
plt.yticks(())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
size = 256,16
dpi = 72.0
figsize= size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0, 1, 1], frameon=False)
plt.xlim(0, 11)
plt.xticks(())
plt.yticks(())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
size = 256, 16
dpi = 72.0
figsize = size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0, 1, 1], frameon=False)
Colormaps
import numpy as np
import matplotlib.pyplot as plt
plt.rc('text', usetex=False)
a = np.outer(np.arange(0, 1, 0.01), np.ones(10))
plt.figure(figsize=(10, 5))
plt.subplots_adjust(top=0.8, bottom=0.05, left=0.01, right=0.99)
maps = [m for m in plt.cm.datad if not m.endswith("_r")]
maps.sort()
l = len(maps) + 1
for i, m in enumerate(maps):
    plt.subplot(1, l, i+1)
    plt.axis("off")
    plt.imshow(a, aspect='auto', cmap=plt.get_cmap(m), origin="lower")
    plt.title(m, rotation=90, fontsize=10, va='bottom')
plt.show()
import numpy as np
import matplotlib.pyplot as plt
size = 256, 16
dpi = 72.0
figsize= size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0, 1, 1], frameon=False)
plt.xlim(0, 14)
plt.xticks(())
plt.yticks(())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
size = 256, 16
dpi = 72.0
figsize = size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0, 1, 1], frameon=False)
plt.xlim(0, 12)
plt.ylim(-1, 2)
plt.xticks(())
plt.yticks(())
plt.show()
Dash capstyle
import numpy as np
import matplotlib.pyplot as plt
size = 256, 16
dpi = 72.0
figsize = size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0, 1, 1], frameon=False)
plt.xlim(0, 14)
plt.xticks(())
plt.yticks(())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
size = 256, 16
dpi = 72.0
figsize= size[0] / float(dpi), size[1] / float(dpi)
fig = plt.figure(figsize=figsize, dpi=dpi)
fig.patch.set_alpha(0)
plt.axes([0, 0, 1, 1], frameon=False)
plt.xlim(0, 12)
plt.ylim(-1, 2)
plt.xticks(())
plt.yticks(())
plt.show()
Linestyles
import numpy as np
import matplotlib.pyplot as plt
linestyles = ['-', '--', ':', '-.', '.', ',', 'o', '^', 'v', '<', '>', 's',
'+', 'x', 'd', '1', '2', '3', '4', 'h', 'p', '|', '_', 'D', 'H']
n_lines = len(linestyles)
for i, ls in enumerate(linestyles):
    linestyle(ls, i)
plt.xlim(-.2, .2 + .5*n_lines)
plt.xticks(())
plt.yticks(())
plt.show()
Markers
import numpy as np
import matplotlib.pyplot as plt
n_markers = len(markers)
for i, m in enumerate(markers):
    marker(m, i)
plt.xlim(-.2, .2 + .5 * n_markers)
plt.xticks(())
plt.yticks(())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
def tickline():
    plt.xlim(0, 10), plt.ylim(-1, 1), plt.yticks([])
    ax = plt.gca()
    ax.spines['right'].set_color('none')
    ax.spines['left'].set_color('none')
    ax.spines['top'].set_color('none')
    ax.xaxis.set_ticks_position('bottom')
    ax.spines['bottom'].set_position(('data', 0))
    ax.yaxis.set_ticks_position('none')
    ax.xaxis.set_minor_locator(plt.MultipleLocator(0.1))
    ax.plot(np.arange(11), np.zeros(11))
    return ax
locators = [
'plt.NullLocator()',
'plt.MultipleLocator(1.0)',
'plt.FixedLocator([0, 2, 8, 9, 10])',
'plt.IndexLocator(3, 1)',
'plt.LinearLocator(5)',
'plt.LogLocator(2, [1.0])',
'plt.AutoLocator()',
]
n_locators = len(locators)
import numpy as np
import matplotlib.pyplot as plt
plt.subplot(1, 1, 1, polar=True)
N = 20
theta = np.arange(0.0, 2 * np.pi, 2 * np.pi / N)
radii = 10 * np.random.rand(N)
width = np.pi / 4 * np.random.rand(N)
bars = plt.bar(theta, radii, width=width, bottom=0.0)
for r, bar in zip(radii, bars):
    bar.set_facecolor(plt.cm.jet(r / 10.))
    bar.set_alpha(0.5)
plt.gca().set_xticklabels([])
plt.gca().set_yticklabels([])
plt.show()
3D plotting vignette
import numpy as np
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
ax = Axes3D(fig)
X = np.arange(-4, 4, 0.25)
Y = np.arange(-4, 4, 0.25)
X, Y = np.meshgrid(X, Y)
R = np.sqrt(X ** 2 + Y ** 2)
Z = np.sin(R)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
n = 256
X = np.linspace(0, 2, n)
Y = np.sin(2 * np.pi * X)
plt.show()
ax = plt.subplot(2, 1, 1)
ax.set_xticklabels([])
ax.set_yticklabels([])
ax = plt.subplot(2, 2, 3)
ax.set_xticklabels([])
ax.set_yticklabels([])
ax = plt.subplot(2, 2, 4)
ax.set_xticklabels([])
ax.set_yticklabels([])
plt.show()
import numpy as np
import matplotlib.pyplot as plt
n = 5
Z = np.zeros((n, 4))
X = np.linspace(0, 2, n, endpoint=True)
Y = np.random.random((n, 4))
plt.boxplot(Y)
plt.xticks(())
plt.yticks(())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
n = 1024
X = np.random.normal(0, 1, n)
Y = np.random.normal(0, 1, n)
T = np.arctan2(Y,X)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
n = 20
X = np.ones(n)
X[-1] *= 2
plt.pie(X, explode=X*.05, colors = ['%f ' % (i/float(n)) for i in range(n)])
fig = plt.gcf()
w, h = fig.get_figwidth(), fig.get_figheight()
r = h / float(w)
plt.xlim(-1.5, 1.5)
plt.ylim(-1.5 * r, 1.5 * r)
plt.xticks(())
plt.yticks(())
plt.show()
import numpy as np
import matplotlib.pyplot as plt
n = 16
X = np.arange(n)
Y1 = (1 - X / float(n)) * np.random.uniform(0.5, 1.0, n)
Y2 = (1 - X / float(n)) * np.random.uniform(0.5, 1.0, n)
plt.bar(X, Y1, facecolor='#9999ff', edgecolor='white')
plt.bar(X, -Y2, facecolor='#ff9999', edgecolor='white')
plt.xlim(-.5, n)
plt.xticks(())
plt.ylim(-1, 1)
plt.yticks(())
horizontalalignment='left',
verticalalignment='top',
size='large',
transform=plt.gca().transAxes)
plt.show()
import numpy as np
import matplotlib.pyplot as plt
n = 8
X, Y = np.mgrid[0:n, 0:n]
T = np.arctan2(Y - n/ 2., X - n / 2.)
R = 10 + np.sqrt((Y - n / 2.) ** 2 + (X - n / 2.) ** 2)
U, V = R * np.cos(T), R * np.sin(T)
plt.quiver(X, Y, U, V, R, alpha=.5)
plt.quiver(X, Y, U, V, edgecolor='k', facecolor='None', linewidth=.5)
plt.xlim(-1, n)
plt.xticks(())
plt.ylim(-1, n)
plt.yticks(())
plt.show()
Imshow demo
Demoing imshow
import numpy as np
import matplotlib.pyplot as plt
n = 10
x = np.linspace(-3, 3, 8 * n)
y = np.linspace(-3, 3, 6 * n)
X, Y = np.meshgrid(x, y)
Z = f(X, Y)
plt.imshow(Z, interpolation='nearest', cmap='bone', origin='lower')
plt.xticks(())
plt.yticks(())
plt.show()
An example demoing how to plot the contours of a function, with additional layout tweeks.
import numpy as np
import matplotlib.pyplot as plt
def f(x, y):
    return (1 - x / 2 + x ** 5 + y ** 3) * np.exp(-x ** 2 - y ** 2)
n = 256
x = np.linspace(-3, 3, n)
y = np.linspace(-3, 3, n)
X, Y = np.meshgrid(x, y)
plt.text(-0.05, 1.01, "\n\n Draw contour lines and filled contours ",
horizontalalignment='left',
verticalalignment='top',
size='large',
transform=plt.gca().transAxes)
plt.show()
Grid elaborate
axes.xaxis.set_major_locator(MultipleLocator(1.0))
axes.xaxis.set_minor_locator(MultipleLocator(0.1))
axes.yaxis.set_major_locator(MultipleLocator(1.0))
axes.yaxis.set_minor_locator(MultipleLocator(0.1))
axes.grid(which='major', axis='x', linewidth=0.75, linestyle='-', color='0.75')
axes.grid(which='minor', axis='x', linewidth=0.25, linestyle='-', color='0.75')
axes.grid(which='major', axis='y', linewidth=0.75, linestyle='-', color='0.75')
axes.grid(which='minor', axis='y', linewidth=0.25, linestyle='-', color='0.75')
axes.set_xticklabels([])
axes.set_yticklabels([])
verticalalignment='top',
size='xx-large',
transform=axes.transAxes)
import numpy as np
import matplotlib.pyplot as plt
fig = plt.figure()
plt.xticks(())
plt.yticks(())
eqs = []
eqs.append((r"$W^{3\beta}_{\delta_1 \rho_1 \sigma_2} = U^{3\beta}_{\delta_1 \rho_1} + \frac{1}
,{8 \pi 2} \int^{\alpha_2}_{\alpha_2} d \alpha^\prime_2 \left[\frac{ U^{2\beta}_{\delta_1
,\rho_1} - \alpha^\prime_2U^{1\beta}_{\rho_1 \sigma_2} }{U^{0\beta}_{\rho_1 \sigma_2}}\right]$
,"))
for i in range(24):
    index = np.random.randint(0, len(eqs))
    eq = eqs[index]
    size = np.random.uniform(12, 32)
    x, y = np.random.uniform(0, 1, 2)
    alpha = np.random.uniform(0.25, .75)
    plt.text(x, y, eq, ha='center', va='center', color="#11557c", alpha=alpha,
             transform=plt.gca().transAxes, fontsize=size, clip_on=True)
plt.show()
Authors: Gaël Varoquaux, Adrien Chauve, André Espaze, Emmanuelle Gouillart, Ralf Gommers
Scipy
The scipy package contains various toolboxes dedicated to common issues in scientific computing. Its
different submodules correspond to different applications, such as interpolation, integration, optimization,
image processing, statistics, special functions, etc.
Tip: scipy can be compared to other standard scientific-computing libraries, such as the GSL (GNU Scientific
Library for C and C++), or Matlab's toolboxes. scipy is the core package for scientific routines in Python; it is
meant to operate efficiently on numpy arrays, so that numpy and scipy work hand in hand.
Before implementing a routine, it is worth checking if the desired data processing is not already implemented
in Scipy. As non-professional programmers, scientists often tend to re-invent the wheel, which leads to buggy,
non-optimal, difficult-to-share and unmaintainable code. By contrast, Scipy's routines are optimized and
tested, and should therefore be used when possible.
Chapters contents
Warning: This tutorial is far from an introduction to numerical computing. As enumerating the different
submodules and functions in scipy would be very boring, we concentrate instead on a few examples to give
a general idea of how to use scipy for scientific computing.
Tip: They all depend on numpy, but are mostly independent of each other. The standard way of importing
Numpy and these Scipy modules is:
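For instance (any other sub-module is imported the same way):
>>> import numpy as np
>>> from scipy import stats    # same for other sub-modules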
The main scipy namespace mostly contains functions that are really numpy functions (try scipy.cos is
np.cos). Those are exposed for historical reasons; there's no reason to use import scipy in your code.
>>> data['a']
array([[ 1., 1., 1.],
[ 1., 1., 1.],
[ 1., 1., 1.]])
See also:
Load text files: numpy.loadtxt()/numpy.savetxt()
Clever loading of text/csv files: numpy.genfromtxt()/numpy.recfromcsv()
Fast and efficient, but numpy-specific, binary format: numpy.save()/numpy.load()
More advanced input/output of images in scikit-image: skimage.io
Special functions are transcendental functions. The docstring of the scipy.special module is well-written,
so we won't list all functions here. Frequently used ones are:
Bessel function, such as scipy.special.jn() (nth integer order Bessel function)
Elliptic function (scipy.special.ellipj() for the Jacobian elliptic function, . . . )
Gamma function: scipy.special.gamma(), also note scipy.special.gammaln() which will give the
log of Gamma to a higher numerical precision.
Erf, the area under a Gaussian curve: scipy.special.erf()
Tip: The scipy.linalg module provides standard linear algebra operations, relying on an underlying effi-
cient implementation (BLAS, LAPACK).
Finally computing the inverse of a singular matrix (its determinant is zero) will raise LinAlgError:
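A short sketch of that behavior (the array values are chosen for illustration):
>>> from scipy import linalg
>>> arr = np.array([[3, 2],
...                 [6, 4]])
>>> linalg.det(arr)       # the matrix is singular
0.0
>>> linalg.inv(arr)       # so inverting it fails
Traceback (most recent call last):
...
LinAlgError: singular matrix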
More advanced operations are available, for example singular-value decomposition (SVD):
>>> spec
array([ 14.88982544, 0.45294236, 0.29654967])
The original matrix can be re-composed by matrix multiplication of the outputs of svd with np.dot:
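A sketch of the decomposition and recomposition; the matrix arr below is reconstructed to match the singular values spec shown above:
>>> arr = np.arange(9).reshape((3, 3)) + np.diag([1, 0, 1])
>>> uarr, spec, vharr = linalg.svd(arr)
>>> sarr = np.diag(spec)
>>> svd_mat = uarr.dot(sarr).dot(vharr)
>>> np.allclose(svd_mat, arr)
True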
SVD is commonly used in statistics and signal processing. Many other standard decompositions (QR,
LU, Cholesky, Schur), as well as solvers for linear systems, are available in scipy.linalg.
scipy.interpolate is useful for fitting a function from experimental data and thus evaluating points where
no measure exists. The module is based on the FITPACK Fortran subroutines.
By imagining experimental data close to a sine function:
A cubic interpolation can also be selected by providing the kind optional keyword argument:
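A minimal sketch of this workflow with scipy.interpolate.interp1d (variable names are illustrative):
>>> from scipy.interpolate import interp1d
>>> measured_time = np.linspace(0, 1, 10)
>>> noise = (np.random.random(10) * 2 - 1) * 1e-1
>>> measures = np.sin(2 * np.pi * measured_time) + noise
>>> linear_interp = interp1d(measured_time, measures)
>>> interpolation_time = np.linspace(0, 1, 50)
>>> linear_results = linear_interp(interpolation_time)
>>> cubic_interp = interp1d(measured_time, measures, kind='cubic')
>>> cubic_results = cubic_interp(interpolation_time)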
Tip: The scipy.optimize module provides algorithms for function minimization (scalar or multi-
dimensional), curve fitting and root finding.
If we know that the data lies on a sine wave, but not the amplitudes or the period, we can find those by least
squares curve fitting. First we have to define the test function to fit, here a sine with unknown amplitude and
period:
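A sketch with scipy.optimize.curve_fit; x_data and y_data stand for the (hypothetical) noisy measurements:
>>> from scipy import optimize
>>> def test_func(x, a, b):
...     return a * np.sin(b * x)
>>> params, params_covariance = optimize.curve_fit(test_func, x_data, y_data, p0=[2, 2])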
The temperature extremes in Alaska for each month, starting in January, are given by (in degrees
Celsius):
max: 17, 19, 21, 28, 33, 38, 37, 37, 31, 23, 19, 18
min: -62, -59, -56, -46, -32, -18, -9, -13, -25, -46, -52, -58
5. Is the time offset for min and max temperatures the same within the fit accuracy?
solution
This function has a global minimum around -1.3 and a local minimum around 3.8.
Searching for a minimum can be done with scipy.optimize.minimize(): given a starting point x0, it returns
the location of the minimum that it has found:
result type
The result of scipy.optimize.minimize() is a compound object comprising all information on the con-
vergence
Methods: As the function is a smooth function, gradient-descent based methods are good options. The L-BFGS
algorithm is a good choice in general:
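A sketch of the call, using the function f(x) = x**2 + 10*sin(x) that this section studies (the minimum matches the basinhopping output below):
>>> def f(x):
...     return x**2 + 10*np.sin(x)
>>> from scipy import optimize
>>> result = optimize.minimize(f, x0=0, method="L-BFGS-B")
>>> result.x
array([-1.30644...])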
Note how it took only 12 function evaluations above to find a good value for the minimum.
Global minimum: A possible issue with this approach is that, if the function has local minima, the algorithm
may find these local minima instead of the global minimum depending on the initial point x0:
If we dont know the neighborhood of the global minimum to choose the initial point, we need to resort to
costlier global optimization. To find the global minimum, we use scipy.optimize.basinhopping() (added
in version 0.12.0 of Scipy). It combines a local optimizer with sampling of starting points:
>>> optimize.basinhopping(f, 0)
nfev: 1725
minimization_failures: 0
fun: -7.9458233756152845
x: array([-1.30644001])
message: ['requested number of basinhopping iterations completed successfully']
njev: 575
nit: 100
Note: scipy used to contain the routine anneal; it has been removed in SciPy 0.16.0.
Constraints: We can constrain the variable to the interval (0, 10) using the bounds argument:
A list of bounds
As minimize() works in general with x multidimensional, the bounds argument is a list of bounds, one per
dimension.
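A sketch of the bounded call (the bound values follow the text above):
>>> res = optimize.minimize(f, x0=1, bounds=((0, 10), ))
>>> res.x
array([ 0.])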
Tip: What has happened? Why are we finding 0, which is not a minimum of our function?
To minimize over several variables, the trick is to turn them into a function of a multi-dimensional variable
(a vector). See for instance the exercise on 2D minimization below.
See also:
Finding minima of function is discussed in more details in the advanced chapter: Mathematical optimization:
finding minima of functions.
To find a root, i.e. a point where f (x) = 0, of the function f above we can use scipy.optimize.root():
Note that only one root is found. Inspecting the plot of f reveals that there is a second root around -2.5. We
find the exact value of it by adjusting our initial guess:
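A sketch of the two calls (the second root value is approximate):
>>> root = optimize.root(f, x0=1)       # our initial guess is 1
>>> root.x                              # only one root is found
array([ 0.])
>>> root2 = optimize.root(f, x0=-2.5)   # adjusting the initial guess finds the other root
>>> root2.x
array([-2.47948183])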
Note: scipy.optimize.root() also comes with a variety of algorithms, set via the method argument.
Now that we have found the minima and roots of f and used curve fitting on it, we put all those results together
in a single plot:
See also:
You can find all algorithms and functions with similar functionalities in the documentation of scipy.
optimize.
See the summary exercise on Non linear least squares curve fitting: application to point extraction in topo-
graphical lidar data for another, more advanced example.
The module scipy.stats contains statistical tools and probabilistic descriptions of random processes. Ran-
dom number generators for various random process can be found in numpy.random.
Given observations of a random process, their histogram is an estimator of the random processs PDF (proba-
bility density function):
If we know that the random process belongs to a given family of random processes, such as normal processes,
we can do a maximum-likelihood fit of the observations to estimate the parameters of the underlying distri-
bution. Here we fit a normal process to the observed data:
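A sketch of these two steps (the sample size and bin edges are illustrative):
>>> samples = np.random.normal(size=1000)
>>> bins = np.arange(-4, 5)
>>> histogram = np.histogram(samples, bins=bins, density=True)[0]
>>> from scipy import stats
>>> loc, std = stats.norm.fit(samples)   # maximum-likelihood estimate of mean and std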
Generate 1000 random variates from a gamma distribution with a shape parameter of 1, then plot a his-
togram from those samples. Can you plot the pdf on top (it should match)?
Extra: the distributions have many useful methods. Explore them by reading the docstring or by using tab
completion. Can you recover the shape parameter 1 by using the fit method on your random variates?
>>> np.mean(samples)
-0.0452567074...
The median is another estimator of the center. It is the value with half of the observations below, and half above:
>>> np.median(samples)
-0.0580280347...
Tip: Unlike the mean, the median is not sensitive to the tails of the distribution. It is robust.
Which one seems to be the best estimator of the center for the Gamma distribution?
The median is also the percentile 50, because 50% of the observations are below it:
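For instance with scipy.stats.scoreatpercentile (the value matches the median shown above):
>>> stats.scoreatpercentile(samples, 50)
-0.0580280347...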
See also:
The chapter on statistics introduces much more elaborate tools for statistical testing and statistical data load-
ing and visualization outside of scipy.
scipy.integrate also features routines for integrating Ordinary Differential Equations (ODE). In particular,
scipy.integrate.odeint() solves ODEs of the form dy/dt = rhs(y, t, ...).
As an introduction, let us solve the ODE dy/dt = -2y between t = 0 . . . 4, with the initial condition y(t = 0) = 1.
First the function computing the derivative of the position needs to be defined:
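A sketch of the corresponding code (function and variable names are illustrative):
>>> from scipy.integrate import odeint
>>> def calc_derivative(ypos, time):
...     return -2 * ypos
>>> time_vec = np.linspace(0, 4, 40)
>>> y = odeint(calc_derivative, y0=1, t=time_vec)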
Let us integrate a more complex ODE: a damped spring-mass oscillator. The position of a mass attached to a
spring obeys the 2nd order ODE y'' + 2εω₀y' + ω₀²y = 0, with ω₀² = k/m (k the spring constant, m the mass)
and ε = c/(2mω₀) (c the damping coefficient). We set:
Hence:
For odeint(), the 2nd order equation needs to be transformed into a system of two first-order equations for the
vector Y = (y, y'): the function computes the velocity and acceleration:
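A sketch of such a function and the corresponding call; the eps and omega values below are purely illustrative, since the original settings were lost:
>>> def calc_deri(yvec, time, eps, omega):
...     return (yvec[1], -2.0 * eps * omega * yvec[1] - omega ** 2 * yvec[0])
>>> eps, omega = 0.1, 2 * np.pi          # illustrative values
>>> time_vec = np.linspace(0, 10, 100)
>>> yarr = odeint(calc_deri, (1, 0), time_vec, args=(eps, omega))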
Tip: scipy.integrate.odeint() uses the LSODA (Livermore Solver for Ordinary Differential equations
with Automatic method switching for stiff and non-stiff problems), see the ODEPACK Fortran library for more
details.
See also:
Partial Differential Equations
There is no Partial Differential Equations (PDE) solver in Scipy. Some Python packages for solving PDEs are
available, such as fipy or SfePy.
The scipy.fftpack module computes fast Fourier transforms (FFTs) and offers utilities to handle them. The
main functions are:
scipy.fftpack.fft() to compute the FFT
scipy.fftpack.fftfreq() to generate the sampling frequencies
scipy.fftpack.ifft() computes the inverse FFT, from frequency space to signal space
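A short sketch putting the first two together on a synthetic signal (the time step and period are illustrative; the peak-frequency remark below refers to these names):
>>> from scipy import fftpack
>>> time_step = 0.02
>>> time_vec = np.arange(0, 20, time_step)
>>> sig = np.sin(2 * np.pi / 5. * time_vec)     # a periodic signal, period 5 s
>>> sig_fft = fftpack.fft(sig)
>>> freqs = fftpack.fftfreq(sig.size, d=time_step)
>>> power = np.abs(sig_fft)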
Signal FFT
As the signal comes from a real function, the Fourier transform is symmetric.
The peak signal frequency can be found with freqs[power.argmax()]
numpy.fft
Numpy also has an implementation of FFT (numpy.fft). However, the scipy one should be preferred, as it
uses more efficient underlying implementations.
1. Examine the provided image moonlanding.png, which is heavily contaminated with periodic noise.
In this exercise, we aim to clean up the noise using the Fast Fourier Transform.
2. Load the image using pylab.imread().
3. Find and use the 2-D FFT function in scipy.fftpack, and plot the spectrum (Fourier transform of )
the image. Do you have any trouble visualising the spectrum? If so, why?
4. The spectrum consists of high and low frequency components. The noise is contained in the high-
frequency part of the spectrum, so set some of those components to zero (use array slicing).
5. Apply the inverse Fourier transform to see the resulting image.
Solution
>>> plt.plot(t, x)
[<matplotlib.lines.Line2D object at ...>]
>>> plt.plot(t[::4], x_resampled, 'ko')
[<matplotlib.lines.Line2D object at ...>]
Tip: Notice how on the side of the window the resampling is less accurate and has a rippling effect.
This resampling is different from the interpolation provided by scipy.interpolate as it only applies to reg-
ularly sampled data.
>>> plt.plot(t, x)
[<matplotlib.lines.Line2D object at ...>]
>>> plt.plot(t, x_detrended)
[<matplotlib.lines.Line2D object at ...>]
Filtering: For non-linear filtering, scipy.signal has filtering (median filter scipy.signal.medfilt(),
Wiener scipy.signal.wiener()), but we will discuss this in the image section.
Tip: scipy.signal also has a full-blown set of tools for the design of linear filter (finite and infinite response
filters), but this is out of the scope of this tutorial.
>>> plt.subplot(151)
<matplotlib.axes._subplots.AxesSubplot object at 0x...>
>>> plt.axis('off')
(-0.5, 1023.5, 767.5, -0.5)
>>> # etc.
Exercise
Tip: Mathematical morphology stems from set theory. It characterizes and transforms geometrical structures.
Binary (black and white) images, in particular, can be transformed using this theory: the sets to be transformed
are the sets of neighboring non-zero-valued pixels. The theory was also extended to gray-valued images.
>>> el = ndimage.generate_binary_structure(2, 1)
>>> el
Erosion scipy.ndimage.binary_erosion()
Dilation scipy.ndimage.binary_dilation()
Opening scipy.ndimage.binary_opening()
[0, 1, 1, 1, 0],
[0, 1, 1, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 1]])
>>> # Opening removes small objects
>>> ndimage.binary_opening(a, structure=np.ones((3, 3))).astype(np.int)
array([[0, 0, 0, 0, 0],
[0, 1, 1, 1, 0],
[0, 1, 1, 1, 0],
[0, 1, 1, 1, 0],
[0, 0, 0, 0, 0]])
>>> # Opening can also smooth corners
>>> ndimage.binary_opening(a).astype(np.int)
array([[0, 0, 0, 0, 0],
[0, 0, 1, 0, 0],
[0, 1, 1, 1, 0],
[0, 0, 1, 0, 0],
[0, 0, 0, 0, 0]])
Closing: scipy.ndimage.binary_closing()
Exercise
An opening operation removes small structures, while a closing operation fills small holes. Such operations
can therefore be used to clean an image.
Exercise
Check that the area of the reconstructed square is smaller than the area of the initial square.
(The opposite would occur if the closing step was performed before the opening).
For gray-valued images, eroding (resp. dilating) amounts to replacing a pixel by the minimal (resp. maximal)
value among pixels covered by the structuring element centered on the pixel of interest.
>>> a
array([[0, 0, 0, 0, 0, 0, 0],
[0, 3, 3, 3, 3, 3, 0],
[0, 3, 3, 1, 3, 3, 0],
[0, 3, 3, 3, 3, 3, 0],
[0, 3, 3, 3, 2, 3, 0],
[0, 3, 3, 3, 3, 3, 0],
[0, 0, 0, 0, 0, 0, 0]])
>>> ndimage.grey_erosion(a, size=(3, 3))
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 3, 2, 2, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
Extract the 4th connected component, and crop the array around it:
>>> ndimage.find_objects(labels==4)
[(slice(30L, 48L, None), slice(30L, 48L, None))]
>>> sl = ndimage.find_objects(labels==4)
>>> from matplotlib import pyplot as plt
>>> plt.imshow(sig[sl[0]])
<matplotlib.image.AxesImage object at ...>
See the summary exercise on Image processing application: counting bubbles and unmolten grains for a more
advanced example.
The summary exercises use mainly Numpy, Scipy and Matplotlib. They provide some real-life examples of sci-
entific computing with Python. Now that the basics of working with Numpy and Scipy have been introduced,
the interested user is invited to try these exercises.
The exercise goal is to predict the maximum wind speed occurring every 50 years even if no measure exists
for such a period. The available data are only measured over 21 years at the Sprogø meteorological station
located in Denmark. First, the statistical steps will be given and then illustrated with functions from the
scipy.interpolate module. At the end, the interested readers are invited to compute results from raw data
using a slightly different approach.
Statistical approach
The annual maxima are supposed to fit a normal probability density function. However, such a function is not
going to be estimated, because it gives a probability from a wind speed maximum. Finding the maximum wind
speed occurring every 50 years requires the opposite approach: the result needs to be found from a defined
probability. That is the role of the quantile function, and the exercise goal will be to find it. In the current model, it is
supposed that the maximum wind speed occurring every 50 years is defined as the upper 2% quantile.
By definition, the quantile function is the inverse of the cumulative distribution function. The latter describes
the probability distribution of the annual maxima. In the exercise, the cumulative probability p_i for a given
year i is defined as p_i = i/(N+1) with N = 21, the number of measured years. Thus it will be possible to
calculate the cumulative probability of every measured wind speed maximum. From those experimental points,
the scipy.interpolate module will be very useful for fitting the quantile function. Finally, the 50-year maximum is
going to be evaluated from the cumulative probability of the 2% quantile.
The annual wind speeds maxima have already been computed and saved in the numpy format in the file
examples/max-speeds.npy, thus they will be loaded by using numpy:
Following the cumulative probability definition p_i from the previous section, the corresponding values will
be:
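A minimal sketch of these two steps (the variable names are assumptions following the text above):

import numpy as np

max_speeds = np.load('examples/max-speeds.npy')
years_nb = max_speeds.shape[0]

# cumulative probabilities p_i = i / (N + 1), for i = 1..N
cprob = (np.arange(years_nb, dtype=np.float64) + 1) / (years_nb + 1)
sorted_max_speeds = np.sort(max_speeds)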
In this section the quantile function will be estimated by using the UnivariateSpline class, which can
represent a spline from points. The default behavior is to build a spline of degree 3, and points can
have different weights according to their reliability. Variants are InterpolatedUnivariateSpline and
LSQUnivariateSpline, which differ in the way errors are handled. In case a 2D spline is wanted, the
BivariateSpline class family is provided. All those classes for 1D and 2D splines use the FITPACK For-
tran subroutines; that's why lower-level library access is available through the splrep and splev functions for
respectively representing and evaluating a spline. Moreover, interpolation functions that do not expose FIT-
PACK parameters are also provided for simpler use (see interp1d, interp2d, barycentric_interpolate
and so on).
For the Sprog maxima wind speeds, the UnivariateSpline will be used because a spline of degree 3 seems
to correctly fit the data:
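A hedged sketch of this fit, continuing the variables from the previous snippet:

from scipy.interpolate import UnivariateSpline

quantile_func = UnivariateSpline(cprob, sorted_max_speeds)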
The quantile function is now going to be evaluated from the full range of probabilities:
In the current model, the maximum wind speed occurring every 50 years is defined as the upper 2% quantile.
As a result, the cumulative probability value will be:
So the storm wind speed occurring every 50 years can be guessed by:
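A minimal sketch of these last steps (the 0.98 value follows from the upper 2% quantile defined above):

nprob = np.linspace(0, 1, 100)
fitted_max_speeds = quantile_func(nprob)

fifty_prob = 1. - 0.02
fifty_wind = quantile_func(fifty_prob)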
The interested readers are now invited to make an exercise by using the wind speeds measured over 21 years.
The measurement period is around 90 minutes (the original period was around 10 minutes but the file size
has been reduced for making the exercise setup easier). The data are stored in numpy format inside the file
examples/sprog-windspeeds.npy. Do not look at the source code for the plots until you have completed
the exercise.
The first step will be to find the annual maxima by using numpy and plot them as a matplotlib bar figure.
The second step will be to use the Gumbell distribution on cumulative probabilities p_i defined as
-log( -log(p_i) ) for fitting a linear quantile function (remember that you can define the degree
of the UnivariateSpline). Plotting the annual maxima versus the Gumbell distribution should give
you the following figure.
The last step will be to find 34.23 m/s for the maximum wind speed occurring every 50 years.
5.11.2 Non linear least squares curve fitting: application to point extraction in
topographical lidar data
The goal of this exercise is to fit a model to some data. The data used in this tutorial are lidar data and are
described in detail in the following introductory paragraph. If you're impatient and want to practice now,
please skip it and go directly to Loading and visualization.
Introduction
Lidar systems are optical rangefinders that analyze properties of scattered light to measure distances. Most of
them emit a short light pulse towards a target and record the reflected signal. This signal is then processed
to extract the distance between the lidar system and the target.
Topographical lidar systems are such systems embedded in airborne platforms. They measure distances be-
tween the platform and the Earth, so as to deliver information on the Earth's topography (see1 for more details).
In this tutorial, the goal is to analyze the waveform recorded by the lidar system2 . Such a signal contains peaks
whose center and amplitude make it possible to compute the position and some characteristics of the hit target.
When the footprint of the laser beam is around 1 m on the Earth's surface, the beam can hit multiple targets during the
two-way propagation (for example the ground and the top of a tree or building). The sum of the contributions
of each target hit by the laser beam then produces a complex signal with multiple peaks, each one containing
information about one target.
One state of the art method to extract information from these data is to decompose them in a sum of Gaussian
functions where each function represents the contribution of a target hit by the laser beam.
Therefore, we use the scipy.optimize module to fit a waveform to one or a sum of Gaussian functions.
1 Mallet, C. and Bretar, F. Full-Waveform Topographic Lidar: State-of-the-Art. ISPRS Journal of Photogrammetry and Remote Sensing
64(1), pp.1-16, January 2009 http://dx.doi.org/10.1016/j.isprsjprs.2008.09.007
2 The data used for this tutorial are part of the demonstration data available for the FullAnalyze software and were kindly provided by
the GIS DRAIX.
As you can notice, this waveform is an 80-bin-long signal with a single peak.
The signal is very simple and can be modeled as a single Gaussian function and an offset corresponding to the
background noise. To fit the signal with the function, we must:
define the model
propose an initial solution
call scipy.optimize.leastsq
Model
B + A exp(-((t - μ) / σ)²)
can be defined in python by:
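A hedged sketch of this model function (the name model is an assumption consistent with the description below):

import numpy as np

def model(t, coeffs):
    return coeffs[0] + coeffs[1] * np.exp(-((t - coeffs[2]) / coeffs[3])**2)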
where
coeffs[0] is B (noise)
coeffs[1] is A (amplitude)
coeffs[2] is μ (center)
coeffs[3] is σ (width)
Initial solution
An approximate initial solution that we can find from looking at the graph is for instance:
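For example (the exact numbers here are an illustrative guess read off the graph, not prescribed by the text):

x0 = np.array([3, 30, 15, 1], dtype=float)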
Fit
scipy.optimize.leastsq minimizes the sum of squares of the function given as an argument. Basically, the
function to minimize is the residuals (the difference between the data and the model):
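A hedged sketch of such a residuals function:

def residuals(coeffs, y, t):
    return y - model(t, coeffs)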
So let's get our solution by calling scipy.optimize.leastsq() with the following arguments:
the function to minimize
an initial solution
the additional arguments to pass to the function
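A minimal sketch of the call (waveform_1 is the data array loaded earlier in the exercise; its name is an assumption):

from scipy.optimize import leastsq

t = np.arange(len(waveform_1))
x, flag = leastsq(residuals, x0, args=(waveform_1, t))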
Remark: from scipy v0.8 and above, you should rather use scipy.optimize.curve_fit(), which takes the
model and the data as arguments, so you don't need to define the residuals any more.
Going further
Try with a more complex waveform (for instance data/waveform_2.npy) that contains three significant
peaks. You must adapt the model which is now a sum of Gaussian functions instead of only one Gaussian
peak.
In some cases, writing an explicit function to compute the Jacobian is faster than letting leastsq esti-
mate it numerically. Create a function to compute the Jacobian of the residuals and use it as an input for
leastsq.
When we want to detect very small peaks in the signal, or when the initial guess is too far from a good
solution, the result given by the algorithm is often not satisfying. Adding constraints to the parameters
of the model makes it possible to overcome such limitations. An example of a priori knowledge we can add is the
sign of our variables (which are all positive).
With the following initial solution:
compare the result of scipy.optimize.leastsq() and what you can get with scipy.optimize.
fmin_slsqp() when adding boundary constraints.
1. Open the image file MV_HFV_012.jpg and display it. Browse through the keyword arguments in the doc-
string of imshow to display the image with the right orientation (origin in the bottom left corner, and not the
upper left corner as for standard arrays).
This Scanning Electron Microscopy image shows a glass sample (light gray matrix) with some bubbles (in
black) and unmolten sand grains (dark gray). We wish to determine the fraction of the sample covered by
these three phases, and to estimate the typical size of sand grains and bubbles.
2. Crop the image to remove the lower panel with measure information.
3. Slightly filter the image with a median filter in order to refine its histogram. Check how the histogram
changes.
4. Using the histogram of the filtered image, determine thresholds that allow defining masks for sand pixels,
glass pixels and bubble pixels. Another option (homework): write a function that determines the
thresholds automatically from the minima of the histogram.
5. Display an image in which the three phases are colored with three different colors.
6. Use mathematical morphology to clean the different phases.
7. Attribute labels to all bubbles and sand grains, and remove from the sand mask grains that are smaller than
10 pixels. To do so, use ndimage.sum or np.bincount to compute the grain sizes.
8. Compute the mean size of bubbles.
Proposed solution
5.11.4 Example of solution for the image processing exercise: unmolten grains in
glass
1. Open the image file MV_HFV_012.jpg and display it. Browse through the keyword arguments in the
docstring of imshow to display the image with the right orientation (origin in the bottom left corner,
and not the upper left corner as for standard arrays).
2. Crop the image to remove the lower panel with measure information.
3. Slightly filter the image with a median filter in order to refine its histogram. Check how the histogram
changes.
4. Using the histogram of the filtered image, determine thresholds that allow defining masks for sand pix-
els, glass pixels and bubble pixels. Another option (homework): write a function that determines the
thresholds automatically from the minima of the histogram.
5. Display an image in which the three phases are colored with three different colors.
7. Attribute labels to all bubbles and sand grains, and remove from the sand mask grains that are smaller
than 10 pixels. To do so, use ndimage.sum or np.bincount to compute the grain sizes.
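A hedged sketch of this labeling step (sand is assumed to be the boolean sand mask obtained in the previous steps; the original solution code is not reproduced here):

import numpy as np
from scipy import ndimage

sand_labels, sand_nb = ndimage.label(sand)
sand_areas = np.array(ndimage.sum(sand, sand_labels, np.arange(sand_labels.max() + 1)))
mask = sand_areas >= 10      # keep only grains of at least 10 pixels
remove_small_sand = mask[sand_labels.ravel()].reshape(sand_labels.shape)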
import numpy as np
import matplotlib.pyplot as plt
def f(x):
    return x**2 + 10*np.sin(x)
Out:
fun: -7.945823375615215
hess_inv: array([[ 0.08589237]])
Out:
fun: array([-7.94582338])
hess_inv: <1x1 LbfgsInvHessProduct with dtype=float64>
jac: array([ -1.42108547e-06])
message: b'CONVERGENCE: NORM_OF_PROJECTED_GRADIENT_<=_PGTOL'
nfev: 12
nit: 5
status: 0
success: True
x: array([-1.30644013])
plt.show()
import numpy as np
t = np.linspace(0, 5, 100)
x = t + np.random.normal(size=100)
Detrend
Plot
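A hedged sketch of the two steps labelled above ("Detrend" and "Plot"):

from scipy import signal
from matplotlib import pyplot as plt

x_detrended = signal.detrend(x)      # remove the linear trend

plt.figure(figsize=(5, 4))
plt.plot(t, x, label="x")
plt.plot(t, x_detrended, label="x_detrended")
plt.legend(loc='best')
plt.show()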
import numpy as np
t = np.linspace(0, 5, 100)
x = np.sin(t)
Downsample it by a factor of 4
Plot
plt.legend(loc='best')
plt.show()
Solve the ODE dy/dt = -2y between t = 0..4, with the initial condition y(t=0) = 1.
import numpy as np
from scipy.integrate import odeint
from matplotlib import pyplot as plt
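A hedged sketch of the integration step that produces time_vec and yvec used below:

def calc_derivative(ypos, time):
    return -2 * ypos

time_vec = np.linspace(0, 4, 40)
yvec = odeint(calc_derivative, 1, time_vec)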
plt.figure(figsize=(4, 3))
plt.plot(time_vec, yvec)
plt.xlabel('t: Time')
plt.ylabel('y: Position')
plt.tight_layout()
import numpy as np
from matplotlib import pyplot as plt
plt.figure(figsize=(6, 4))
plt.hist(samples1, bins=bins, normed=True, label="Samples 1")
plt.hist(samples2, bins=bins, normed=True, label="Samples 2")
plt.legend(loc='best')
plt.show()
import numpy as np
from scipy.integrate import odeint
from matplotlib import pyplot as plt
mass = 0.5 # kg
kspring = 4 # N/m
cviscous = 0.4 # N s/m
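A hedged sketch of the missing integration step for this damped spring-mass system (function and variable names are assumptions):

nu_coef = cviscous / mass     # damping coefficient
om_coef = kspring / mass      # stiffness coefficient

def calc_deri(yvec, time, nuc, omc):
    return (yvec[1], -nuc * yvec[1] - omc * yvec[0])

time_vec = np.linspace(0, 10, 100)
yarr = odeint(calc_deri, (1, 0), time_vec, args=(nu_coef, om_coef))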
plt.figure(figsize=(4, 3))
plt.plot(time_vec, yarr[:, 0], label='y')
plt.plot(time_vec, yarr[:, 1], label="y'")
plt.legend(loc='best')
plt.show()
Explore the normal distribution: a histogram built from samples and the PDF (probability density function).
import numpy as np
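A hedged sketch of the sampling and histogram step that defines bin_centers used below:

np.random.seed(0)
samples = np.random.normal(size=10000)
bins = np.linspace(-5, 5, 30)
histogram, bins = np.histogram(samples, bins=bins, density=True)
bin_centers = 0.5 * (bins[1:] + bins[:-1])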
# Compute the PDF on the bin centers from scipy distribution object
from scipy import stats
pdf = stats.norm.pdf(bin_centers)
import numpy as np
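A hedged sketch of the data generation and fit that define x_data, y_data, test_func and params used below (the sine model and its parameters are assumptions consistent with the printed result):

from scipy import optimize

np.random.seed(0)
x_data = np.linspace(-5, 5, num=50)
y_data = 2.9 * np.sin(1.5 * x_data) + np.random.normal(size=50)

def test_func(x, a, b):
    return a * np.sin(b * x)

params, params_covariance = optimize.curve_fit(test_func, x_data, y_data, p0=[2, 2])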
# And plot it
import matplotlib.pyplot as plt
plt.figure(figsize=(6, 4))
plt.scatter(x_data, y_data)
print(params)
Out:
[ 3.05931973 1.45754553]
plt.figure(figsize=(6, 4))
plt.scatter(x_data, y_data, label='Data')
plt.plot(x_data, test_func(x_data, params[0], params[1]),
label='Fitted function')
plt.legend(loc='best')
plt.show()
import numpy as np
from matplotlib import pyplot as plt
time_step = .01
time_vec = np.arange(0, 70, time_step)
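A hedged sketch of the missing signal generation and spectral analysis (a chirp signal is an assumption consistent with the spectrogram figure):

from scipy import signal

sig = np.sin(0.5 * np.pi * time_vec * (1 + .1 * time_vec))   # frequency chirp

freqs, times, spectrogram = signal.spectrogram(sig)
freqs, psd = signal.welch(sig)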
plt.figure(figsize=(8, 5))
plt.plot(time_vec, sig)
plt.figure(figsize=(5, 4))
plt.imshow(spectrogram, aspect='auto', cmap='hot_r', origin='lower')
plt.title('Spectrogram')
plt.ylabel('Frequency band')
plt.xlabel('Time window')
plt.tight_layout()
plt.figure(figsize=(5, 4))
plt.semilogx(freqs, psd)
plt.title('PSD: power spectral density')
plt.xlabel('Frequency')
plt.ylabel('Power')
plt.tight_layout()
plt.show()
# Generate data
import numpy as np
np.random.seed(0)
measured_time = np.linspace(0, 1, 10)
noise = 1e-1 * (np.random.random(10)*2 - 1)
measures = np.sin(2 * np.pi * measured_time) + noise
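A hedged sketch of the interpolation step that this example illustrates:

from scipy.interpolate import interp1d

linear_interp = interp1d(measured_time, measures)
interpolation_time = np.linspace(0, 1, 50)
linear_results = linear_interp(interpolation_time)
cubic_interp = interp1d(measured_time, measures, kind='cubic')
cubic_results = cubic_interp(interpolation_time)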
# Plot
from matplotlib import pyplot as plt
plt.figure(figsize=(12, 3.5))
plt.subplot(141)
plt.imshow(a, cmap=plt.cm.gray)
plt.axis('off')
plt.title('a')
plt.subplot(142)
plt.imshow(mask, cmap=plt.cm.gray)
plt.axis('off')
plt.title('mask')
plt.subplot(143)
plt.imshow(opened_mask, cmap=plt.cm.gray)
plt.axis('off')
plt.title('opened_mask')
plt.subplot(144)
plt.imshow(closed_mask, cmap=plt.cm.gray)
plt.title('closed_mask')
plt.axis('off')
plt.show()
plt.figure(figsize=(15, 3))
plt.subplot(151)
plt.imshow(shifted_face, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(152)
plt.imshow(shifted_face2, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(153)
plt.imshow(rotated_face, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(154)
plt.imshow(cropped_face, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(155)
plt.imshow(zoomed_face, cmap=plt.cm.gray)
plt.axis('off')
plt.show()
import numpy as np
from matplotlib import pyplot as plt
np.random.seed(0)
x, y = np.indices((100, 100))
sig = np.sin(2*np.pi*x/50.) * np.sin(2*np.pi*y/50.) * (1+x*y/50.**2)**2
mask = sig > 1
plt.figure(figsize=(7, 3.5))
plt.subplot(1, 2, 1)
plt.imshow(sig)
plt.axis('off')
plt.title('sig')
plt.subplot(1, 2, 2)
plt.imshow(mask, cmap=plt.cm.gray)
plt.axis('off')
plt.title('mask')
plt.subplots_adjust(wspace=.05, left=.01, bottom=.01, right=.99, top=.9)
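A hedged sketch of the labeling step that defines labels used below:

from scipy import ndimage
labels, nb = ndimage.label(mask)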
plt.figure(figsize=(3.5, 3.5))
plt.imshow(labels)
plt.title('label')
plt.axis('off')
Extract the 4th connected component, and crop the array around it
sl = ndimage.find_objects(labels==4)
plt.figure(figsize=(3.5, 3.5))
plt.imshow(sig[sl[0]])
plt.title('Cropped connected component')
plt.axis('off')
plt.show()
import numpy as np
Find minima
# Global optimization
grid = (-10, 10, 0.1)
xmin_global = optimize.brute(f, (grid, ))
print("Global minima found %s " % xmin_global)
# Constrain optimization
xmin_local = optimize.fminbound(f, 0, 10)
print("Local minimum found %s " % xmin_local)
Out:
Root finding
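A hedged sketch of the root-finding step (the starting points are assumptions; f is the function defined above):

from scipy import optimize

root = optimize.root(f, 1)       # a root near x = 0
root2 = optimize.root(f, -2.5)   # a second root
print("First root found %s" % root.x)
print("Second root found %s" % root2.x)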
Out:
import numpy as np
Simple visualization in 2D
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.set_zlabel('f(x, y)')
ax.set_title('Six-hump Camelback function')
plt.figure()
# Show the function in 2D
plt.imshow(sixhump([xg, yg]), extent=[-2, 2, -1, 1])
plt.colorbar()
# And the minimum that we've found:
plt.scatter(x_min.x[0], x_min.x[1])
plt.show()
import numpy as np
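A hedged sketch of the stripped setup for this denoising example (the crop is an assumption):

from scipy import misc, ndimage, signal
import matplotlib.pyplot as plt

face = misc.face(gray=True)
face = face[:512, -512:]     # crop to a square region
np.random.seed(0)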
noisy_face = np.copy(face).astype(np.float)
noisy_face += face.std() * 0.5 * np.random.standard_normal(face.shape)
blurred_face = ndimage.gaussian_filter(noisy_face, sigma=3)
median_face = ndimage.median_filter(noisy_face, size=5)
wiener_face = signal.wiener(noisy_face, (5, 5))
plt.figure(figsize=(12, 3.5))
plt.subplot(141)
plt.imshow(noisy_face, cmap=plt.cm.gray)
plt.axis('off')
plt.title('noisy')
plt.subplot(142)
plt.imshow(blurred_face, cmap=plt.cm.gray)
plt.axis('off')
plt.title('Gaussian filter')
plt.subplot(143)
plt.imshow(median_face, cmap=plt.cm.gray)
plt.axis('off')
plt.title('median filter')
plt.subplot(144)
plt.imshow(wiener_face, cmap=plt.cm.gray)
plt.title('Wiener filter')
plt.axis('off')
plt.show()
Plot the power of the FFT of a signal and inverse FFT back to reconstruct a signal.
This example demonstrate scipy.fftpack.fft(), scipy.fftpack.fftfreq() and scipy.fftpack.
ifft(). It implements a basic filter that is very suboptimal, and should not be used.
import numpy as np
from scipy import fftpack
from matplotlib import pyplot as plt
time_step = 0.02
period = 5.
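A hedged sketch of the missing signal generation and FFT (the noise level is an assumption):

np.random.seed(1234)
time_vec = np.arange(0, 20, time_step)
sig = (np.sin(2 * np.pi / period * time_vec)
       + 0.5 * np.random.randn(time_vec.size))

# FFT of the signal, its power, and the corresponding frequencies
sig_fft = fftpack.fft(sig)
power = np.abs(sig_fft)
sample_freq = fftpack.fftfreq(sig.size, d=time_step)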
plt.figure(figsize=(6, 5))
plt.plot(time_vec, sig, label='Original signal')
# Find the peak frequency: we can focus on only the positive frequencies
pos_mask = np.where(sample_freq > 0)
freqs = sample_freq[pos_mask]
peak_freq = freqs[power[pos_mask].argmax()]
We now remove all the high frequencies and transform back from frequencies to signal.
high_freq_fft = sig_fft.copy()
high_freq_fft[np.abs(sample_freq) > peak_freq] = 0
filtered_sig = fftpack.ifft(high_freq_fft)
plt.figure(figsize=(6, 5))
plt.plot(time_vec, sig, label='Original signal')
plt.plot(time_vec, filtered_sig, linewidth=3, label='Filtered signal')
plt.xlabel('Time [s]')
plt.ylabel('Amplitude')
plt.legend(loc='best')
Note: This is actually a bad way of creating a filter: such a brutal cut-off in frequency space does not control
distortion of the signal.
Filters should be created using the scipy filter design code.
plt.show()
import numpy as np
data = np.loadtxt('../../../../data/populations.txt')
years = data[:, 0]
populations = data[:, 1:]
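A hedged sketch of the FFT step that defines periods and ft_populations used below:

from scipy import fftpack

ft_populations = fftpack.fft(populations, axis=0)
frequencies = fftpack.fftfreq(populations.shape[0], years[1] - years[0])
periods = 1 / frequencies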
plt.figure()
plt.plot(periods, abs(ft_populations) * 1e-3, 'o')
plt.xlim(0, 22)
plt.xlabel('Period')
plt.ylabel('Power ($\cdot10^3$)')
plt.show()
There's probably a period of around 10 years (obvious from the plot), but for this crude a method, there's not
enough data to say much more.
We have the min and max temperatures in Alaska for each month of the year. We would like to find a function
to describe this yearly evolution.
For this, we will fit a periodic function.
The data
import numpy as np
temp_max = np.array([17, 19, 21, 28, 33, 38, 37, 37, 31, 23, 19, 18])
temp_min = np.array([-62, -59, -56, -46, -32, -18, -9, -13, -25, -46, -52, -58])
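A hedged sketch of the fitting step that defines months, days, yearly_temps and res_max used below (the cosine model is an assumption):

from scipy import optimize

months = np.arange(12)

def yearly_temps(times, avg, ampl, time_offset):
    return avg + ampl * np.cos((times + time_offset) * 2 * np.pi / times.max())

res_max, cov_max = optimize.curve_fit(yearly_temps, months, temp_max, [20, 10, 0])
res_min, cov_min = optimize.curve_fit(yearly_temps, months, temp_min, [-40, 20, 0])
days = np.linspace(0, 12, num=365)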
plt.figure()
plt.plot(months, temp_max, 'ro')
plt.plot(days, yearly_temps(days, *res_max), 'r-')
plt.show()
import numpy as np
from scipy import fftpack
import matplotlib.pyplot as plt
# read image
img = plt.imread('../../../../data/elephant.png')
plt.figure()
plt.imshow(img)
# convolve
img_ft = fftpack.fft2(img, axes=(0, 1))
# the 'newaxis' is to match to color direction
# plot output
plt.figure()
plt.imshow(img2)
The above exercise was only for didactic reasons: there exists a function in scipy that will do this
for us, and probably do a better job: scipy.signal.fftconvolve()
Note that we still have a decay to zero at the border of the image. Using scipy.ndimage.gaussian_filter()
would get rid of this artifact
plt.show()
f1(ω) = K(ω) f0(ω)
import numpy as np
import matplotlib.pyplot as plt
im = plt.imread('../../../../data/moonlanding.png').astype(float)
plt.figure()
plt.imshow(im, plt.cm.gray)
plt.title('Original image')
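A hedged sketch of the FFT step that defines im_fft used below:

from scipy import fftpack
im_fft = fftpack.fft2(im)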
def plot_spectrum(im_fft):
from matplotlib.colors import LogNorm
# A logarithmic colormap
plt.imshow(np.abs(im_fft), norm=LogNorm(vmin=5))
plt.colorbar()
plt.figure()
plot_spectrum(im_fft)
plt.title('Fourier transform')
Filter in FFT
# In the lines following, we'll make a copy of the original spectrum and
# truncate coefficients.
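A hedged sketch of that truncation (the keep_fraction value is an assumption):

keep_fraction = 0.1
im_fft2 = im_fft.copy()
r, c = im_fft2.shape

# zero out the high-frequency rows and columns
im_fft2[int(r * keep_fraction):int(r * (1 - keep_fraction))] = 0
im_fft2[:, int(c * keep_fraction):int(c * (1 - keep_fraction))] = 0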
plt.figure()
plot_spectrum(im_fft2)
plt.title('Filtered Spectrum')
# Reconstruct the denoised image from the filtered spectrum, keep only the
# real part for display.
im_new = fftpack.ifft2(im_fft2).real
plt.figure()
plt.imshow(im_new, plt.cm.gray)
plt.title('Reconstructed Image')
Implementing filtering directly with FFTs is tricky and time consuming. We can use the Gaussian
filter from scipy.ndimage
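A hedged sketch of that filtering step (the sigma value is an assumption):

from scipy import ndimage
im_blur = ndimage.gaussian_filter(im, 4)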
plt.figure()
plt.imshow(im_blur, plt.cm.gray)
plt.title('Blurred image')
plt.show()
In IPython it is not possible to open a separate window for help and documentation; how-
ever one can always open a second IPython shell just to display help and docstrings. . .
Numpy's and Scipy's documentation can be browsed online on http://docs.scipy.org/doc. The search
button is quite useful inside the reference documentation of the two packages (http://docs.scipy.org/
doc/numpy/reference/ and http://docs.scipy.org/doc/scipy/reference/).
Tutorials on various topics as well as the complete API with all docstrings are found on this website.
Numpy's and Scipy's documentation is enriched and updated on a regular basis by users on a wiki http:
//docs.scipy.org/doc/numpy/. As a result, some docstrings are clearer or more detailed on the wiki,
and you may want to read the documentation directly on the wiki instead of the official documentation
website. Note that anyone can create an account on the wiki and write better documentation; this is an
easy way to contribute to an open-source project and improve the tools you are using!
Scipy central http://central.scipy.org/ gives recipes on many common problems frequently encountered.
In [45]: numpy.lookfor('convolution')
Search results for 'convolution'
--------------------------------
numpy.convolve
Returns the discrete, linear convolution of two one-dimensional
sequences.
numpy.bartlett
Return the Bartlett window.
numpy.correlate
Discrete, linear correlation of two 1-dimensional sequences.
In [46]: numpy.lookfor('remove', module='os')
Search results for 'remove'
---------------------------
os.remove
remove(path)
os.removedirs
removedirs(path)
os.rmdir
rmdir(path)
os.unlink
unlink(path)
os.walk
Directory tree generator.
If everything listed above fails (and Google doesn't have the answer). . . don't despair! Write to the
mailing-list suited to your problem: you should have a quick answer if you describe your problem well.
Experts on scientific python often give very enlightening explanations on the mailing-list.
Numpy discussion (numpy-discussion@scipy.org): all about numpy arrays, manipulating them,
indexation questions, etc.
SciPy Users List (scipy-user@scipy.org): scientific computing with Python, high-level data process-
ing, in particular with the scipy package.
matplotlib-users@lists.sourceforge.net for plotting with matplotlib.
Part II
Advanced topics
This part of the Scipy lecture notes is dedicated to advanced usage. It strives to educate the proficient Python
coder to be an expert and tackles various specific topics.
CHAPTER 7
Advanced Python Constructs
Chapter contents
7.1.1 Iterators
Simplicity
Duplication of effort is wasteful, and replacing the various home-grown approaches with a standard feature
usually ends up making things more readable, and interoperable as well.
Guido van Rossum Adding Optional Static Typing to Python
An iterator is an object adhering to the iterator protocol: basically this means that it has a next method,
which, when called, returns the next item in the sequence, and when there's nothing to return, raises the
StopIteration exception.
An iterator object allows looping just once. It holds the state (position) of a single iteration, or, put the other
way around, each loop over a sequence requires a single iterator object. This means that we can iterate over the same
sequence more than once concurrently. Separating the iteration logic from the sequence allows us to have
more than one way of iteration.
Calling the __iter__ method on a container to create an iterator object is the most straightforward way to get
hold of an iterator. The iter function does that for us, saving a few keystrokes.
>>> nums = [1, 2, 3] # note that ... varies: these are different objects
>>> iter(nums)
<...iterator object at ...>
>>> nums.__iter__()
<...iterator object at ...>
>>> nums.__reversed__()
<...reverseiterator object at ...>
>>> it = iter(nums)
>>> next(it)
1
>>> next(it)
2
>>> next(it)
3
>>> next(it)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
StopIteration
When used in a loop, StopIteration is swallowed and causes the loop to finish. But with explicit invocation,
we can see that once the iterator is exhausted, accessing it raises an exception.
Using the for..in loop also uses the __iter__ method. This allows us to transparently start the iteration over a
sequence. But if we already have the iterator, we want to be able to use it in a for loop in the same way. In
order to achieve this, iterators, in addition to next, are also required to have a method called __iter__ which
returns the iterator (self).
Support for iteration is pervasive in Python: all sequences and unordered containers in the standard library
allow this. The concept is also stretched to other things: e.g. file objects support iteration over lines.
>>> f = open('/etc/fstab')
>>> f is f.__iter__()
True
The file is an iterator itself and its __iter__ method doesn't create a separate object: only a single thread of
sequential access is allowed.
A second way in which iterator objects are created is through generator expressions, the basis for list compre-
hensions. To increase clarity, a generator expression must always be enclosed in parentheses or an expression.
If round parentheses are used, then a generator iterator is created. If square brackets are used, the pro-
cess is short-circuited and we get a list.
The list comprehension syntax also extends to dictionary and set comprehensions. A set is created when the
generator expression is enclosed in curly braces. A dict is created when the generator expression contains
pairs of the form key:value:
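A minimal doctest-style sketch of these forms (Python 2 output shown, matching the chapter; object addresses will vary):

>>> nums = [1, 2, 3]
>>> (i for i in nums)
<generator object <genexpr> at 0x...>
>>> [i for i in nums]
[1, 2, 3]
>>> {i for i in range(3)}
set([0, 1, 2])
>>> {i: i**2 for i in range(3)}
{0: 0, 1: 1, 2: 4}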
One gotcha should be mentioned: in old Pythons the index variable (i) would leak, and in versions >= 3 this is
fixed.
7.1.3 Generators
Generators
A third way to create iterator objects is to call a generator function. A generator is a function containing the
keyword yield. It must be noted that the mere presence of this keyword completely changes the nature of the
function: the yield statement doesn't have to be invoked, or even reachable, but causes the function to be
marked as a generator. When a normal function is called, the instructions contained in the body start to be
executed. When a generator is called, the execution stops before the first instruction in the body. An invocation
of a generator function creates a generator object, adhering to the iterator protocol. As with normal function
invocations, concurrent and recursive invocations are allowed.
When next is called, the function is executed until the first yield. Each encountered yield statement gives a
value that becomes the return value of next. After executing the yield statement, the execution of this function is
suspended.
Let's go over the life of a single invocation of the generator function.
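The walkthrough below refers to a generator along these lines (a hedged reconstruction; the printed strings match the description that follows):

>>> def f():
...     print("-- start --")
...     yield 3
...     print("-- middle --")
...     yield 4
...     print("-- finished --")
>>> gen = f()
>>> next(gen)
-- start --
3
>>> next(gen)
-- middle --
4
>>> next(gen)
-- finished --
Traceback (most recent call last):
  ...
StopIteration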
Contrary to a normal function, where executing f() would immediately cause the first print to be executed,
gen is assigned without executing any statements in the function body. Only when next(gen) is invoked are
the statements up to the first yield executed. The second next prints -- middle -- and execution
halts on the second yield. The third next prints -- finished -- and falls off the end of the function. Since
no yield was reached, an exception is raised.
What happens with the function after a yield, when the control passes to the caller? The state of each generator
is stored in the generator object. From the point of view of the generator function, it looks almost as if it was
running in a separate thread, but this is just an illusion: execution is strictly single-threaded, but the interpreter
keeps and restores the state in between the requests for the next value.
Why are generators useful? As noted in the parts about iterators, a generator function is just a different way to
create an iterator object. Everything that can be done with yield statements, could also be done with next
methods. Nevertheless, using a function and having the interpreter perform its magic to create an iterator
has advantages. A function can be much shorter than the definition of a class with the required next and
__iter__ methods. What is more important, it is easier for the author of the generator to understand the state
which is kept in local variables, as opposed to instance attributes, which have to be used to pass data between
consecutive invocations of next on an iterator object.
A broader question is why are iterators useful? When an iterator is used to power a loop, the loop becomes very
simple. The code to initialise the state, to decide if the loop is finished, and to find the next value is extracted
into a separate place. This highlights the body of the loop the interesting part. In addition, it is possible to
reuse the iterator code in other places.
Each yield statement causes a value to be passed to the caller. This is the reason for the introduction of
generators by PEP 255 (implemented in Python 2.2). But communication in the reverse direction is also useful.
One obvious way would be some external state, either a global variable or a shared mutable object. Direct
communication is possible thanks to PEP 342 (implemented in 2.5). It is achieved by turning the previously
boring yield statement into an expression. When the generator resumes execution after a yield statement,
the caller can call a method on the generator object to either pass a value into the generator, which then is
returned by the yield statement, or a different method to inject an exception into the generator.
The first of the new methods is send(value), which is similar to next(), but passes value into the generator
to be used for the value of the yield expression. In fact, g.next() and g.send(None) are equivalent.
The second of the new methods is throw(type, value=None, traceback=None) which is equivalent to:
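raise type, value, traceback

at the point of the yield statement. The transcript below uses a generator along these lines (a hedged reconstruction consistent with the printed output):

import itertools

def g():
    print('--start--')
    for i in itertools.count():
        print('--yielding %i--' % i)
        try:
            ans = yield i
        except GeneratorExit:
            print('--closing--')
            raise
        except Exception as e:
            print('--yield raised %r--' % e)
        else:
            print('--yield returned %s--' % ans)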
>>> it = g()
>>> next(it)
--start--
--yielding 0--
0
>>> it.send(11)
--yield returned 11--
--yielding 1--
1
>>> it.throw(IndexError)
--yield raised IndexError()--
--yielding 2--
2
>>> it.close()
--closing--
next or __next__?
In Python 2.x, the iterator method to retrieve the next value is called next. It is invoked implicitly through
the global function next, which means that it should be called __next__. Just like the global function iter
calls __iter__. This inconsistency is corrected in Python 3.x, where it.next becomes it.__next__. For
other generator methods send and throw the situation is more complicated, because they are not
called implicitly by the interpreter. Nevertheless, there's a proposed syntax extension to allow continue
to take an argument which will be passed to send of the loop's iterator. If this extension is accepted, it's
likely that gen.send will become gen.__send__. The last of the generator methods, close, is pretty obviously
named incorrectly, because it is already invoked implicitly.
Note: This is a preview of PEP 380 (not yet implemented, but accepted for Python 3.3).
Lets say we are writing a generator and we want to yield a number of values generated by a second generator,
a subgenerator. If yielding of values is the only concern, this can be performed without much difficulty using
a loop such as
subgen = some_other_generator()
for v in subgen:
    yield v
However, if the subgenerator is to interact properly with the caller in the case of calls to send(), throw()
and close(), things become considerably more difficult. The yield statement has to be guarded by a
try..except..finally structure similar to the one defined in the previous section to debug the generator func-
tion. Such code is provided in PEP 380#id13, here it suffices to say that new syntax to properly yield from a
subgenerator is being introduced in Python 3.3:
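The new form is simply:

yield from some_other_generator()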
This behaves like the explicit loop above, repeatedly yielding values from some_other_generator until it is
exhausted, but also forwards send, throw and close to the subgenerator.
7.2 Decorators
Summary
This amazing feature appeared in the language almost apologetically and with concern that it might not be
that useful.
Bruce Eckel An Introduction to Python Decorators
Since functions and classes are objects, they can be passed around. Since they are mutable objects, they can
be modified. The act of altering a function or class object after it has been constructed but before it is bound
to its name is called decorating.
There are two things hiding behind the name "decorator": one is the function which does the work of deco-
rating, i.e. performs the real work, and the other one is the expression adhering to the decorator syntax, i.e. an
at-symbol and the name of the decorating function.
A function can be decorated by using the decorator syntax:

@decorator
def function():
    pass

which is equivalent to:

def function():
    pass
function = decorator(function)
Decorators can be stacked the order of application is bottom-to-top, or inside-out. The semantics are such
that the originally defined function is used as an argument for the first decorator, whatever is returned by
the first decorator is used as an argument for the second decorator, . . . , and whatever is returned by the last
decorator is attached under the name of the original function.
The decorator syntax was chosen for its readability. Since the decorator is specified before the header of the
function, it is obvious that it is not a part of the function body, and it is clear that it can only operate on the
whole function. Because the expression is prefixed with @, it stands out and is hard to miss ("in your face",
according to the PEP :) ). When more than one decorator is applied, each one is placed on a separate line in an
easy to read way.
Decorators can either return the same function or class object or they can return a completely different ob-
ject. In the first case, the decorator can exploit the fact that function and class objects are mutable and add
attributes, e.g. add a docstring to a class. A decorator might do something useful even without modifying
the object, for example register the decorated class in a global registry. In the second case, virtually anything is
possible: when something different is substituted for the original function or class, the new object can be com-
pletely different. Nevertheless, such behaviour is not the purpose of decorators: they are intended to tweak
the decorated object, not do something unpredictable. Therefore, when a function is decorated by replacing
it with a different function, the new function usually calls the original function, after doing some preparatory
work. Likewise, when a class is decorated by replacing it with a new class, the new class is usually derived
from the original class. When the purpose of the decorator is to do something every time, like to log every call
to a decorated function, only the second type of decorators can be used. On the other hand, if the first type is
sufficient, it is better to use it, because it is simpler.
The only requirement on decorators is that they can be called with a single argument. This means that deco-
rators can be implemented as normal functions, or as classes with a __call__ method, or in theory, even as
lambda functions.
Lets compare the function and class approaches. The decorator expression (the part after @) can be either just
a name, or a call. The bare-name approach is nice (less to type, looks cleaner, etc.), but is only possible when
no arguments are needed to customise the decorator. Decorators written as functions can be used in those
two cases:
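The two decorators referred to below could look like this (a hedged reconstruction of the stripped example):

def simple_decorator(function):
    print("doing decoration")
    return function

@simple_decorator
def function():
    print("inside function")

def decorator_with_arguments(arg):
    print("defining the decorator")
    def _decorator(function):
        print("doing decoration, %r" % arg)
        return function
    return _decorator

@decorator_with_arguments("abc")
def function():
    print("inside function")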
The two trivial decorators above fall into the category of decorators which return the original function. If they
were to return a new function, an extra level of nestedness would be required. In the worst case, three levels of
nested functions.
The _wrapper function is defined to accept all positional and keyword arguments. In general we cannot know
what arguments the decorated function is supposed to accept, so the wrapper function just passes everything
to the wrapped function. One unfortunate consequence is that the apparent argument list is misleading.
Compared to decorators defined as functions, complex decorators defined as classes are simpler. When an
object is created, the __init__ method is only allowed to return None, and the type of the created object
cannot be changed. This means that when a decorator is defined as a class, it doesn't make much sense to use
the argument-less form: the final decorated object would just be an instance of the decorating class, returned
by the constructor call, which is not very useful. Therefore its enough to discuss class-based decorators where
arguments are given in the decorator expression and the decorator __init__ method is used for decorator
construction.
Contrary to normal rules (PEP 8) decorators written as classes behave more like functions and therefore their
name often starts with a lowercase letter.
In reality, it doesn't make much sense to create a new class just to have a decorator which returns the original
function. Objects are supposed to hold state, and such decorators are more useful when the decorator returns
a new object.
A decorator like this can do pretty much anything, since it can modify the original function object and mangle
the arguments, call the original function or not, and afterwards mangle the return value.
7.2.3 Copying the docstring and other attributes of the original function
When a new function is returned by the decorator to replace the original function, an unfortunate conse-
quence is that the original function name, the original docstring, the original argument list are lost. Those
attributes of the original function can partially be transplanted to the new function by setting __doc__ (the
docstring), __module__ and __name__ (the full name of the function), and __annotations__ (extra informa-
tion about arguments and the return value of the function available in Python 3). This can be done automati-
cally by using functools.update_wrapper.
functools.update_wrapper(wrapper, wrapped)
One important thing is missing from the list of attributes which can be copied to the replacement func-
tion: the argument list. The default values for arguments can be modified through the __defaults__,
__kwdefaults__ attributes, but unfortunately the argument list itself cannot be set as an attribute. This
means that help(function) will display a useless argument list which will be confusing for the user of the
function. An effective but ugly way around this problem is to create the wrapper dynamically, using eval. This
can be automated by using the external decorator module. It provides support for the decorator decorator,
which takes a wrapper and turns it into a decorator which preserves the function signature.
To sum things up, decorators should always use functools.update_wrapper or some other means of copy-
ing function attributes.
First, it should be mentioned that theres a number of useful decorators available in the standard library. There
are three decorators which really form a part of the language:
classmethod causes a method to become a class method, which means that it can be invoked without
creating an instance of the class. When a normal method is invoked, the interpreter inserts the instance
object as the first positional parameter, self. When a class method is invoked, the class itself is given as
the first parameter, often called cls.
Class methods are still accessible through the class namespace, so they dont pollute the modules
namespace. Class methods can be used to provide alternative constructors:
class Array(object):
def __init__(self, data):
self.data = data
@classmethod
def fromfile(cls, file):
data = numpy.load(file)
return cls(data)
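The discussion of properties below refers to an example along these lines (a hedged reconstruction):

class A(object):
    @property
    def a(self):
        "an important attribute"
        return "a value"

>>> A.a
<property object at 0x...>
>>> A().a
'a value'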
In this example, A.a is a read-only attribute. It is also documented: help(A) includes the docstring for
attribute a taken from the getter method. Defining a as a property allows it to be calculated on the fly,
and has the side effect of making it read-only, because no setter is defined.
To have a setter and a getter, two methods are required, obviously. Since Python 2.6 the following syntax
is preferred:
class Rectangle(object):
    def __init__(self, edge):
        self.edge = edge

    @property
    def area(self):
        """Computed area.

        Setting this updates the edge length to the proper value.
        """
        return self.edge**2

    @area.setter
    def area(self, area):
        self.edge = area ** 0.5
The way that this works, is that the property decorator replaces the getter method with a property
object. This object in turn has three methods, getter, setter, and deleter, which can be used as
decorators. Their job is to set the getter, setter and deleter of the property object (stored as attributes
fget, fset, and fdel). The getter can be set like in the example above, when creating the object. When
defining the setter, we already have the property object under area, and we add the setter to it by using
the setter method. All this happens when we are creating the class.
Afterwards, when an instance of the class has been created, the property object is special. When the
interpreter executes attribute access, assignment, or deletion, the job is delegated to the methods of the
property object.
Properties are a bit of a stretch for the decorator syntax. One of the premises of the decorator syntax
(that the name is not duplicated) is violated, but nothing better has been invented so far. It is just good
style to use the same name for the getter, setter, and deleter methods.
Some newer examples include:
functools.lru_cache memoizes an arbitrary function maintaining a limited cache of argu-
ments:answer pairs (Python 3.2)
functools.total_ordering is a class decorator which fills in missing ordering methods (__lt__,
__gt__, __le__, . . . ) based on a single available one (Python 2.7).
Let's say we want to print a deprecation warning on stderr on the first invocation of a function we don't like
anymore. If we don't want to modify the function, we can use a decorator:
class deprecated(object):
    """Print a deprecation warning once on first use of the function."""
    def __call__(self, func):
        self.func = func
        self.count = 0
        return self._wrapper

    def _wrapper(self, *args, **kwargs):
        self.count += 1
        if self.count == 1:
            print self.func.__name__, 'is deprecated'
        return self.func(*args, **kwargs)

The same can also be written as a function:

def deprecated(func):
    """Print a deprecation warning once on first use of the function."""
    count = [0]
    def wrapper(*args, **kwargs):
        count[0] += 1
        if count[0] == 1:
            print func.__name__, 'is deprecated'
        return func(*args, **kwargs)
    return wrapper
Let's say we have a function which returns a list of things, and this list is created by running a loop. If we don't
know how many objects will be needed, the standard way to do this is something like:
def find_answers():
    answers = []
    while True:
        ans = look_for_next_answer()
        if ans is None:
            break
        answers.append(ans)
    return answers
This is fine, as long as the body of the loop is fairly compact. Once it becomes more complicated, as often
happens in real code, this becomes pretty unreadable. We could simplify this by using yield statements, but
then the user would have to explicitly call list(find_answers()).
We can define a decorator which constructs the list for us:
import functools

def vectorized(generator_func):
    def wrapper(*args, **kwargs):
        return list(generator_func(*args, **kwargs))
    return functools.update_wrapper(wrapper, generator_func)

@vectorized
def find_answers():
    while True:
        ans = look_for_next_answer()
        if ans is None:
            break
        yield ans
This is a class decorator which doesn't modify the class, but just puts it in a global registry. It falls into the
category of decorators returning the original object:
class WordProcessor(object):
    PLUGINS = []

    def process(self, text):
        for plugin in self.PLUGINS:
            text = plugin().cleanup(text)
        return text

    @classmethod
    def plugin(cls, plugin):
        cls.PLUGINS.append(plugin)

@WordProcessor.plugin
class CleanMdashesExtension(object):
    def cleanup(self, text):
        return text.replace('&mdash;', u'\N{em dash}')
Here we use a decorator to decentralise the registration of plugins. We call our decorator with a noun, instead
of a verb, because we use it to declare that our class is a plugin for WordProcessor. Method plugin simply
appends the class to the list of plugins.
A word about the plugin itself: it replaces HTML entity for em-dash with a real Unicode em-dash character.
It exploits the unicode literal notation to insert a character by using its name in the unicode database (EM
DASH). If the Unicode character was inserted directly, it would be impossible to distinguish it from an en-
dash in the source of a program.
See also:
More examples and reading
PEP 318 (function and method decorator syntax)
PEP 3129 (class decorator syntax)
http://wiki.python.org/moin/PythonDecoratorLibrary
https://docs.python.org/dev/library/functools.html
http://pypi.python.org/pypi/decorator
Bruce Eckel
Decorators I: Introduction to Python Decorators
Python Decorators II: Decorator Arguments
Python Decorators III: A Decorator-Based Build System
A context manager is an object with __enter__ and __exit__ methods which can be used in the with state-
ment:
with manager as var:
    do_something(var)

which is, in the simplest case, equivalent to:

var = manager.__enter__()
try:
    do_something(var)
finally:
    manager.__exit__()
In other words, the context manager protocol defined in PEP 343 permits the extraction of the boring part of a
try..except..finally structure into a separate class leaving only the interesting do_something block.
1. The __enter__ method is called first. It can return a value which will be assigned to var. The as-part is
optional: if it isn't present, the value returned by __enter__ is simply ignored.
2. The block of code underneath with is executed. Just like with try clauses, it can either execute success-
fully to the end, or it can break, continue or return, or it can throw an exception. Either way, after the
block is finished, the __exit__ method is called. If an exception was thrown, the information about the
exception is passed to __exit__, which is described below in the next subsection. In the normal case,
exceptions can be ignored, just like in a finally clause, and will be rethrown after __exit__ is finished.
Let's say we want to make sure that a file is closed immediately after we are done writing to it:
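A hedged sketch of such a hand-written context manager (the stripped example is along these lines):

class closing(object):
    def __init__(self, obj):
        self.obj = obj
    def __enter__(self):
        return self.obj
    def __exit__(self, *args):
        self.obj.close()

with closing(open('/tmp/file', 'w')) as f:
    f.write('the contents\n')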
Here we have made sure that f.close() is called when the with block is exited. Since closing files is such
a common operation, support for this is already present in the file class. It has an __exit__ method
which calls close and can be used as a context manager itself:
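For example:

with open('/tmp/file', 'a') as f:
    f.write('more contents\n')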
The common use for try..finally is releasing resources. Various different cases are implemented similarly:
in the __enter__ phase the resource is acquired, in the __exit__ phase it is released, and the exception, if
thrown, is propagated. As with files, there's often a natural operation to perform after the object has been used
and it is most convenient to have the support built in. With each release, Python provides support in more
places:
all file-like objects:
file automatically closed
fileinput, tempfile (py >= 3.2)
bz2.BZ2File, gzip.GzipFile, tarfile.TarFile, zipfile.ZipFile
ftplib, nntplib close connection (py >= 3.2 or 3.3)
locks
multiprocessing.RLock lock and unlock
multiprocessing.Semaphore
memoryview automatically release (py >= 3.2 and 2.7)
decimal.localcontext modify precision of computations temporarily
_winreg.PyHKEY open and close hive key
warnings.catch_warnings kill warnings temporarily
When an exception is thrown in the with-block, it is passed as arguments to __exit__. Three arguments are
used, the same as returned by sys.exc_info(): type, value, traceback. When no exception is thrown, None is
used for all three arguments. The context manager can swallow the exception by returning a true value from
__exit__. Exceptions can be easily ignored, because if __exit__ doesn't use return and just falls off the end,
None is returned, a false value, and therefore the exception is rethrown after __exit__ is finished.
The ability to catch exceptions opens interesting possibilities. A classic example comes from unit-tests: we
want to make sure that some code throws the right kind of exception:
class assert_raises(object):
    # based on pytest and unittest.TestCase
    def __init__(self, type):
        self.type = type
    def __enter__(self):
        pass
    def __exit__(self, type, value, traceback):
        if type is None:
            raise AssertionError('exception expected')
        if issubclass(type, self.type):
            return True  # swallow the expected exception
        raise AssertionError('wrong exception type')

with assert_raises(KeyError):
    {}['foo']
When discussing generators, it was said that we prefer generators to iterators implemented as classes because
they are shorter, sweeter, and the state is stored as local, not instance, variables. On the other hand, as de-
scribed in Bidirectional communication, the flow of data between the generator and its caller can be bidirec-
tional. This includes exceptions, which can be thrown into the generator. We would like to implement context
managers as special generator functions. In fact, the generator protocol was designed to support this use case.
@contextlib.contextmanager
def some_generator(<arguments>):
    <setup>
    try:
        yield <value>
    finally:
        <cleanup>
The contextlib.contextmanager helper takes a generator and turns it into a context manager. The gener-
ator has to obey some rules which are enforced by the wrapper function most importantly it must yield
exactly once. The part before the yield is executed from __enter__, the block of code protected by the con-
text manager is executed when the generator is suspended in yield, and the rest is executed in __exit__. If
an exception is thrown, the interpreter hands it to the wrapper through __exit__ arguments, and the wrap-
per function then throws it at the point of the yield statement. Through the use of generators, the context
manager is shorter and simpler.
@contextlib.contextmanager
def closing(obj):
    try:
        yield obj
    finally:
        obj.close()

@contextlib.contextmanager
def assert_raises(type):
    try:
        yield
    except type:
        return
    except Exception as value:
        raise AssertionError('wrong exception type')
    else:
        raise AssertionError('exception expected')
Prerequisites
NumPy
Cython
Pillow (Python imaging library, used in a couple of examples)
Chapter contents
Life of ndarray
Its. . .
Block of memory
Data types
Indexing scheme: strides
Findings in dissection
Universal functions
What they are?
Exercise: building an ufunc from scratch
Solution: building an ufunc from scratch
Generalized ufuncs
Interoperability features
Sharing multidimensional, typed data
The old buffer protocol
The old buffer protocol
Array interface protocol
Array siblings: chararray, maskedarray, matrix
chararray: vectorized string operations
masked_array missing data
recarray: purely convenience
matrix: convenience?
Summary
Contributing to NumPy/Scipy
Why
Reporting bugs
Contributing to documentation
Contributing features
How to help, in general
8.1.1 Its. . .
ndarray =
block of memory + indexing scheme + data type descriptor
raw data
how to locate an element
typedef struct PyArrayObject {
    PyObject_HEAD

    /* Block of memory */
    char *data;

    /* Data type descriptor */
    PyArray_Descr *descr;

    /* Indexing scheme */
    int nd;
    npy_intp *dimensions;
    npy_intp *strides;

    /* Other stuff */
    PyObject *base;
    int flags;
    PyObject *weakreflist;
} PyArrayObject;
>>> x.__array_interface__['data'][0]
64803824
>>> x.__array_interface__
{'data': (35828928, False),
'descr': [('', '<i4')],
'shape': (4,),
'strides': None,
'typestr': '<i4',
'version': 3}
x is a string (in Python 3 a bytes), we can represent its data as an array of ints:
>>> y.flags
C_CONTIGUOUS : True
F_CONTIGUOUS : True
OWNDATA : False
WRITEABLE : False
ALIGNED : True
UPDATEIFCOPY : False
The owndata and writeable flags indicate status of the memory block.
The descriptor
>>> np.dtype(int).type
<type 'numpy.int64'>
>>> np.dtype(int).itemsize
8
>>> np.dtype(int).byteorder
'='
chunk_id "RIFF"
chunk_size 4-byte unsigned little-endian integer
format "WAVE"
fmt_id "fmt "
fmt_size 4-byte unsigned little-endian integer
audio_fmt 2-byte unsigned little-endian integer
num_channels 2-byte unsigned little-endian integer
sample_rate 4-byte unsigned little-endian integer
byte_rate 4-byte unsigned little-endian integer
block_align 2-byte unsigned little-endian integer
bits_per_sample 2-byte unsigned little-endian integer
data_id "data"
data_size 4-byte unsigned little-endian integer
See also:
wavreader.py
>>> wav_header_dtype['format']
dtype('S4')
>>> wav_header_dtype.fields
dict_proxy({'block_align': (dtype('uint16'), 32), 'format': (dtype('S4'), 8),
            'data_id': (dtype(('S1', (2, 2))), 36), 'fmt_id': (dtype('S4'), 12),
            'byte_rate': (dtype('uint32'), 28), 'chunk_id': (dtype('S4'), 0),
            'num_channels': (dtype('uint16'), 22), 'sample_rate': (dtype('uint32'), 24),
            'bits_per_sample': (dtype('uint16'), 34), 'chunk_size': (dtype('uint32'), 4),
            'fmt_size': (dtype('uint32'), 16), 'data_size': (dtype('uint32'), 40),
            'audio_fmt': (dtype('uint16'), 20)})
>>> wav_header_dtype.fields['format']
(dtype('S4'), 8)
The first element is the sub-dtype in the structured data, corresponding to the name format
The second one is its offset (in bytes) from the beginning of the item
Exercise
Mini-exercise, make a sparse dtype by using offsets, and only some of the fields:
>>> wav_header_dtype = np.dtype(dict(
... names=['format', 'sample_rate', 'data_id'],
... offsets=[offset_1, offset_2, offset_3], # counted from start of structure in bytes
... formats=list of dtypes for each of the fields,
... ))
and use that to read the sample rate, and data_id (as sub-array).
>>> wav_header['data_id']
array([[['d', 'a'],
['t', 'a']]],
dtype='|S1')
>>> wav_header.shape
(1,)
>>> wav_header['data_id'].shape
(1, 2, 2)
Note: There are existing modules such as wavfile, audiolab, etc. for loading sound data. . .
casting
on assignment
on array construction
on arithmetic
etc.
and manually: .astype(dtype)
data re-interpretation
manually: .view(dtype)
Casting
Re-interpretation / viewing
4 of uint8, OR,
4 of int8, OR,
2 of int16, OR,
1 of int32, OR,
1 of float32, OR,
...
How to switch from one to another?
1. Switch the dtype:
>>> y = x.view("<i4")
>>> y
array([67305985], dtype=int32)
>>> 0x04030201
67305985
Note:
.view() makes views, does not copy (or alter) the memory block
only changes the dtype (and adjusts array shape):
>>> x[1] = 5
>>> y
array([328193], dtype=int32)
>>> y.base is x
True
See also:
view-colors.py
You have RGBA data in an array:
where the last dimension contains the R, G, B, and alpha channels.
How to make a (10, 10) structured array with field names r, g, b, a without copying data?
>>> y = ...
Solution
What happened?
. . . we need to look into what x[0,1] actually means
>>> 0x0301, 0x0402
(769, 1026)
Main point
The question:
>>> x.strides
(3, 1)
>>> byte_offset = 3*1 + 1*2 # to find x[1, 2]
>>> x.flat[byte_offset]
6
>>> x[1, 2]
6
simple, flexible
>>> str(x.data)
'\x01\x00\x02\x00\x03\x00\x04\x00\x05\x00\x06\x00'
Transposition does not affect the memory layout of the data, only strides
>>> x.strides
(2, 1)
>>> y.strides
(1, 2)
>>> str(x.data)
'\x01\x02\x03\x04'
>>> str(y.data)
'\x01\x03\x02\x04'
Everything can be represented by changing only shape, strides, and possibly adjusting the data
pointer!
Never makes copies of the data
>>> y.strides
(-4,)
>>> y = x[2:]
>>> y.__array_interface__['data'][0] - x.__array_interface__['data'][0]
8
But: not all reshaping operations can be represented by playing with strides:
>>> str(a.data)
'\x00\x01\x02\x03\x04\x05'
>>> b
array([[0, 2, 4],
[1, 3, 5]], dtype=int8)
>>> c = b.reshape(3*2)
>>> c
array([0, 2, 4, 1, 3, 5], dtype=int8)
Here, there is no way to represent the array c given one stride and the block of memory for a. Therefore, the
reshape operation needs to make a copy here.
Stride manipulation
Warning: as_strided does not check that you stay inside the memory block bounds. . .
See also:
stride-fakedims.py
Exercise
Spoiler
Stride can also be 0:
Broadcasting
Doing something useful with it: outer product of [1, 2, 3, 4] and [5, 6, 7]
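A hedged sketch of how x2 and y2 below can be built with stride tricks (shapes and dtypes are assumptions consistent with the printed result):

>>> from numpy.lib.stride_tricks import as_strided
>>> x = np.array([1, 2, 3, 4], dtype=np.int16)
>>> y = np.array([5, 6, 7], dtype=np.int16)
>>> x2 = as_strided(x, strides=(0, 1*2), shape=(3, 4))
>>> y2 = as_strided(y, strides=(1*2, 0), shape=(3, 4))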
>>> x2 * y2
array([[ 5, 10, 15, 20],
[ 6, 12, 18, 24],
[ 7, 14, 21, 28]], dtype=int16)
See also:
stride-diagonals.py
Challenge
Pick diagonal entries of the matrix: (assume C memory order):
However,
>>> y.flags.owndata
False
Note This behavior has changed: before numpy 1.9, np.diag would make a copy.
See also:
stride-diagonals.py
Challenge
Compute the tensor trace:
>>> x = np.arange(5*5*5*5).reshape(5, 5, 5, 5)
>>> s = 0
>>> for i in range(5):
... for j in range(5):
... s += x[j, i, j, i]
Solution
In [1]: x = np.zeros((20000,))
In [2]: y = np.zeros((20000*67,))[::67]
Sometimes,
>>> a -= b
>>> a -= b.copy()
Parts of an Ufunc
1. Provided by user
char types[3]
Making it easier
z ← z² + c
where c = x + i·y is a complex number. This iteration is repeated: if z stays finite no matter how long the
iteration runs, c belongs to the Mandelbrot set.
Make ufunc called mandel(z0, c) that computes:
z = z0
for k in range(iterations):
    z = z*z + c
say, 100 iterations or until z.real**2 + z.imag**2 > 1000. Use it to determine which c are in the
Mandelbrot set.
Our function is a simple one, so make use of the PyUFunc_* helpers.
Write it in Cython
See also:
mandel.pyx, mandelplot.py
#
# Fix the parts marked by TODO
#
#
# Compile this file by (Cython >= 0.12 required because of the complex vars)
#
# cython mandel.pyx
# python setup.py build_ext -i
#
# and try it out with, in this directory,
#
# >>> import mandel
# >>> mandel.mandel(0, 1 + 2j)
#
#
#
# Some points of note:
#
# - It's *NOT* allowed to call any Python functions here.
#
# The Ufunc loop runs with the Python Global Interpreter Lock released.
# Hence, the ``nogil``.
#
# - And so all local variables must be declared with ``cdef``
#
# - Note also that this function receives *pointers* to the data
#
#
# TODO: write the Mandelbrot iteration for one point here,
# as you would write it in Python.
#
# Say, use 100 as the maximum number of iterations, and 1000
# as the cutoff for z.real**2 + z.imag**2.
#
PyUFunc_GG_G)
import_array()
import_ufunc()
#
# Reminder: some pre-made Ufunc loops:
#
# ================ =======================================================
# ``PyUfunc_f_f`` ``float elementwise_func(float input_1)``
# ``PyUfunc_ff_f`` ``float elementwise_func(float input_1, float input_2)``
# ``PyUfunc_d_d`` ``double elementwise_func(double input_1)``
# ``PyUfunc_dd_d`` ``double elementwise_func(double input_1, double input_2)``
# ``PyUfunc_D_D``  ``elementwise_func(complex_double *input, complex_double* output)``
# ``PyUfunc_DD_D`` ``elementwise_func(complex_double *in1, complex_double *in2, complex_double* out)``
# ================ =======================================================
#
# The full list is above.
#
#
# Type codes:
#
# NPY_BOOL, NPY_BYTE, NPY_UBYTE, NPY_SHORT, NPY_USHORT, NPY_INT, NPY_UINT,
# NPY_LONG, NPY_ULONG, NPY_LONGLONG, NPY_ULONGLONG, NPY_FLOAT, NPY_DOUBLE,
# NPY_LONGDOUBLE, NPY_CFLOAT, NPY_CDOUBLE, NPY_CLONGDOUBLE, NPY_DATETIME,
# NPY_TIMEDELTA, NPY_OBJECT, NPY_STRING, NPY_UNICODE, NPY_VOID
#
mandel = PyUFunc_FromFuncAndData(
loop_func,
elementwise_funcs,
input_output_types,
1, # number of supported input types
TODO, # number of input args
TODO, # number of output args
0, # `identity` element, never mind this
"mandel", # function name
"mandel(z, c) -> computes z*z + c", # docstring
0 # unused
)
#
# Some points of note:
#
# - It's *NOT* allowed to call any Python functions here.
#
# The Ufunc loop runs with the Python Global Interpreter Lock released.
# Hence, the ``nogil``.
#
# - And so all local variables must be declared with ``cdef``
#
# - Note also that this function receives *pointers* to the data;
# the "traditional" solution to passing complex variables around
#
# Straightforward iteration
for k in range(100):
z = z*z + c
if z.real**2 + z.imag**2 > 1000:
break
import_array()
import_ufunc()
loop_func[0] = PyUFunc_DD_D
input_output_types[0] = NPY_CDOUBLE
input_output_types[1] = NPY_CDOUBLE
input_output_types[2] = NPY_CDOUBLE
elementwise_funcs[0] = <void*>mandel_single_point
mandel = PyUFunc_FromFuncAndData(
loop_func,
elementwise_funcs,
input_output_types,
1, # number of supported input types
2, # number of input args
1, # number of output args
0, # `identity` element, never mind this
"mandel", # function name
"mandel(z, c) -> computes iterated z*z + c", # docstring
0 # unused
)
"""
Plot Mandelbrot
================
"""
import numpy as np
import mandel
x = np.linspace(-1.7, 0.6, 1000)
y = np.linspace(-1.4, 1.4, 1000)
c = x[None,:] + 1j*y[:,None]
z = mandel.mandel(c, c)
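The script stops after computing z; a minimal way to actually display the set (assuming matplotlib is available) might be:

import matplotlib.pyplot as plt
plt.imshow(abs(z)**2 < 1000, extent=[-1.7, 0.6, -1.4, 1.4])
plt.gray()
plt.show()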
loop_funcs[0] = PyUFunc_DD_D
input_output_types[0] = NPY_CDOUBLE
input_output_types[1] = NPY_CDOUBLE
input_output_types[2] = NPY_CDOUBLE
elementwise_funcs[0] = <void*>mandel_single_point
loop_funcs[1] = PyUFunc_FF_F
input_output_types[3] = NPY_CFLOAT
input_output_types[4] = NPY_CFLOAT
input_output_types[5] = NPY_CFLOAT
elementwise_funcs[1] = <void*>mandel_single_point_singleprec
mandel = PyUFunc_FromFuncAndData(
loop_funcs,
elementwise_funcs,
input_output_types,
2, # number of supported input types <----------------
ufunc
output = elementwise_function(input)
Both output and input can be a single array element only.
generalized ufunc
output and input can be arrays with a fixed number of dimensions
For example, matrix trace (sum of diag elements):
(n, n) -> ()
Matrix product:
(m, n), (n, p) -> (m, p)
Status in NumPy
in both examples the last two dimensions became core dimensions, and are modified as per the signature
otherwise, the g-ufunc operates elementwise
matrix multiplication this way could be useful for operating on many small matrices at once
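As an illustration (not from the original text), np.matmul behaves as such a generalized ufunc: the last two axes are the core dimensions, and any leading axes are handled elementwise, which is handy for stacks of small matrices:
>>> a = np.random.rand(10, 3, 3)   # a stack of ten 3x3 matrices
>>> b = np.random.rand(10, 3, 3)
>>> np.matmul(a, b).shape          # signature (m, n), (n, p) -> (m, p)
(10, 3, 3)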
int i;
input_1 += steps[0];
input_2 += steps[1];
output += steps[2];
}
}
Suppose you
1. Write a library that handles (multidimensional) binary data,
2. Want to make it easy to manipulate the data with NumPy, or whatever other library,
3. . . . but would not like to have NumPy as a dependency.
Currently, 3 solutions:
1. the old buffer interface
2. the array interface
3. the new buffer interface (PEP 3118)
Q:
Check what happens if data is now modified, and img saved again.
"""
From buffer
============
Show how to exchange data between numpy and a library that only knows
the buffer interface.
"""
import numpy as np
import Image
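# (Minimal stand-in for the buffer creation and first save, assuming a
#  200x200 RGBA image, so that the lines below can run.)
x = np.zeros((200, 200, 4), dtype=np.int8)
x[:, :, 3] = 255          # fully opaque
img = Image.frombuffer("RGBA", (200, 200), x, "raw", "RGBA", 0, 1)
img.save('test.png')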
#
# Modify the original data, and save again.
#
# It turns out that PIL, which knows next to nothing about Numpy,
# happily shares the same data.
#
x[:,:,1] = 254
img.save('test2.png')
Multidimensional buffers
Data type information present
NumPy-specific approach; slowly deprecated (but not going away)
Not integrated in Python otherwise
See also:
Documentation: http://docs.scipy.org/doc/numpy/reference/arrays.interface.html
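A quick look at the dictionary an ndarray itself exposes (the data address and the exact typestr depend on the platform):
>>> x = np.array([[1, 2], [3, 4]])
>>> x.__array_interface__   # doctest: +SKIP
{'data': (30571392, False),
 'descr': [('', '<i8')],
 'shape': (2, 2),
 'strides': None,
 'typestr': '<i8',
 'version': 3}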
Note: .view() has a second meaning: it can make an ndarray an instance of a specialized ndarray subclass
Masked arrays are arrays that may have missing or invalid entries.
For example, suppose we have an array where the fourth entry is invalid:
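The construction itself is not shown here; it would look like this (values chosen to match the outputs that follow):
>>> x = np.array([1, 2, 3, -99, 5])
>>> mx = np.ma.masked_array(x, mask=[0, 0, 0, 1, 0])
>>> mx
masked_array(data = [1 2 3 -- 5],
mask = [False False False True False],
fill_value = 999999)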
>>> mx.mean()
2.75
>>> np.mean(mx)
2.75
Warning: Not all NumPy functions respect masks, for instance np.dot, so check the return types.
>>> mx[1] = 9
>>> x
array([ 1, 9, 3, -99, 5])
The mask
>>> mx[1] = 9
>>> mx
masked_array(data = [1 9 3 -- 5],
mask = [False False False True False],
fill_value = 999999)
>>> mx.mask
array([False, False, False, True, False], dtype=bool)
The masked entries can be filled with a given value to get a usual array back:
>>> x2 = mx.filled(-1)
>>> x2
array([ 1, 9, 3, -1, 5])
Domain-aware functions
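The np.ma module also provides domain-aware versions of common functions that mask invalid inputs automatically; a small illustration (not from the original text):
>>> np.ma.log(np.array([1., -1., 2., 0.]))
masked_array(data = [0.0 -- 0.69314718... --],
mask = [False True False True],
fill_value = 1e+20)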
Note: Streamlined and more seamless support for dealing with missing data in arrays is making its way into
NumPy 1.7. Stay tuned!
Canadian rangers were distracted when counting hares and lynxes in 1903-1910 and 1917-1918, and got
the numbers wrong. (Carrot farmers stayed alert, though.) Compute the mean populations over time,
ignoring the invalid numbers.
>>> data = np.loadtxt('data/populations.txt')
>>> populations = np.ma.masked_array(data[:,1:])
>>> year = data[:, 0]
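The step that actually masks the bad years is not shown here; it would look something like this (the year ranges follow the statement of the exercise):
>>> bad_years = (((year >= 1903) & (year <= 1910))
...              | ((year >= 1917) & (year <= 1918)))
>>> populations[bad_years, 0] = np.ma.masked   # hares
>>> populations[bad_years, 1] = np.ma.masked   # lynxes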
>>> populations.mean(axis=0)
masked_array(data = [40472.72727272727 18627.272727272728 42400.0],
mask = [False False False],
fill_value = 1e+20)
>>> populations.std(axis=0)
masked_array(data = [21087.656489006717 15625.799814240254 3322.5062255844787],
mask = [False False False],
fill_value = 1e+20)
>>> arr = np.array([('a', 1), ('b', 2)], dtype=[('x', 'S1'), ('y', int)])
>>> arr2 = arr.view(np.recarray)
>>> arr2.x
chararray(['a', 'b'],
dtype='|S1')
>>> arr2.y
array([1, 2])
always 2-D
* is the matrix product, not the elementwise one
8.5 Summary
8.6.1 Why
There's a bug?
I don't understand what this is supposed to do?
I have this fancy code. Would you like to have it?
I'd like to help! What can I do?
>>> np.random.permutation(12)
array([11, 5, 8, 4, 6, 1, 9, 3, 7, 2, 10, 0])
>>> np.random.permutation(12.)
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "mtrand.pyx", line 3311, in mtrand.RandomState.permutation
File "mtrand.pyx", line 3254, in mtrand.RandomState.shuffle
TypeError: len() of unsized object
I'm using NumPy 1.4.1, built from the official tarball, on Windows
64 with Visual studio 2008, on Python.org 64-bit Python.
>>> print(np.__version__)
1...
>>> print(np.__file__)
/...
1. Documentation editor
http://docs.scipy.org/doc/numpy
Registration
Register an account
Subscribe to scipy-dev mailing list (subscribers-only)
Problem with mailing lists: you get mail
To: scipy-dev@scipy.org
Hi,
Cheers,
N. N.
<edit stuff>
git commit -a
Prerequisites
Numpy
IPython
nosetests
pyflakes
gdb for the C-debugging part.
Chapter contents
Avoiding bugs
Coding best practices to avoid getting in trouble
pyflakes: fast static analysis
Debugging workflow
Using the Python debugger
Invoking the debugger
Brian Kernighan
Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever
as you can be when you write it, how will you ever debug it?
In TextMate
Menu: TextMate -> Preferences -> Advanced -> Shell variables, add a shell variable:
TM_PYCHECKER = /Library/Frameworks/Python.framework/Versions/Current/bin/pyflakes
autocmd FileType python let &mp = 'echo "*** running % ***" ; pyflakes %'
autocmd FileType tex,mp,rst,python imap <Esc>[15~ <C-O>:make!^M
autocmd FileType tex,mp,rst,python map <Esc>[15~ :make!^M
autocmd FileType tex,mp,rst,python set autowrite
(define-minor-mode pyflakes-mode
"Toggle pyflakes mode.
With no argument, this command toggles the mode.
Non-null prefix argument turns on the mode.
Null prefix argument turns off the mode."
;; The initial value.
nil
;; The indicator for the mode line.
" Pyflakes"
;; The minor mode bindings.
'( ([f5] . pyflakes-thisfile) )
)
In vim
Use the pyflakes.vim plugin:
1. download the zip file from http://www.vim.org/scripts/script.php?script_id=2441
2. extract the files in ~/.vim/ftplugin/python
3. make sure your vimrc has filetype plugin indent on
Alternatively: use the syntastic plugin. This can be configured to use flake8 too and also handles
on-the-fly checking for many other languages.
(add-to-list 'flymake-allowed-file-name-masks
'("\\.py\\'" flymake-pyflakes-init)))
If you do have a non-trivial bug, this is when debugging strategies kick in. There is no silver bullet. Yet, strategies
help:
For debugging a given problem, the favorable situation is when the problem is isolated in a
small number of lines of code, outside framework or application code, with short modify-run-fail cycles
1. Make it fail reliably. Find a test case that makes the code fail every time.
2. Divide and Conquer. Once you have a failing test case, isolate the failing code.
Which module.
Which function.
Which line of code.
=> isolate a small reproducible failure: a test case
3. Change one thing at a time and re-run the failing test case.
4. Use the debugger to understand what is going wrong.
5. Take notes and be patient. It may take a while.
Note: Once you have gone through this process (isolated a tight piece of code reproducing the bug, and fixed the
bug using this piece of code), add the corresponding code to your test suite.
The python debugger, pdb: https://docs.python.org/library/pdb.html, allows you to inspect your code interactively.
Specifically it allows you to:
View the source code.
Walk up and down the call stack.
Inspect values of variables.
Modify values of variables.
Set breakpoints.
Yes, print statements do work as a debugging tool. However, to inspect the state of a running program, it is
often more efficient to use the debugger.
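A breakpoint can also be hard-coded with pdb.set_trace(); a minimal sketch (the function name here is just for illustration):

import pdb

def fragile(x):
    pdb.set_trace()    # execution stops here; inspect x, then type 'c' to continue
    return 1.0 / (x - 1)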
Postmortem
/home/varoquau/dev/scipy-lecture-notes/advanced/debugging/index_error.py in index_error()
3 def index_error():
4 lst = list('foobar')
----> 5 print lst[len(lst)]
6
7 if __name__ == '__main__':
In [2]: %debug
> /home/varoquau/dev/scipy-lecture-notes/advanced/debugging/index_error.py(5)index_error()
4 lst = list('foobar')
----> 5 print lst[len(lst)]
ipdb> list
1 """Small snippet to raise an IndexError."""
2
3 def index_error():
4 lst = list('foobar')
----> 5 print lst[len(lst)]
6
7 if __name__ == '__main__':
8 index_error()
9
ipdb> len(lst)
6
ipdb> print lst[len(lst)-1]
r
ipdb> quit
In [3]:
In some situations you cannot use IPython, for instance to debug a script that wants to be called from the
command line. In this case, you can call the script with python -m pdb script.py:
$ python -m pdb index_error.py
> /home/varoquau/dev/scipy-lecture-notes/advanced/optimizing/index_error.py(1)<module>()
-> """Small snippet to raise an IndexError."""
(Pdb) continue
Traceback (most recent call last):
File "/usr/lib/python2.6/pdb.py", line 1296, in main
pdb._runscript(mainpyfile)
File "/usr/lib/python2.6/pdb.py", line 1215, in _runscript
self.run(statement)
File "/usr/lib/python2.6/bdb.py", line 372, in run
exec cmd in globals, locals
File "<string>", line 1, in <module>
File "index_error.py", line 8, in <module>
index_error()
File "index_error.py", line 5, in index_error
print lst[len(lst)]
IndexError: list index out of range
Uncaught exception. Entering post mortem debugging
Running 'cont' or 'step' will restart the program
> /home/varoquau/dev/scipy-lecture-notes/advanced/optimizing/index_error.py(5)index_error()
-> print lst[len(lst)]
(Pdb)
Step-by-step execution
Situation: You believe a bug exists in a module but are not sure where.
For instance we are trying to debug wiener_filtering.py. Indeed the code runs, but the filtering does not
work well.
Run the script in IPython with the debugger using %run -d wiener_filtering.py:
ipdb> n
> /home/varoquau/dev/scipy-lecture-notes/advanced/optimizing/wiener_filtering.py(4)
,<module>()
3
1---> 4 import numpy as np
5 import scipy as sp
ipdb> b 34
Breakpoint 2 at /home/varoquau/dev/scipy-lecture-notes/advanced/optimizing/wiener_
,filtering.py:34
ipdb> c
> /home/varoquau/dev/scipy-lecture-notes/advanced/optimizing/wiener_filtering.
,py(34)iterated_wiener()
33 """
2--> 34 noisy_img = noisy_img
35 denoised_img = local_mean(noisy_img, size=size)
Step into code with n(ext) and s(tep): next jumps to the next statement in the current execution
context, while step will go across execution contexts, i.e. enable exploring inside function calls:
ipdb> s
> /home/varoquau/dev/scipy-lecture-notes/advanced/optimizing/wiener_filtering.
,py(35)iterated_wiener()
2 34 noisy_img = noisy_img
---> 35 denoised_img = local_mean(noisy_img, size=size)
36 l_var = local_var(noisy_img, size=size)
ipdb> n
> /home/varoquau/dev/scipy-lecture-notes/advanced/optimizing/wiener_filtering.
,py(36)iterated_wiener()
35 denoised_img = local_mean(noisy_img, size=size)
---> 36 l_var = local_var(noisy_img, size=size)
37 for i in range(3):
ipdb> n
> /home/varoquau/dev/scipy-lecture-notes/advanced/optimizing/wiener_filtering.
,py(37)iterated_wiener()
36 l_var = local_var(noisy_img, size=size)
---> 37 for i in range(3):
38 res = noisy_img - denoised_img
ipdb> print l_var
[[5868 5379 5316 ..., 5071 4799 5149]
[5013 363 437 ..., 346 262 4355]
[5379 410 344 ..., 392 604 3377]
...,
[ 435 362 308 ..., 275 198 1632]
[ 548 392 290 ..., 248 263 1653]
[ 466 789 736 ..., 1835 1725 1940]]
Oh dear, nothing but integers, and 0 variation. Here is our bug, we are doing integer arithmetic.
When we run the wiener_filtering.py file, the following warnings are raised:
In [2]: %run wiener_filtering.py
wiener_filtering.py:40: RuntimeWarning: divide by zero encountered in divide
noise_level = (1 - noise/l_var )
We can turn these warnings into exceptions, which enables us to do post-mortem debugging on them, and find
our problem more quickly:
In [3]: np.seterr(all='raise')
Out[3]: {'divide': 'print', 'invalid': 'print', 'over': 'print', 'under': 'ignore'}
In [4]: %run wiener_filtering.py
---------------------------------------------------------------------------
FloatingPointError Traceback (most recent call last)
/home/esc/anaconda/lib/python2.7/site-packages/IPython/utils/py3compat.pyc in execfile(fname,
, *where)
176 else:
177 filename = fname
--> 178 __builtin__.execfile(filename, *where)
/home/esc/physique-cuso-python-2013/scipy-lecture-notes/advanced/debugging/wiener_filtering.
,py in <module>()
55 pl.matshow(noisy_face[cut], cmap=pl.cm.gray)
56
---> 57 denoised_face = iterated_wiener(noisy_face)
58 pl.matshow(denoised_face[cut], cmap=pl.cm.gray)
59
/home/esc/physique-cuso-python-2013/scipy-lecture-notes/advanced/debugging/wiener_filtering.
,py in iterated_wiener(noisy_img, size)
38 res = noisy_img - denoised_img
39 noise = (res**2).sum()/res.size
---> 40 noise_level = (1 - noise/l_var )
41 noise_level[noise_level<0] = 0
42 denoised_img += noise_level*res
Warning: When running nosetests, the output is captured, and thus it seems that the debugger does not
work. Simply run the nosetests with the -s flag.
For stepping through code and inspecting variables, you might find it more convenient to use a graph-
ical debugger such as winpdb.
Alternatively, pudb is a good semi-graphical debugger with a text user interface in the console.
Also, the pydbgr project is probably worth looking at.
ipdb> help
Undocumented commands:
======================
retval rv
If you have a segmentation fault, you cannot debug it with pdb, as it crashes the Python interpreter before it
can drop in the debugger. Similarly, if you have a bug in C code embedded in Python, pdb is useless. For this
we turn to the gnu debugger, gdb, available on Linux.
Before we start with gdb, let us add a few Python-specific tools to it. For this we add a few macros to our
~/.gdbinit. The optimal choice of macro depends on your Python version and your gdb version. I have added a
simplified version in gdbinit, but feel free to read DebuggingWithGdb.
To debug with gdb the Python script segfault.py, we can run the script in gdb as follows
$ gdb python
...
(gdb) run segfault.py
Starting program: /usr/bin/python segfault.py
[Thread debugging using libthread_db enabled]
We get a segfault, and gdb captures it for post-mortem debugging in the C level stack (not the Python call
stack). We can debug the C call stack using gdb's commands:
(gdb) up
#1 0x004af4f5 in _copy_from_same_shape (dest=<value optimized out>,
src=<value optimized out>, myfunc=0x496780 <_strided_byte_copy>,
swap=0)
at numpy/core/src/multiarray/ctors.c:748
748 myfunc(dit->dataptr, dest->strides[maxaxis],
As you can see, right now, we are in the C code of numpy. We would like to know what Python code
triggers this segfault, so we go up the stack until we hit the Python execution loop:
(gdb) up
#8 0x080ddd23 in call_function (f=
Frame 0x85371ec, for file /home/varoquau/usr/lib/python2.6/site-packages/numpy/core/
,arrayprint.py, line 156, in _leading_trailing (a=<numpy.ndarray at remote 0x85371b0>, _nc=
,<module at remote 0xb7f93a64>), throwflag=0)
at ../Python/ceval.c:3750
3750 ../Python/ceval.c: No such file or directory.
in ../Python/ceval.c
(gdb) up
#9 PyEval_EvalFrameEx (f=
Frame 0x85371ec, for file /home/varoquau/usr/lib/python2.6/site-packages/numpy/core/
,arrayprint.py, line 156, in _leading_trailing (a=<numpy.ndarray at remote 0x85371b0>, _nc=
,<module at remote 0xb7f93a64>), throwflag=0)
at ../Python/ceval.c:2412
2412 in ../Python/ceval.c
(gdb)
Once we are in the Python execution loop, we can use our special Python helper function. For instance we can
find the corresponding Python code:
(gdb) pyframe
/home/varoquau/usr/lib/python2.6/site-packages/numpy/core/arrayprint.py (158): _leading_
,trailing
(gdb)
This is numpy code, we need to go up until we find code that we have written:
(gdb) up
...
(gdb) up
#34 0x080dc97a in PyEval_EvalFrameEx (f=
Frame 0x82f064c, for file segfault.py, line 11, in print_big_array (small_array=<numpy.
,ndarray at remote 0x853ecf0>, big_array=<numpy.ndarray at remote 0x853ed20>), throwflag=0)
,at ../Python/ceval.c:1630
1630 ../Python/ceval.c: No such file or directory.
in ../Python/ceval.c
(gdb) pyframe
segfault.py (12): print_big_array
def make_big_array(small_array):
big_array = stride_tricks.as_strided(small_array,
shape=(2e6, 2e6), strides=(32, 32))
return big_array
def print_big_array(small_array):
    big_array = make_big_array(small_array)
    print big_array[-10:]
Thus the segfault happens when printing big_array[-10:]. The reason is simply that big_array has been
allocated with its end outside the program memory.
Note: For a list of Python-specific commands defined in the gdbinit, read the source of this file.
Wrap up exercise
The following script is well documented and hopefully legible. It seeks to answer a problem of actual interest
for numerical computing, but it does not work. . . Can you debug it?
Python source code: to_debug.py
Donald Knuth
"Premature optimization is the root of all evil."
Prerequisites
line_profiler
Chapters contents
Optimization workflow
Profiling Python code
Timeit
Profiler
Line-profiler
Making code go faster
Algorithmic optimization
10.2.1 Timeit
In [2]: a = np.arange(1000)
In [3]: %timeit a ** 2
100000 loops, best of 3: 5.73 us per loop
In [5]: %timeit a * a
100000 loops, best of 3: 5.56 us per loop
Note: For long running calls, use %time instead of %timeit; it is less precise but faster.
10.2.2 Profiler
Useful when you have a large program to profile, for example the following file:
# For this example to run, you also need the 'ica.py' file
import numpy as np
from scipy import linalg
from ica import fastica
def test():
data = np.random.random((5000, 100))
u, s, v = linalg.svd(data)
pca = np.dot(u[:, :10].T, data)
results = fastica(pca.T, whiten=False)
if __name__ == '__main__':
test()
Note: This is a combination of two unsupervised learning techniques, principal component analysis (PCA)
and independent component analysis (ICA). PCA is a technique for dimensionality reduction, i.e. an algorithm
to explain the observed variance in your data using fewer dimensions. ICA is a source separation technique,
for example to unmix multiple signals that have been recorded through multiple sensors. Doing a PCA first
and then an ICA can be useful if you have more sensors than signals. For more information see: the FastICA
example from scikit-learn.
To run it, you also need to download the ica module. In IPython we can time the script:
Clearly the svd (in decomp.py) is what takes most of our time, a.k.a. the bottleneck. We have to find a way to
make this step go faster, or to avoid this step (algorithmic optimization). Spending time on the rest of the code
is useless.
Similar profiling can be done outside of IPython, simply calling the built-in Python profilers cProfile and
profile.
$ python -m cProfile -o demo.prof demo.py
Using the -o switch will output the profiler results to the file demo.prof to view with an external tool. This
can be useful if you wish to process the profiler output with a visualization tool.
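For instance, the saved profile can be inspected later with the standard pstats module (a short sketch):

import pstats

p = pstats.Stats('demo.prof')
p.sort_stats('cumulative').print_stats(10)   # the ten most expensive calls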
10.2.3 Line-profiler
The profiler tells us which function takes most of the time, but not where it is called.
For this, we use the line_profiler: in the source file, we decorate a few functions that we want to inspect with
@profile (no need to import it)
@profile
def test():
data = np.random.random((5000, 100))
u, s, v = linalg.svd(data)
pca = np.dot(u[:, :10].T, data)
results = fastica(pca.T, whiten=False)
Then we run the script using the kernprof.py program, with switches -l, --line-by-line and -v, --view
to use the line-by-line profiler and view the results in addition to saving them:
$ kernprof.py -l -v demo.py
File: demo.py
Function: test at line 5
Total time: 14.2793 s
The SVD is taking all the time. We need to optimise this line.
Once we have identified the bottlenecks, we need to make the corresponding code go faster.
The first thing to look for is algorithmic optimization: are there ways to compute less, or better?
For a high-level view of the problem, a good understanding of the maths behind the algorithm helps. However,
it is not uncommon to find simple changes, like moving computation or memory allocation outside a for
loop, that bring in big gains.
In both examples above, the SVD - Singular Value Decomposition - is what takes most of the time. Indeed, the
computational cost of this algorithm is roughly n³ in the size of the input matrix.
However, in both of these examples, we are not using all the output of the SVD, but only the first few rows of
its first return argument. If we use the svd implementation of scipy, we can ask for an incomplete version of
the SVD. Note that implementations of linear algebra in scipy are richer than those in numpy and should be
preferred.
def test():
data = np.random.random((5000, 100))
u, s, v = linalg.svd(data, full_matrices=False)
pca = np.dot(u[:, :10].T, data)
results = fastica(pca.T, whiten=False)
Real incomplete SVDs, e.g. computing only the first 10 eigenvectors, can be computed with arpack, available
in scipy.sparse.linalg.eigsh.
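As an illustration (assuming the same data array as above), a truncated SVD keeping only the ten largest singular values can be computed with scipy.sparse.linalg.svds:

from scipy.sparse.linalg import svds

u, s, vt = svds(data, k=10)   # only the 10 largest singular triplets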
For certain algorithms, many of the bottlenecks will be linear algebra computations. In this case, using
the right function to solve the right problem is key. For instance, an eigenvalue problem with a symmetric
matrix is easier to solve than with a general matrix. Also, most often, you can avoid inverting a matrix and
use a less costly (and more numerically stable) operation.
Know your computational linear algebra. When in doubt, explore scipy.linalg, and use %timeit to try
out different alternatives on your data.
A complete discussion on advanced use of numpy is found in chapter Advanced NumPy, or in the article The
NumPy array: a structure for efficient numerical computation by van der Walt et al. Here we discuss only some
commonly encountered tricks to make code faster.
Vectorizing for loops
Find tricks to avoid for loops using numpy arrays. For this, masks and indices arrays can be useful.
Broadcasting
Use broadcasting to do operations on arrays as small as possible before combining them.
In place operations
In [1]: a = np.zeros(1e7)
note: we need global a in the timeit so that it works: %timeit assigns to a, and would otherwise consider it a
local variable.
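In the IPython session above, the comparison would look something like this (a sketch; the in-place variant avoids allocating a temporary array):

In [2]: %timeit global a ; a = 0*a    # allocates a temporary
In [3]: %timeit global a ; a *= 0     # done in place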
Be easy on the memory: use views, and not copies
Copying big arrays is as costly as making simple numerical operations on them:
In [1]: a = np.zeros(1e7)
In [3]: %timeit a + 1
10 loops, best of 3: 112 ms per loop
In [4]: c.strides
Out[4]: (80000, 8)
This is the reason why Fortran ordering or C ordering may make a big difference on operations:
In [8]: c = np.ascontiguousarray(a.T)
Note that copying the data to work around this effect may not be worth it:
Using numexpr can be useful to automatically optimize code for such effects.
Use compiled code
The last resort, once you are sure that all the high-level optimizations have been explored, is to transfer
the hot spots, i.e. the few lines or functions in which most of the time is spent, to compiled code. For
compiled code, the preferred option is to use Cython: it is easy to transform exiting Python code in
compiled code, and with a good use of the numpy support yields efficient code on numpy arrays, for
instance by unrolling loops.
Warning: For all the above: profile and time your choices. Don't base your optimization on theoretical
considerations.
If you need to profile memory usage, you could try the memory_profiler
If you need to profile down into C extensions, you could try using gperftools from Python with yep.
If you would like to track performance of your code across time, i.e. as you make new commits to your
repository, you could try: vbench
If you need some interactive visualization, why not try RunSnakeRun
11.1 Introduction
11.1.4 Prerequisites
numpy
scipy
matplotlib (optional)
ipython (the enhancements come handy)
* passing a sparse matrix object to NumPy functions expecting ndarray/matrix does not
work
Examples
(1, 0) 5
(2, 1) 6
(3, 2) 7
(0, 2) 11
(1, 3) 12
>>> mtx.todense()
matrix([[ 1, 0, 11, 0],
[ 5, 2, 0, 12],
[ 0, 6, 3, 0],
[ 0, 0, 7, 4]])
offset: row
2: 9
1: --10------
0: 1 . 11 .
-1: 5 2 . 12
-2: . 6 3 .
-3: . . 7 4
---------8
matrix-vector multiplication
Examples
Examples
>>> mtx[1, 1]
0.0
>>> mtx[1, 1:3]
<1x2 sparse matrix of type '<... 'numpy.float64'>'
with 1 stored elements in Dictionary Of Keys format>
>>> mtx[1, 1:3].todense()
matrix([[ 0., 1.]])
>>> mtx[[2,1], 1:3].todense()
matrix([[ 1., 0.],
[ 0., 1.]])
Examples
no slicing. . . :
>>> mtx[2, 3]
Traceback (most recent call last):
...
TypeError: 'coo_matrix' object ...
row oriented
three NumPy arrays: indices, indptr, data
Examples
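For instance (a small sketch, not from the original; the same matrix reappears in the BSR examples below):
>>> import numpy as np
>>> from scipy import sparse
>>> mtx = sparse.csr_matrix(np.array([[1, 0, 2],
...                                   [0, 0, 3],
...                                   [4, 5, 6]]))
>>> mtx.data
array([1, 2, 3, 4, 5, 6])
>>> mtx.indices
array([0, 2, 2, 0, 1, 2], dtype=int32)
>>> mtx.indptr
array([0, 2, 3, 6], dtype=int32)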
column oriented
three NumPy arrays: indices, indptr, data
Examples
basically a CSR with dense sub-matrices of fixed shape instead of scalar items
block size (R, C) must evenly divide the shape of the matrix (M, N)
three NumPy arrays: indices, indptr, data
Examples
create empty BSR matrix with (1, 1) block size (like CSR. . . ):
a bug?
create using (data, ij) tuple with (1, 1) block size (like CSR. . . ):
[[2]],
[[3]],
[[4]],
[[5]],
[[6]]]...)
>>> mtx.indices
array([0, 2, 2, 0, 1, 2], dtype=int32)
>>> mtx.indptr
array([0, 2, 3, 6], dtype=int32)
create using (data, indices, indptr) tuple with (2, 2) block size:
[[2, 2],
[2, 2]],
[[3, 3],
[3, 3]],
[[4, 4],
[4, 4]],
[[5, 5],
[5, 5]],
[[6, 6],
[6, 6]]])
11.2.3 Summary
Examples
both superlu and umfpack can be used (if the latter is installed) as follows:
prepare a linear system:
"""
Solve a linear system
=======================
rand = np.random.rand
plt.clf()
plt.spy(mtx, marker='.', markersize=2)
plt.show()
mtx = mtx.tocsr()
rhs = rand(1000)
x = linsolve.spsolve(mtx, rhs)
examples/direct_solve.py
Common Parameters
mandatory:
A [{sparse matrix, dense matrix, LinearOperator}] The N-by-N matrix of the linear system.
b [{array, matrix}] Right hand side of the linear system. Has shape (N,) or (N,1).
optional:
x0 [{array, matrix}] Starting guess for the solution.
tol [float] Relative tolerance to achieve before terminating.
maxiter [integer] Maximum number of iterations. Iteration will stop after maxiter steps even if the specified tolerance has not been achieved.
M [{sparse matrix, dense matrix, LinearOperator}] Preconditioner for A. The preconditioner should approximate the inverse of A. Effective preconditioning dramatically improves the rate of convergence, which implies that fewer iterations are needed to reach a given error tolerance.
callback [function] User-supplied function to call after each iteration. It is called as callback(xk), where
xk is the current solution vector.
LinearOperator Class
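A LinearOperator wraps a matrix-free operator (anything that can compute A @ x) so it can be passed to the iterative solvers; a minimal sketch:

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

def matvec(x):
    # apply a diagonal operator without ever forming the matrix
    return np.arange(1, 1001) * x

A = LinearOperator((1000, 1000), matvec=matvec, dtype=np.float64)
b = np.ones(1000)
x, info = cg(A, b, tol=1e-8)   # info == 0 means the solver converged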
problem specific
often hard to develop
if not sure, try ILU
available in dsolve as spilu()
arpack: a collection of Fortran77 subroutines designed to solve large scale eigenvalue problems
lobpcg (Locally Optimal Block Preconditioned Conjugate Gradient Method): works very well in combination with PyAMG; example by Nathan Bell:
"""
Compute eigenvectors and eigenvalues using a preconditioned eigensolver
========================================================================
"""
import scipy
from scipy.sparse.linalg import lobpcg
# pyamg supplies the Poisson test matrix and the algebraic-multigrid
# preconditioner (minimal stand-in setup so the plotting below can run)
from pyamg import smoothed_aggregation_solver
from pyamg.gallery import poisson
import pylab
N = 100
K = 9
A = poisson((N,N), format='csr')
# create the AMG hierarchy and an initial guess for the K eigenvectors
ml = smoothed_aggregation_solver(A)
X = scipy.rand(A.shape[0], K)
# preconditioner based on ml
M = ml.aspreconditioner()
# compute the K smallest eigenvalues/eigenvectors with LOBPCG
W, V = lobpcg(A, X, M=M, tol=1e-8, largest=False)
pylab.figure(figsize=(9,9))
for i in range(K):
pylab.subplot(3, 3, i+1)
pylab.title('Eigenvector %d ' % i)
pylab.pcolor(V[:,i].reshape(N,N))
pylab.axis('equal')
pylab.axis('off')
pylab.show()
examples/pyamg_with_lobpcg.py
example by Nils Wagner:
examples/lobpcg_sakurai.py
output:
$ python examples/lobpcg_sakurai.py
Results by LOBPCG for n=2500
Exact eigenvalues
Chapters contents
Need to know the shape and dtype of the image (how to separate data bytes).
For large data, use np.memmap for memory mapping:
(data are read from the file, and not loaded into memory)
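A minimal sketch (the file name, dtype and shape here are assumptions for illustration):

# Map a raw file of 2048x2048 float32 values without reading it into memory
data = np.memmap('cube.raw', dtype=np.float32, mode='r', shape=(2048, 2048))
sub = data[100:200, 100:200]   # only the accessed blocks are actually read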
Working on a list of image files
See also:
3-D visualization: Mayavi
See 3D plotting with Mayavi.
Image plane widgets
Isosurfaces
...
np.histogram
Exercise
Local filters: replace the value of pixels by a function of the values of neighboring pixels.
Neighbourhood: square (choose size), disk, or more complicated structuring element.
12.4.1 Blurring/smoothing
Uniform filter
12.4.2 Sharpening
12.4.3 Denoising
Noisy face:
A Gaussian filter smoothes the noise out. . . and the edges as well:
Exercise: denoising
Create a binary image (of 0s and 1s) with several objects (circles, ellipses, squares, or random shapes).
Add some noise (e.g., 20% of noise)
Try two different denoising methods for denoising the image: gaussian filtering and median filtering.
Compare the histograms of the two different denoised images. Which one is the closest to the histogram of the original (noise-free) image?
See also:
More denoising filters are available in skimage.denoising, see the Scikit-image: image processing tutorial.
>>> el = ndimage.generate_binary_structure(2, 1)
>>> el
array([[False, True, False],
[ True, True, True],
[False, True, False]], dtype=bool)
>>> el.astype(np.int)
array([[0, 1, 0],
[1, 1, 1],
[0, 1, 0]])
Erosion = minimum filter. Replace the value of a pixel by the minimal value covered by the structuring element:
>>> a = np.zeros((7, 7), dtype=np.int)
>>> a[1:6, 2:5] = 1
>>> a
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 1, 1, 1, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
>>> ndimage.binary_erosion(a).astype(a.dtype)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
>>> #Erosion removes objects smaller than the structure
>>> ndimage.binary_erosion(a, structure=np.ones((5,5))).astype(a.dtype)
array([[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0],
[0, 0, 0, 0, 0, 0, 0]])
>>> np.random.seed(2)
>>> im = np.zeros((64, 64))
>>> x, y = (63*np.random.random((2, 8))).astype(np.int)
>>> im[x, y] = np.arange(8)
Synthetic data:
12.5.2 Segmentation
>>> n = 10
>>> l = 256
>>> im = np.zeros((l, l))
>>> np.random.seed(1)
>>> points = l*np.random.random((2, n**2))
>>> im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
>>> im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
Exercise
Check that reconstruction operations (erosion + propagation) produce a better result than opening/closing:
>>> eroded_img = ndimage.binary_erosion(binary_img)
>>> reconstruct_img = ndimage.binary_propagation(eroded_img, mask=binary_img)
>>> tmp = np.logical_not(reconstruct_img)
>>> eroded_tmp = ndimage.binary_erosion(tmp)
>>> reconstruct_final = np.logical_not(ndimage.binary_propagation(eroded_tmp, mask=tmp))
>>> np.abs(mask - close_img).mean()
0.00727836...
>>> np.abs(mask - reconstruct_final).mean()
0.00059502...
Exercise
Check how a first denoising step (e.g. with a median filter) modifies the histogram, and check that the
resulting histogram-based segmentation is more accurate.
See also:
More advanced segmentation algorithms are found in the scikit-image: see Scikit-image: image processing.
See also:
Other Scientific Packages provide algorithms that can be useful for image processing. In this example, we use
the spectral clustering function of the scikit-learn in order to segment glued objects.
>>> l = 100
>>> x, y = np.indices((l, l))
>>> # 4 circles
>>> img = circle1 + circle2 + circle3 + circle4
>>> mask = img.astype(bool)
>>> img = img.astype(float)
Synthetic data:
>>> n = 10
>>> l = 256
>>> im = np.zeros((l, l))
When regions are regular blocks, it is more efficient to use stride tricks (Example: fake dimensions with strides).
Non-regularly-spaced blocks: radial mean:
Other measures
Correlation function, Fourier/wavelet spectrum, etc.
One example with mathematical morphology: granulometry
import scipy.misc
import matplotlib.pyplot as plt
f = scipy.misc.face(gray=True)
plt.figure(figsize=(8, 4))
plt.subplot(1, 2, 1)
plt.imshow(f[320:340, 510:530], cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(1, 2, 2)
plt.imshow(f[320:340, 510:530], cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
This example shows how to do image manipulation using common numpy arrays tricks.
import numpy as np
import scipy
import scipy.misc
import matplotlib.pyplot as plt
face = scipy.misc.face(gray=True)
face[10:13, 20:23]
face[100:120] = 255
lx, ly = face.shape
X, Y = np.ogrid[0:lx, 0:ly]
mask = (X - lx/2)**2 + (Y - ly/2)**2 > lx*ly/4
face[mask] = 0
face[range(400), range(400)] = 255
plt.figure(figsize=(3, 3))
plt.axes([0, 0, 1, 1])
plt.imshow(face, cmap=plt.cm.gray)
plt.axis('off')
plt.show()
import numpy as np
import scipy
from scipy import ndimage
import matplotlib.pyplot as plt
f = scipy.misc.face(gray=True)
sx, sy = f.shape
X, Y = np.ogrid[0:sx, 0:sy]
# Radial binning (minimal stand-in setup producing the rbin used below)
r = np.hypot(X - sx/2, Y - sy/2)
rbin = (20*r/r.max()).astype(np.int)
radial_mean = ndimage.mean(f, labels=rbin, index=np.arange(1, rbin.max() + 1))
plt.figure(figsize=(5, 5))
plt.axes([0, 0, 1, 1])
plt.imshow(rbin, cmap=plt.cm.spectral)
plt.axis('off')
plt.show()
An example showing how to use broadcasting to plot the mean of blocks of an image.
import numpy as np
import scipy.misc
from scipy import ndimage
import matplotlib.pyplot as plt
f = scipy.misc.face(gray=True)
sx, sy = f.shape
X, Y = np.ogrid[0:sx, 0:sy]
plt.figure(figsize=(5, 5))
plt.imshow(block_mean, cmap=plt.cm.gray)
plt.axis('off')
plt.show()
import scipy.misc
import matplotlib.pyplot as plt
f = scipy.misc.face(gray=True)
plt.figure(figsize=(10, 3.6))
plt.subplot(131)
plt.imshow(f, cmap=plt.cm.gray)
plt.subplot(132)
plt.imshow(f, cmap=plt.cm.gray, vmin=30, vmax=200)
plt.axis('off')
plt.subplot(133)
plt.imshow(f, cmap=plt.cm.gray)
plt.contour(f, [50, 200])
plt.axis('off')
This example shows how to sharpen an image in noiseless situation by applying the filter inverse to the blur.
import scipy
from scipy import ndimage
import matplotlib.pyplot as plt
f = scipy.misc.face(gray=True).astype(float)
blurred_f = ndimage.gaussian_filter(f, 3)
filter_blurred_f = ndimage.gaussian_filter(blurred_f, 1)
alpha = 30
sharpened = blurred_f + alpha * (blurred_f - filter_blurred_f)
plt.figure(figsize=(12, 4))
plt.subplot(131)
plt.imshow(f, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(132)
plt.imshow(blurred_f, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(133)
plt.imshow(sharpened, cmap=plt.cm.gray)
plt.axis('off')
plt.tight_layout()
plt.show()
import scipy.misc
from scipy import ndimage
import matplotlib.pyplot as plt
face = scipy.misc.face(gray=True)
blurred_face = ndimage.gaussian_filter(face, sigma=3)
very_blurred = ndimage.gaussian_filter(face, sigma=5)
local_mean = ndimage.uniform_filter(face, size=11)
plt.figure(figsize=(9, 3))
plt.subplot(131)
plt.imshow(blurred_face, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(132)
plt.imshow(very_blurred, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(133)
plt.imshow(local_mean, cmap=plt.cm.gray)
plt.axis('off')
plt.show()
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
np.random.seed(1)
n = 10
l = 256
im = np.zeros((l, l))
points = l*np.random.random((2, n**2))
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
# Threshold and label the objects (minimal stand-in setup for the plots below)
mask = im > im.mean()
label_im, nb_labels = ndimage.label(mask)
plt.figure(figsize=(9,3))
plt.subplot(131)
plt.imshow(im)
plt.axis('off')
plt.subplot(132)
plt.imshow(mask, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(133)
plt.imshow(label_im, cmap=plt.cm.spectral)
plt.axis('off')
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
open_square = ndimage.binary_opening(square)
eroded_square = ndimage.binary_erosion(square)
reconstruction = ndimage.binary_propagation(eroded_square, mask=square)
plt.figure(figsize=(9.5, 3))
plt.subplot(131)
plt.imshow(square, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
plt.subplot(132)
plt.imshow(open_square, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
plt.subplot(133)
plt.imshow(reconstruction, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
import numpy as np
import scipy
import scipy.misc
from scipy import ndimage
import matplotlib.pyplot as plt
f = scipy.misc.face(gray=True)
f = f[230:290, 220:320]
noisy = f + 0.4*f.std()*np.random.random(f.shape)
gauss_denoised = ndimage.gaussian_filter(noisy, 2)
med_denoised = ndimage.median_filter(noisy, 3)
plt.figure(figsize=(12,2.8))
plt.subplot(131)
plt.imshow(noisy, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('noisy', fontsize=20)
plt.subplot(132)
plt.imshow(gauss_denoised, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('Gaussian filter', fontsize=20)
plt.subplot(133)
plt.imshow(med_denoised, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('Median filter', fontsize=20)
This example shows how to extract the bounding box of the largest object
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
np.random.seed(1)
n = 10
l = 256
im = np.zeros((l, l))
points = l*np.random.random((2, n**2))
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
# Threshold and label the objects (minimal stand-in setup)
mask = im > im.mean()
label_im, nb_labels = ndimage.label(mask)
# Now that we have only one connected component, extract its bounding box
slice_x, slice_y = ndimage.find_objects(label_im==4)[0]
roi = im[slice_x, slice_y]
plt.figure(figsize=(4, 2))
plt.axes([0, 0, 1, 1])
plt.imshow(roi)
plt.axis('off')
plt.show()
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
np.random.seed(1)
n = 10
l = 256
im = np.zeros((l, l))
points = l*np.random.random((2, n**2))
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
plt.figure(figsize=(6 ,3))
plt.subplot(121)
plt.imshow(label_im, cmap=plt.cm.spectral)
plt.axis('off')
plt.subplot(122)
plt.imshow(label_clean, vmax=nb_labels, cmap=plt.cm.spectral)
plt.axis('off')
import numpy as np
import scipy.misc
from scipy import ndimage
import matplotlib.pyplot as plt
face = scipy.misc.face(gray=True)
lx, ly = face.shape
# Cropping
crop_face = face[lx//4:-lx//4, ly//4:-ly//4]
# up <-> down flip
flip_ud_face = np.flipud(face)
# rotation
rotate_face = ndimage.rotate(face, 45)
rotate_face_noreshape = ndimage.rotate(face, 45, reshape=False)
plt.figure(figsize=(12.5, 2.5))
plt.subplot(151)
plt.imshow(face, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(152)
plt.imshow(crop_face, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(153)
plt.imshow(flip_ud_face, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(154)
plt.imshow(rotate_face, cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(155)
plt.imshow(rotate_face_noreshape, cmap=plt.cm.gray)
plt.axis('off')
plt.show()
This example shows the original image, the noisy image, the denoised one (with the median filter) and the
difference between the original and the denoised image.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
im = np.zeros((20, 20))
im[5:-5, 5:-5] = 1
im = ndimage.distance_transform_bf(im)
im_noise = im + 0.2*np.random.randn(*im.shape)
im_med = ndimage.median_filter(im_noise, 3)
plt.figure(figsize=(16, 5))
plt.subplot(141)
plt.imshow(im, interpolation='nearest')
plt.axis('off')
plt.title('Original image', fontsize=20)
plt.subplot(142)
plt.imshow(im_noise, interpolation='nearest', vmin=0, vmax=5)
plt.axis('off')
plt.title('Noisy image', fontsize=20)
plt.subplot(143)
plt.imshow(im_med, interpolation='nearest', vmin=0, vmax=5)
plt.axis('off')
plt.title('Median filter', fontsize=20)
plt.subplot(144)
plt.imshow(np.abs(im - im_med), cmap=plt.cm.hot, interpolation='nearest')
plt.axis('off')
plt.title('Error', fontsize=20)
plt.show()
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
np.random.seed(1)
n = 10
l = 256
im = np.zeros((l, l))
points = l*np.random.random((2, n**2))
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
# Build a noisy image and its histogram (minimal stand-in setup producing the
# variables used in the plots below)
mask = (im > im.mean()).astype(np.float)
mask += 0.1 * im
img = mask + 0.2*np.random.randn(*mask.shape)
hist, bin_edges = np.histogram(img, bins=60)
bin_centers = 0.5*(bin_edges[:-1] + bin_edges[1:])
binary_img = img > 0.5
plt.figure(figsize=(11,4))
plt.subplot(131)
plt.imshow(img)
plt.axis('off')
plt.subplot(132)
plt.plot(bin_centers, hist, lw=2)
plt.axvline(0.5, color='r', ls='--', lw=2)
plt.text(0.57, 0.8, 'histogram', fontsize=20, transform = plt.gca().transAxes)
plt.yticks([])
plt.subplot(133)
plt.imshow(binary_img, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
im = np.zeros((256, 256))
im[64:-64, 64:-64] = 1
plt.figure(figsize=(16, 5))
plt.subplot(141)
plt.imshow(im, cmap=plt.cm.gray)
plt.axis('off')
plt.title('square', fontsize=20)
plt.subplot(142)
plt.imshow(sx)
plt.axis('off')
plt.title('Sobel (x direction)', fontsize=20)
plt.subplot(143)
plt.imshow(sob)
plt.axis('off')
plt.title('Sobel filter', fontsize=20)
im += 0.07*np.random.random(im.shape)
plt.subplot(144)
plt.imshow(sob)
plt.axis('off')
plt.title('Sobel for noisy image', fontsize=20)
plt.show()
import numpy as np
import scipy
import scipy.misc
import matplotlib.pyplot as plt
try:
from skimage.restoration import denoise_tv_chambolle
except ImportError:
# skimage < 0.12
from skimage.filters import denoise_tv_chambolle
f = scipy.misc.face(gray=True)
f = f[230:290, 220:320]
noisy = f + 0.4*f.std()*np.random.random(f.shape)
# Total-variation denoising (weight chosen for illustration)
tv_denoised = denoise_tv_chambolle(noisy, weight=10)
plt.figure(figsize=(12, 2.8))
plt.subplot(131)
plt.imshow(noisy, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('noisy', fontsize=20)
plt.subplot(132)
plt.imshow(tv_denoised, cmap=plt.cm.gray, vmin=40, vmax=220)
plt.axis('off')
plt.title('TV denoising', fontsize=20)
plt.subplots_adjust(right=1)
plt.show()
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
im = np.zeros((64, 64))
np.random.seed(2)
x, y = (63*np.random.random((2, 8))).astype(np.int)
im[x, y] = np.arange(8)
plt.figure(figsize=(12.5, 3))
plt.subplot(141)
plt.imshow(im, interpolation='nearest', cmap=plt.cm.spectral)
plt.axis('off')
plt.subplot(142)
plt.imshow(bigger_points, interpolation='nearest', cmap=plt.cm.spectral)
plt.axis('off')
plt.subplot(143)
plt.imshow(dist, interpolation='nearest', cmap=plt.cm.spectral)
plt.axis('off')
plt.subplot(144)
plt.imshow(dilate_dist, interpolation='nearest', cmap=plt.cm.spectral)
plt.axis('off')
An example showing how to clean segmentation with mathematical morphology: removing small regions and
holes.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
np.random.seed(1)
n = 10
l = 256
im = np.zeros((l, l))
points = l*np.random.random((2, n**2))
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
# Threshold, add noise, then open and close (minimal stand-in setup producing
# the images plotted below)
mask = im > im.mean()
img = mask + 0.3*np.random.randn(*mask.shape)
binary_img = img > 0.5
open_img = ndimage.binary_opening(binary_img)
close_img = ndimage.binary_closing(open_img)
plt.figure(figsize=(12, 3))
l = 128
plt.subplot(141)
plt.imshow(binary_img[:l, :l], cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(142)
plt.imshow(open_img[:l, :l], cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(143)
plt.imshow(close_img[:l, :l], cmap=plt.cm.gray)
plt.axis('off')
plt.subplot(144)
plt.imshow(mask[:l, :l], cmap=plt.cm.gray)
plt.contour(close_img[:l, :l], [0.5], linewidths=2, colors='r')
plt.axis('off')
plt.show()
This example performs a Gaussian mixture model analysis of the image histogram to find the right thresholds
for separating foreground from background.
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
from sklearn.mixture import GMM
np.random.seed(1)
n = 10
l = 256
im = np.zeros((l, l))
points = l*np.random.random((2, n**2))
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
classif = GMM(n_components=2)
classif.fit(img.reshape((img.size, 1)))
threshold = np.mean(classif.means_)
binary_img = img > threshold
plt.figure(figsize=(11,4))
plt.subplot(131)
plt.imshow(img)
plt.axis('off')
plt.subplot(132)
plt.plot(bin_centers, hist, lw=2)
plt.axvline(0.5, color='r', ls='--', lw=2)
plt.text(0.57, 0.8, 'histogram', fontsize=20, transform = plt.gca().transAxes)
plt.yticks([])
plt.subplot(133)
plt.imshow(binary_img, cmap=plt.cm.gray, interpolation='nearest')
plt.axis('off')
import numpy as np
from skimage.morphology import watershed
from skimage.feature import peak_local_max
import matplotlib.pyplot as plt
from scipy import ndimage
# Generate an initial image with two overlapping circles and compute the
# distance to the background (minimal stand-in setup; values are illustrative)
x, y = np.indices((80, 80))
x1, y1, x2, y2 = 28, 28, 44, 52
r1, r2 = 16, 20
mask_circle1 = (x - x1)**2 + (y - y1)**2 < r1**2
mask_circle2 = (x - x2)**2 + (y - y2)**2 < r2**2
image = np.logical_or(mask_circle1, mask_circle2)
distance = ndimage.distance_transform_edt(image)
local_maxi = peak_local_max(
    distance, indices=False, footprint=np.ones((3, 3)), labels=image)
markers = ndimage.label(local_maxi)[0]
labels = watershed(-distance, markers, mask=image)
plt.figure(figsize=(9, 3.5))
plt.subplot(131)
plt.imshow(image, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.subplot(132)
plt.imshow(-distance, interpolation='nearest')
plt.axis('off')
plt.subplot(133)
plt.imshow(labels, cmap='spectral', interpolation='nearest')
plt.axis('off')
12.8.23 Granulometry
import numpy as np
from scipy import ndimage
import matplotlib.pyplot as plt
def disk_structure(n):
struct = np.zeros((2 * n + 1, 2 * n + 1))
x, y = np.indices((2 * n + 1, 2 * n + 1))
mask = (x - n)**2 + (y - n)**2 <= n**2
struct[mask] = 1
return struct.astype(np.bool)
np.random.seed(1)
n = 10
l = 256
im = np.zeros((l, l))
points = l*np.random.random((2, n**2))
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
im = ndimage.gaussian_filter(im, sigma=l/(4.*n))
plt.figure(figsize=(6, 2.2))
plt.subplot(121)
plt.imshow(mask, cmap=plt.cm.gray)
opened = ndimage.binary_opening(mask, structure=disk_structure(10))
opened_more = ndimage.binary_opening(mask, structure=disk_structure(14))
plt.contour(opened, [0.5], colors='b', linewidths=2)
plt.contour(opened_more, [0.5], colors='r', linewidths=2)
plt.axis('off')
plt.subplot(122)
plt.plot(np.arange(2, 19, 4), granulo, 'ok', ms=8)
import numpy as np
import matplotlib.pyplot as plt
l = 100
x, y = np.indices((l, l))
4 circles
img += 1 + 0.2*np.random.randn(*img.shape)
# Convert the image into a graph with the value of the gradient on the
# edges.
graph = image.img_to_graph(img, mask=mask)
plt.figure(figsize=(6, 3))
plt.subplot(121)
plt.imshow(img, cmap=plt.cm.spectral, interpolation='nearest')
plt.axis('off')
plt.subplot(122)
plt.imshow(label_im, cmap=plt.cm.spectral, interpolation='nearest')
plt.axis('off')
See also:
More on image-processing:
The chapter on Scikit-image
Other, more powerful and complete modules: OpenCV (Python bindings), CellProfiler, ITK with Python
bindings
Prerequisites
Numpy
Scipy
Matplotlib
See also:
References
Mathematical optimization is very . . . mathematical. If you want performance, it really pays to read the books:
Convex Optimization by Boyd and Vandenberghe (pdf available free online).
Numerical Optimization, by Nocedal and Wright. Detailed reference on gradient descent methods.
Practical Methods of Optimization by Fletcher: good at hand-waving explanations.
Chapters contents
Not all optimization problems are equal. Knowing your problem enables you to choose the right tool.
The scale of an optimization problem is pretty much set by the dimensionality of the problem, i.e. the number of scalar variables on which the search is performed.
Optimizing convex functions is easy. Optimizing non-convex functions can be very hard.
Note: It can be proven that for a convex function a local minimum is also a global minimum. Then, in some
sense, the minimum is unique.
Optimizing smooth functions is easier (true in the context of black-box optimization, otherwise Linear Programming is an example of methods which deal very efficiently with piece-wise linear functions).
Noisy gradients
Many optimization methods rely on gradients of the objective function. If the gradient function is not given,
they are computed numerically, which induces errors. In such situation, even if the objective function is not
noisy, a gradient-based optimization may be a noisy optimization.
13.1.4 Constraints
Let's get started by finding the minimum of the scalar function f(x) = -exp[-(x - 0.7)²]. scipy.optimize.
minimize_scalar() uses Brent's method to find the minimum of a function:
Note: You can use different solvers using the parameter method.
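A sketch of the call (using the function defined above):
>>> import numpy as np
>>> from scipy import optimize
>>> def f(x):
...     return -np.exp(-(x - 0.7)**2)
>>> result = optimize.minimize_scalar(f)
>>> result.success   # check that the solver converged
True
>>> result.x
0.69999...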
We can see that very anisotropic (ill-conditioned) functions are harder to optimize.
If you know natural scaling for your variables, prescale them so that they behave similarly. This is related to
preconditioning.
Also, it clearly can be advantageous to take bigger steps. This is done in gradient descent code using a line
search.
A well-conditioned quadratic function.
An ill-conditioned quadratic function.
An ill-conditioned non-quadratic function.
The more a function looks like a quadratic function (elliptic iso-curves), the easier it is to optimize.
The gradient descent algorithms above are toys not to be used on real problems.
As can be seen from the above experiments, one of the problems of the simple gradient descent algorithms is
that they tend to oscillate across a valley, each time following the direction of the gradient, which makes them cross
the valley. The conjugate gradient solves this problem by adding a friction term: each step depends on the two
last values of the gradient, and sharp turns are reduced.
An ill-conditioned non-quadratic function.
scipy provides scipy.optimize.minimize() to find the minimum of scalar functions of one or more vari-
ables. The simple conjugate gradient method can be used by setting the parameter method to CG
Gradient methods need the Jacobian (gradient) of the function. They can compute it numerically, but will
perform better if you can pass them the gradient:
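For instance, on the Rosenbrock-like test function used throughout this chapter (a sketch; the minimum is at (1, 1)):
>>> from scipy import optimize
>>> def f(x):
...     return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
>>> def jacobian(x):
...     return np.array((-(1 - x[0]) - 4*x[0]*(x[1] - x[0]**2),
...                      2*(x[1] - x[0]**2)))
>>> res = optimize.minimize(f, [2, -1], method="CG", jac=jacobian)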
Note that the function has only been evaluated 27 times, compared to 108 without the gradient.
Newton methods use a local quadratic approximation to compute the jump direction. For this purpose, they
rely on the first two derivatives of the function: the gradient and the Hessian.
In scipy, you can use the Newton method by setting method to Newton-CG in scipy.optimize.minimize().
Here, CG refers to the fact that an internal inversion of the Hessian is performed by conjugate gradient
Note that compared to a conjugate gradient (above), Newton's method has required fewer function evaluations,
but more gradient evaluations, as it uses the gradient to approximate the Hessian. Let's compute the Hessian and pass it
to the algorithm:
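A sketch of that call, reusing f and jacobian from above:
>>> def hessian(x):
...     return np.array(((1 - 4*x[1] + 12*x[0]**2, -4*x[0]),
...                      (-4*x[0], 2)))
>>> optimize.minimize(f, [2, -1], method="Newton-CG", jac=jacobian, hess=hessian)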
njev: 20
status: 0
success: True
x: array([ 0.99999..., 0.99999...])
Note: At very high dimension, the inversion of the Hessian can be costly and unstable (large scale > 250).
Note: Newton optimizers should not be confused with Newton's root-finding method, based on the same
principles, scipy.optimize.newton().
BFGS: BFGS (Broyden-Fletcher-Goldfarb-Shanno algorithm) refines at each step an approximation of the Hes-
sian.
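Again with the same test function (a sketch, reusing f and jacobian from above):
>>> res = optimize.minimize(f, [2, -1], method="BFGS", jac=jacobian)
>>> res.x
array([ 1.0000..., 1.0000...])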
import numpy as np
import pylab as pl
np.random.seed(0)
x = np.linspace(-5, 5, 101)
x_ = np.linspace(-5, 5, 31)
def f(x):
return -np.exp(-x**2)
# A smooth function
pl.figure(1, figsize=(3, 2.5))
pl.clf()
pl.ylim(ymin=-1.3)
pl.axis('off')
pl.tight_layout()
pl.show()
import numpy as np
import pylab as pl
# A smooth function
pl.figure(1, figsize=(3, 2.5))
pl.clf()
pl.ylim(ymin=-.2)
pl.axis('off')
pl.tight_layout()
# A non-smooth function
pl.figure(2, figsize=(3, 2.5))
pl.clf()
pl.plot(x, np.abs(x), linewidth=2)
pl.text(-1, 0, '$f$', size=20)
pl.ylim(ymin=-.2)
pl.axis('off')
pl.tight_layout()
pl.show()
import numpy as np
from scipy import optimize
import pylab as pl
np.random.seed(0)
# Minimal stand-in for the model and data generation (illustrative values,
# consistent with the fit below)
def f(t, omega, phi):
    return np.cos(omega * t + phi)
x = np.linspace(0, 3, 50)
y = f(x, 1.5, 1) + .1*np.random.normal(size=50)
t = np.linspace(0, 3, 1000)
# Fit the model: the parameters omega and phi can be found in the
# `params` vector
params, params_cov = optimize.curve_fit(f, x, y)
pl.figure(1)
pl.clf()
pl.plot(x, y, 'bx')
pl.plot(t, f(t, *params), 'r-')
pl.show()
import numpy as np
import pylab as pl
x = np.linspace(-1, 2)
# A convex function
pl.plot(x, x**2, linewidth=2)
pl.text(-.7, -.6**2, '$f$', size=20)
# Convexity as barycenter
pl.plot([.35, 1.85], [.35**2, 1.85**2])
pl.plot([.35, 1.85], [.35**2, 1.85**2], 'k+')
pl.ylim(ymin=-1)
pl.axis('off')
pl.tight_layout()
# Convexity as barycenter
pl.figure(2, figsize=(3, 2.5))
pl.clf()
pl.plot(x, x**2 + np.exp(-5*(x - .5)**2), linewidth=2)
pl.text(-.7, -.6**2, '$f$', size=20)
pl.ylim(ymin=-1)
pl.axis('off')
pl.tight_layout()
pl.show()
An exercise in finding a minimum. This exercise is hard because the function is very flat around the minimum
(all its derivatives are zero). Thus gradient information is unreliable.
The function admits a minimum in [0, 0]. The challenge is to get within 1e-7 of this minimum, starting at x0 =
[1, 1].
The solution that we adopt here is to give up on using the gradient or information based on local differences, and
to rely on the Powell algorithm. With 162 function evaluations, we get to 1e-8 of the solution.
import numpy as np
from scipy import optimize
import pylab as pl
def f(x):
return np.exp(-1/(.01*x[0]**2 + x[1]**2))
# A well-conditioned version of f:
def g(x):
return f([10*x[0], x[1]])
pl.figure(0)
pl.clf()
t = np.linspace(-1.1, 1.1, 100)
pl.plot(t, f([0, t]))
pl.figure(1)
pl.clf()
X, Y = np.mgrid[-1.5:1.5:100j, -1.1:1.1:100j]
pl.imshow(f([X, Y]).T, cmap=pl.cm.gray_r, extent=[-1.5, 1.5, -1.1, 1.1],
origin='lower')
pl.contour(X, Y, f([X, Y]), cmap=pl.cm.gnuplot)
pl.show()
An example showing how to do optimization with general constraints using SLSQP and cobyla.
import numpy as np
import pylab as pl
from scipy import optimize
x, y = np.mgrid[-2.03:4.2:.04, -1.6:3.2:.04]
x = x.T
y = y.T
def f(x):
# Store the list of function calls
accumulator.append(x)
return np.sqrt((x[0] - 3)**2 + (x[1] - 2)**2)
def constraint(x):
return np.atleast_1d(1.5 - np.sum(np.abs(x)))
accumulated = np.array(accumulator)
pl.plot(accumulated[:, 0], accumulated[:, 1])
pl.show()
Out:
Converged at 6
Converged at 23
import numpy as np
import pylab as pl
from scipy import optimize
x = np.linspace(-1, 3, 100)
x_0 = np.exp(-1)
def f(x):
return (x - x_0)**2 + epsilon*np.exp(-5*(x - .5 - x_0)**2)
# A convex function
pl.plot(x, f(x), linewidth=2)
this_x = result.x
all_x.append(this_x)
all_y.append(f(this_x))
if iter < 6:
pl.text(this_x - .05*np.sign(this_x) - .05,
f(this_x) + 1.2*(.3 - iter % 2), iter + 1,
size=12)
pl.figure(figsize=(4, 3))
pl.semilogy(np.abs(all_y - all_y[-1]), linewidth=2)
pl.ylabel('Error on f(x)')
pl.xlabel('Iteration')
pl.tight_layout()
pl.show()
import numpy as np
import pylab as pl
from scipy import optimize
x, y = np.mgrid[-2.9:5.8:.05, -2.5:5:.05]
x = x.T
y = y.T
def f(x):
# Store the list of function calls
accumulator.append(x)
return np.sqrt((x[0] - 3)**2 + (x[1] - 2)**2)
# We don't use the gradient, as with the gradient, L-BFGS is too fast,
# and finds the optimum without showing us a pretty path
def f_prime(x):
    r = np.sqrt((x[0] - 3)**2 + (x[1] - 2)**2)
    return np.array(((x[0] - 3)/r, (x[1] - 2)/r))
accumulated = np.array(accumulator)
pl.plot(accumulated[:, 0], accumulated[:, 1])
pl.show()
import pickle
import sys
import numpy as np
import pylab as pl
results = pickle.load(open(
'helper/compare_optimizers_py%s.pkl' % sys.version_info[0],
'rb'))
n_methods = len(list(results.values())[0]['Rosenbrock '])
n_dims = len(results)
symbols = 'o>*Ds'
pl.xticks(np.arange(n_methods), method_names)
pl.yticks(())
pl.tight_layout()
pl.show()
The challenge here is that the Hessian of the problem is a very ill-conditioned matrix. This can easily be seen, as
the Hessian of the first term is simply 2*np.dot(K.T, K). Thus the conditioning of the problem can be judged
by looking at the conditioning of K.
import time
import numpy as np
from scipy import optimize
import pylab as pl
np.random.seed(0)
K = np.random.normal(size=(100, 100))
def f(x):
return np.sum((np.dot(K, x - 1))**2) + np.sum(x**2)**2
def f_prime(x):
return 2*np.dot(np.dot(K.T, K), x - 1) + 4*np.sum(x**2)*x
def hessian(x):
H = 2*np.dot(K.T, K) + 4*2*x*x[:, np.newaxis]
return H + 4*np.eye(H.shape[0])*np.sum(x**2)
pl.figure(1)
pl.clf()
Z = X, Y = np.mgrid[-1.5:1.5:100j, -1.1:1.1:100j]
# Complete in the additional dimensions with zeros
Z = np.reshape(Z, (2, -1)).copy()
Z.resize((100, Z.shape[-1]))
Z = np.apply_along_axis(f, 0, Z)
Z = np.reshape(Z, X.shape)
pl.imshow(Z.T, cmap=pl.cm.gray_r, extent=[-1.5, 1.5, -1.1, 1.1],
origin='lower')
pl.contour(X, Y, Z, cmap=pl.cm.gnuplot)
t0 = time.time()
x_l_bfgs = optimize.minimize(f, K[0], method="L-BFGS-B").x
print('   L-BFGS: time %.2fs, x error %.2f, f error %.2f' % (time.time() - t0,
      np.sqrt(np.sum((x_l_bfgs - x_ref)**2)), f(x_l_bfgs) - f_ref))
t0 = time.time()
x_bfgs = optimize.minimize(f, K[0], jac=f_prime, method="BFGS").x
print("  BFGS w f': time %.2fs, x error %.2f, f error %.2f" % (
      time.time() - t0, np.sqrt(np.sum((x_bfgs - x_ref)**2)),
      f(x_bfgs) - f_ref))
t0 = time.time()
x_l_bfgs = optimize.minimize(f, K[0], jac=f_prime, method="L-BFGS-B").x
print("L-BFGS w f': time %.2fs, x error %.2f, f error %.2f" % (
      time.time() - t0, np.sqrt(np.sum((x_l_bfgs - x_ref)**2)),
      f(x_l_bfgs) - f_ref))
t0 = time.time()
x_newton = optimize.minimize(f, K[0], jac=f_prime, hess=hessian, method="Newton-CG").x
print("   Newton: time %.2fs, x error %.2f, f error %.2f" % (
      time.time() - t0, np.sqrt(np.sum((x_newton - x_ref)**2)),
      f(x_newton) - f_ref))
pl.show()
An example demoing gradient descent by creating figures that trace the evolution of the optimizer.
import numpy as np
import pylab as pl
from scipy import optimize
import sys, os
sys.path.append(os.path.abspath('helper'))
from cost_functions import mk_quad, mk_gauss, rosenbrock,\
rosenbrock_prime, rosenbrock_hessian, LoggingFunction,\
CountingFunction
def super_fmt(value):
    if value > 1:
        if np.abs(int(value) - value) < .1:
            out = '$10^{%.1i}$' % value
        else:
            out = '$10^{%.1f}$' % value
    else:
        value = np.exp(value - .01)
        if value > .1:
            out = '%1.1f' % value
        elif value > .01:
            out = '%.2f' % value
        else:
            out = '%.2e' % value
    return out
A gradient descent algorithm. Do not use: this is a toy, use scipy's optimize.fmin_cg instead.
all_x_i.append(x)
all_y_i.append(y)
all_f_i.append(f(X))
optimize.minimize(f, x0, method="Nelder-Mead", callback=store, options={"ftol": 1e-12})
return all_x_i, all_y_i, all_f_i
levels = dict()
# Compute a gradient-descent
x_i, y_i = 1.6, 1.1
counting_f_prime = CountingFunction(f_prime)
counting_hessian = CountingFunction(hessian)
logging_f = LoggingFunction(f, counter=counting_f_prime.counter)
all_x_i, all_y_i, all_f_i = optimizer(np.array([x_i, y_i]),
logging_f, counting_f_prime,
hessian=counting_hessian)
pl.xticks(())
pl.yticks(())
pl.xlim(x_min, x_max)
pl.ylim(y_min, y_max)
pl.draw()
Total running time of the script: ( 0 minutes 12.552 seconds)
Download Python source code: plot_gradient_descent.py
Download Jupyter notebook: plot_gradient_descent.ipynb
L-BFGS: Limited-memory BFGS Sits between BFGS and conjugate gradient: in very high dimensions (> 250)
the Hessian matrix is too costly to compute and invert. L-BFGS keeps a low-rank version. In addition, box
bounds are also supported by L-BFGS-B:
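A hedged sketch of using box bounds with L-BFGS-B (function, starting point and bounds chosen only for illustration):
>>> def f(x):
...     return np.sum((x - 1)**2)
>>> optimize.minimize(f, np.array([2., 2.]), method="L-BFGS-B",
...                   bounds=((0, None), (0, None)))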
An ill-conditioned non-quadratic function:
If your problem does not admit a unique local minimum (which can be hard to test unless the function is
convex), and you do not have prior information to initialize the optimization close to the solution, you may
need a global optimizer.
scipy.optimize.brute() evaluates the function on a given grid of parameters and returns the parameters
corresponding to the minimum value. The parameters are specified with ranges given to numpy.mgrid. By
default, 20 steps are taken in each direction:
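For instance (a sketch on a Rosenbrock-like function, used here only for illustration):
>>> def f(x):
...     return .5*(1 - x[0])**2 + (x[1] - x[0]**2)**2
>>> optimize.brute(f, ((-1, 2), (-1, 2)))   # returns a point close to the minimum at (1, 1)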
In general, prefer BFGS or L-BFGS, even if you have to approximate gradients numerically. These are also the
defaults used when the method parameter is omitted, depending on whether the problem has constraints or
bounds.
On well-conditioned problems, Powell and Nelder-Mead, both gradient-free methods,
work well in high dimension, but they collapse for ill-conditioned problems.
With knowledge of the gradient
BFGS or L-BFGS.
The computational overhead of BFGS is larger than that of L-BFGS, itself larger than that of conjugate gradient.
On the other hand, BFGS usually needs fewer function evaluations than CG. Thus the conjugate gradient
method is better than BFGS at optimizing computationally cheap functions.
With the Hessian
If you can compute the Hessian, prefer the Newton method (Newton-CG or TCG).
If you have noisy measurements
Use Nelder-Mead or Powell.
Choose the right method (see above), and compute the gradient and Hessian analytically, if you can.
Use preconditioning when possible.
Choose your initialization points wisely. For instance, if you are running many similar optimizations,
warm-restart one with the results of another.
Relax the tolerance if you don't need precision, using the parameter tol.
Computing gradients, and even more so Hessians, is very tedious but worth the effort. Symbolic computation
with Sympy may come in handy.
Warning: A very common source of optimization not converging well is human error in the computation
of the gradient. You can use scipy.optimize.check_grad() to check that your gradient is correct. It
returns the norm of the difference between the gradient given and a gradient computed numerically:
>>> optimize.check_grad(f, jacobian, [2, -1])
2.384185791015625e-07
def f(x):
return np.sum((np.dot(K, x - 1))**2) + np.sum(x**2)**2
Time your approach. Find the fastest approach. Why is BFGS not working well?
Consider the function exp(-1/(.1*x**2 + y**2)). This function admits a minimum at (0, 0). Starting from an
initialization at (1, 1), try to get within 1e-8 of this minimum point.
Least squares problems, minimizing the norm of a vector function, have a specific structure that can be used in
the Levenberg-Marquardt algorithm implemented in scipy.optimize.leastsq().
Let's try to minimize the norm of the following vector function:
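For instance, a function whose residuals vanish at np.linspace(0, 1, 10) (a definition assumed here, consistent with the solution below):
>>> def f(x):
...     return np.arctan(x) - np.arctan(np.linspace(0, 1, len(x)))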
>>> x0 = np.zeros(10)
>>> optimize.leastsq(f, x0)
(array([ 0. , 0.11111111, 0.22222222, 0.33333333, 0.44444444,
0.55555556, 0.66666667, 0.77777778, 0.88888889, 1. ]), 2)
This took 67 function evaluations (check it with full_output=1). What if we compute the norm ourselves and
use a good generic optimizer (BFGS):
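A hedged sketch of that approach (the helper g is a name introduced here for illustration):
>>> def g(x):
...     return np.sum(f(x)**2)
>>> result = optimize.minimize(g, x0, method="BFGS")
>>> result.x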
BFGS needs more function calls, and gives a less precise result.
Note: leastsq is interesting compared to BFGS only if the dimensionality of the output vector is large, and
larger than the number of parameters to optimize.
Warning: If the function is linear, this is a linear-algebra problem, and should be solved with scipy.
linalg.lstsq().
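As a concrete setting for curve fitting, assume a cosine model and noisy observations (values chosen for illustration, consistent with the fitted parameters shown below):
>>> def f(t, omega, phi):
...     return np.cos(omega * t + phi)
>>> x = np.linspace(0, 3, 50)
>>> y = f(x, 1.5, 1) + .1*np.random.normal(size=50)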
>>> optimize.curve_fit(f, x, y)
(array([ 1.5185..., 0.92665...]), array([[ 0.00037..., -0.00056...],
[-0.0005..., 0.00123...]]))
Exercise
Box bounds correspond to limiting each of the individual parameters of the optimization. Note that some
problems that are not originally written as box bounds can be rewritten as such via change of variables. Both
scipy.optimize.minimize_scalar() and scipy.optimize.minimize() support bound constraints with
the parameter bounds:
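A minimal sketch (function, starting point and bounds chosen for illustration; when bounds are given, minimize defaults to L-BFGS-B):
>>> def f(x):
...     return np.sqrt((x[0] - 3)**2 + (x[1] - 2)**2)
>>> optimize.minimize(f, np.array([0, 0]), bounds=((-1.5, 1.5), (-1.5, 1.5)))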
Equality and inequality constraints specified as functions: f(x) = 0 and g(x) < 0.
scipy.optimize.fmin_slsqp() Sequential least squares programming: equality and inequality constraints:
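The same constrained minimization can be sketched through the generic scipy.optimize.minimize() interface (objective, constraint and starting point assumed, mirroring the example above):
>>> def f(x):
...     return np.sqrt((x[0] - 3)**2 + (x[1] - 2)**2)
>>> def constraint(x):
...     return np.atleast_1d(1.5 - np.sum(np.abs(x)))
>>> optimize.minimize(f, np.array([0, 0]), method="SLSQP",
...                   constraints={"fun": constraint, "type": "ineq"})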
Warning: The above problem is known as the Lasso problem in statistics, and there exist very efficient
solvers for it (for instance in scikit-learn). In general do not use generic solvers when specific ones exist.
Lagrange multipliers
If you are ready to do a bit of math, many constrained optimization problems can be converted to non-
constrained optimization problems using a mathematical trick known as Lagrange multipliers.
CHAPTER 14
Interfacing with C
Chapters contents
Introduction
Python-C-Api
Ctypes
SWIG
Cython
Summary
Further Reading and References
Exercises
14.1 Introduction
Ctypes
SWIG (Simplified Wrapper and Interface Generator)
Cython
These four techniques are perhaps the most well known ones, of which Cython is probably the most advanced
one and the one you should consider using first. The others are also important, if you want to understand the
wrapping problem from different angles. Having said that, there are other alternatives out there, but having
understood the basics of the ones above, you will be in a position to evaluate the technique of your choice to
see if it fits your needs.
The following criteria may be useful when evaluating a technology:
Are additional libraries required?
Is the code autogenerated?
Does it need to be compiled?
Is there good support for interacting with Numpy arrays?
Does it support C++?
Before you set out, you should consider your use case. When interfacing with native code, there are usually
two use-cases that come up:
Existing code in C/C++ that needs to be leveraged, either because it already exists, or because it is faster.
Python code that is too slow: push the inner loops to native code
Each technology is demonstrated by wrapping the cos function from math.h. While this is mostly a trivial
example, it should serve us well to demonstrate the basics of the wrapping solution. Since each technique
also includes some form of Numpy support, this is also demonstrated using an example where the cosine is
computed on some kind of array.
Last but not least, two small warnings:
All of these techniques may crash (segmentation fault) the Python interpreter, which is (usually) due to
bugs in the C code.
All the examples have been done on Linux; they should be possible on other operating systems.
You will need a C compiler for most of the examples.
14.2 Python-C-Api
The Python-C-API is the backbone of the standard Python interpreter (a.k.a CPython). Using this API it is
possible to write Python extension modules in C and C++. Obviously, these extension modules can, by virtue of
language compatibility, call any function written in C or C++.
When using the Python-C-API, one usually writes much boilerplate code, first to parse the arguments that were
given to a function, and later to construct the return type.
Advantages
Requires no additional libraries
Lots of low-level control
Entirely usable from C++
Disadvantages
May require a substantial amount of effort
Much overhead in the code
Must be compiled
Note: The Python-C-Api example here serves mainly for didactic reasons. Many of the other techniques
actually depend on this, so it is good to have a high-level understanding of how it works. In 99% of the use-
cases you will be better off using an alternative technique.
Note: Since reference counting bugs are easy to create and hard to track down, anyone really needing to use
the Python C-API should read the section about objects, types and reference counts from the official python
documentation. Additionally, there is a tool by the name of cpychecker which can help discover common
errors with reference counting.
14.2.1 Example
The following C-extension module makes the cos function from the standard math library available to Python:
# include <Python.h>
# include <math.h>
# if PY_MAJOR_VERSION >= 3
/* module initialization */
/* Python version 3*/
static struct PyModuleDef cModPyDem =
{
PyModuleDef_HEAD_INIT,
"cos_module", "Some documentation",
-1,
CosMethods
};
PyMODINIT_FUNC
PyInit_cos_module(void)
{
return PyModule_Create(&cModPyDem);
}
# else
/* module initialization */
/* Python version 2 */
PyMODINIT_FUNC
initcos_module(void)
{
(void) Py_InitModule("cos_module", CosMethods);
}
# endif
As you can see, there is much boilerplate, both to massage the arguments and return types into place and for
the module initialisation. Although some of this is amortised as the extension grows, the boilerplate required
for each function remains.
The standard python build system distutils supports compiling C-extensions from a setup.py, which is
rather convenient:
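A minimal setup.py along these lines (a sketch; the setup.py shipped with the example may differ slightly):
from distutils.core import setup, Extension

# compile cos_module.c into a Python extension named cos_module
setup(ext_modules=[Extension('cos_module', sources=['cos_module.c'])])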
$ cd advanced/interfacing_with_c/python_c_api
$ ls
cos_module.c setup.py
$ python setup.py build_ext --inplace
$ ls
build/ cos_module.c cos_module.so setup.py
Note: In Python 3, the filename for compiled modules includes metadata on the Python interpreter (see PEP
3149) and is thus longer. The import statement is not affected by this.
In [2]: cos_module?
Type: module
String Form:<module 'cos_module' from 'cos_module.so'>
File: /home/esc/git-working/scipy-lecture-notes/advanced/interfacing_with_c/python_c_api/
,cos_module.so
Docstring: <no docstring>
In [3]: dir(cos_module)
Out[3]: ['__doc__', '__file__', '__name__', '__package__', 'cos_func']
In [4]: cos_module.cos_func(1.0)
Out[4]: 0.5403023058681398
In [5]: cos_module.cos_func(0.0)
Out[5]: 1.0
In [6]: cos_module.cos_func(3.14159265359)
Out[7]: -1.0
In [10]: cos_module.cos_func('foo')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-10-11bee483665d> in <module>()
----> 1 cos_module.cos_func('foo')
Analogous to the Python-C-API, Numpy, which is itself implemented as a C-extension, comes with the Numpy-C-
API. This API can be used to create and manipulate Numpy arrays from C, when writing a custom C-extension.
See also: Advanced NumPy.
Note: If you do ever need to use the Numpy C-API refer to the documentation about Arrays and Iterators.
The following example shows how to pass Numpy arrays as arguments to functions and how to iterate over
Numpy arrays using the (old) Numpy-C-API. It simply takes an array as argument, applies the cosine function
from math.h to each element, and returns a new resulting array.
/* Example of wrapping the cos function from math.h using the Numpy-C-API. */
# include <Python.h>
# include <numpy/arrayobject.h>
# include <math.h>
PyArrayObject *in_array;
PyObject *out_array;
NpyIter *in_iter;
NpyIter *out_iter;
NpyIter_IterNextFunc *in_iternext;
NpyIter_IterNextFunc *out_iternext;
/* module initialization */
PyMODINIT_FUNC
initcos_module_np(void)
{
(void) Py_InitModule("cos_module_np", CosMethods);
/* IMPORTANT: this must be called */
import_array();
}
To compile this we can use distutils again. However we need to be sure to include the Numpy headers by using
numpy.get_include().
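A sketch of the corresponding setup.py (module and file names assumed; the shipped file may differ):
from distutils.core import setup, Extension
import numpy

# the Numpy headers are needed to compile against the Numpy-C-API
setup(ext_modules=[Extension('cos_module_np',
                             sources=['cos_module_np.c'],
                             include_dirs=[numpy.get_include()])])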
To convince ourselves that this actually works, we run the following test script:
import cos_module_np
import numpy as np
import pylab
14.3 Ctypes
Ctypes is a foreign function library for Python. It provides C compatible data types, and allows calling functions
in DLLs or shared libraries. It can be used to wrap these libraries in pure Python.
Advantages
Part of the Python standard library
Does not need to be compiled
Wrapping code entirely in Python
Disadvantages
Requires code to be wrapped to be available as a shared library (roughly speaking, *.dll on Windows,
*.so on Linux and *.dylib on Mac OS X)
No good support for C++
14.3.1 Example
""" Example of wrapping cos function from math.h using ctypes. """
import ctypes
# OSX or linux
from ctypes.util import find_library
libm = ctypes.cdll.LoadLibrary(find_library('m'))
# Windows
# from ctypes import windll
# libm = cdll.msvcrt
def cos_func(arg):
''' Wrapper for cos from math.h '''
return libm.cos(arg)
Finding and loading the library may vary depending on your operating system, check the documentation
for details
This may be somewhat deceptive, since the math library exists in compiled form on the system already.
If you were to wrap an in-house library, you would have to compile it first, which may or may not require
some additional effort.
We may now use this, as before:
In [2]: cos_module?
Type: module
String Form:<module 'cos_module' from 'cos_module.py'>
File: /home/esc/git-working/scipy-lecture-notes/advanced/interfacing_with_c/ctypes/cos_
,module.py
Docstring: <no docstring>
In [3]: dir(cos_module)
Out[3]:
['__builtins__',
'__doc__',
'__file__',
'__name__',
'__package__',
'cos_func',
'ctypes',
'find_library',
'libm']
In [4]: cos_module.cos_func(1.0)
Out[4]: 0.5403023058681398
In [5]: cos_module.cos_func(0.0)
Out[5]: 1.0
In [6]: cos_module.cos_func(3.14159265359)
Out[6]: -1.0
As with the previous example, this code is somewhat robust, although the error message is not quite as helpful,
since it does not tell us what the type should be.
In [7]: cos_module.cos_func('foo')
---------------------------------------------------------------------------
ArgumentError Traceback (most recent call last)
<ipython-input-7-11bee483665d> in <module>()
----> 1 cos_module.cos_func('foo')
/home/esc/git-working/scipy-lecture-notes/advanced/interfacing_with_c/ctypes/cos_module.py in
,cos_func(arg)
12 def cos_func(arg):
13 ''' Wrapper for cos from math.h '''
---> 14 return libm.cos(arg)
Numpy contains some support for interfacing with ctypes. In particular there is support for exporting certain
attributes of a Numpy array as ctypes data-types and there are functions to convert from C arrays to Numpy
arrays and back.
For more information, consult the corresponding section in the Numpy Cookbook and the API documentation
for numpy.ndarray.ctypes and numpy.ctypeslib.
For the following example, let's consider a C function in a library that takes an input and an output array,
computes the cosine of the input array and stores the result in the output array.
The library consists of the following header file (although this is not strictly needed for this example, we list it
for completeness):
# include <math.h>
And since the library is pure C, we can't use distutils to compile it, but must use a combination of make and
gcc:
.PHONY : clean
libcos_doubles.so : cos_doubles.o
gcc -shared -Wl,-soname,libcos_doubles.so -o libcos_doubles.so cos_doubles.o
cos_doubles.o : cos_doubles.c
gcc -c -fPIC cos_doubles.c -o cos_doubles.o
clean :
-rm -vf libcos_doubles.so cos_doubles.o cos_doubles.pyc
We can then compile this (on Linux) into the shared library libcos_doubles.so:
$ ls
cos_doubles.c cos_doubles.h cos_doubles.py makefile test_cos_doubles.py
$ make
gcc -c -fPIC cos_doubles.c -o cos_doubles.o
gcc -shared -Wl,-soname,libcos_doubles.so -o libcos_doubles.so cos_doubles.o
$ ls
cos_doubles.c cos_doubles.o libcos_doubles.so* test_cos_doubles.py
cos_doubles.h cos_doubles.py makefile
Now we can proceed to wrap this library via ctypes with direct support for (certain kinds of) Numpy arrays:
import numpy as np
import numpy.ctypeslib as npct
from ctypes import c_int
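The remainder of the wrapper might look roughly as follows (a sketch with assumed names, consistent with the cos_doubles_func usage shown below):
# 1d double array, contiguous in memory: the only kind of buffer the C function accepts
array_1d_double = npct.ndpointer(dtype=np.double, ndim=1, flags='CONTIGUOUS')

# load the shared library built above
libcd = npct.load_library("libcos_doubles", ".")

# declare the argument and return types of the C cos_doubles function
libcd.cos_doubles.restype = None
libcd.cos_doubles.argtypes = [array_1d_double, array_1d_double, c_int]

def cos_doubles_func(in_array, out_array):
    return libcd.cos_doubles(in_array, out_array, len(in_array))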
Note the inherent limitation to contiguous single-dimensional Numpy arrays, since the C function requires
this kind of buffer.
Also note that the output array must be preallocated, for example with numpy.zeros(), and the function
will write into its buffer.
Although the original signature of the cos_doubles function is ARRAY, ARRAY, int the final
cos_doubles_func takes only two Numpy arrays as arguments.
And, as before, we convince ourselves that it worked:
import numpy as np
import pylab
import cos_doubles
cos_doubles.cos_doubles_func(x, y)
pylab.plot(x, y)
pylab.show()
14.4 SWIG
SWIG, the Simplified Wrapper and Interface Generator, is a software development tool that connects programs
written in C and C++ with a variety of high-level programming languages, including Python. The important
thing with SWIG is that it can autogenerate the wrapper code for you. While this is an advantage in terms of
development time, it can also be a burden. The generated files tend to be quite large and may not be very
human readable, and the multiple levels of indirection that result from the wrapping process may be a bit
tricky to understand.
Advantages
Can automatically wrap entire libraries given the headers
Works nicely with C++
Disadvantages
Autogenerates enormous files
Hard to debug if something goes wrong
Steep learning curve
14.4.1 Example
Let's imagine that our cos function lives in a cos_module which has been written in C and consists of the
source file cos_module.c:
# include <math.h>
And our goal is to expose the cos_func to Python. To achieve this with SWIG, we must write an interface file
which contains the instructions for SWIG.
%module cos_module
%{
/* the resulting C file should be built as a python extension */
# define SWIG_FILE_WITH_INIT
/* Includes the header in the wrapper code */
# include "cos_module.h"
%}
/* Parse the header file to generate wrappers */
%include "cos_module.h"
As you can see, not too much code is needed here. For this simple example it is enough to simply include
the header file in the interface file, to expose the function to Python. However, SWIG does allow for more fine
grained inclusion/exclusion of functions found in header files, check the documentation for details.
Generating the compiled wrappers is a two stage process:
1. Run the swig executable on the interface file to generate the files cos_module_wrap.c, which is the
source file for the autogenerated Python C-extension and cos_module.py, which is the autogenerated
pure python module.
2. Compile the cos_module_wrap.c into the _cos_module.so. Luckily, distutils knows how to handle
SWIG interface files, so that our setup.py is simply:
setup(ext_modules=[Extension("_cos_module",
sources=["cos_module.c", "cos_module.i"])])
$ cd advanced/interfacing_with_c/swig
$ ls
cos_module.c cos_module.h cos_module.i setup.py
$ python setup.py build_ext --inplace
$ ls
build/ cos_module.c cos_module.h cos_module.i cos_module.py _cos_module.so* cos_module_
,wrap.c setup.py
We can now load and execute the cos_module as we have done in the previous examples:
In [2]: cos_module?
Type: module
String Form:<module 'cos_module' from 'cos_module.py'>
File: /home/esc/git-working/scipy-lecture-notes/advanced/interfacing_with_c/swig/cos_
,module.py
Docstring: <no docstring>
In [3]: dir(cos_module)
Out[3]:
['__builtins__',
'__doc__',
'__file__',
'__name__',
'__package__',
'_cos_module',
'_newclass',
'_object',
'_swig_getattr',
'_swig_property',
'_swig_repr',
'_swig_setattr',
'_swig_setattr_nondynamic',
'cos_func']
In [4]: cos_module.cos_func(1.0)
Out[4]: 0.5403023058681398
In [5]: cos_module.cos_func(0.0)
Out[5]: 1.0
In [6]: cos_module.cos_func(3.14159265359)
Out[6]: -1.0
Again we test for robustness, and we see that we get a better error message (although, strictly speaking in
Python there is no double type):
In [7]: cos_module.cos_func('foo')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-11bee483665d> in <module>()
----> 1 cos_module.cos_func('foo')
Numpy provides support for SWIG with the numpy.i file. This interface file defines various so-called typemaps
which support conversion between Numpy arrays and C-Arrays. In the following example we will take a quick
look at how such typemaps work in practice.
We have the same cos_doubles function as in the ctypes example:
# include <math.h>
for(i=0;i<size;i++){
out_array[i] = cos(in_array[i]);
}
}
%module cos_doubles
%{
/* the resulting C file should be built as a python extension */
# define SWIG_FILE_WITH_INIT
/* Includes the header in the wrapper code */
# include "cos_doubles.h"
%}
/* typemaps for the two arrays, the second will be modified in-place */
%apply (double* IN_ARRAY1, int DIM1) {(double * in_array, int size_in)}
%apply (double* INPLACE_ARRAY1, int DIM1) {(double * out_array, int size_out)}
setup(ext_modules=[Extension("_cos_doubles",
sources=["cos_doubles.c", "cos_doubles.i"],
include_dirs=[numpy.get_include()])])
$ ls
cos_doubles.c cos_doubles.h cos_doubles.i numpy.i setup.py test_cos_doubles.py
$ python setup.py build_ext -i
running build_ext
building '_cos_doubles' extension
import numpy as np
import pylab
import cos_doubles
cos_doubles.cos_doubles_func(x, y)
pylab.plot(x, y)
pylab.show()
14.5 Cython
Cython is both a Python-like language for writing C-extensions and an advanced compiler for this language.
The Cython language is a superset of Python, which comes with additional constructs that allow you to call C
functions and annotate variables and class attributes with C types. In this sense one could also call it Python
with types.
In addition to the basic use case of wrapping native code, Cython supports an additional use-case, namely
interactive optimization. Basically, one starts out with a pure-Python script and incrementally adds Cython
types to the bottleneck code to optimize only those code paths that really matter.
In this sense it is quite similar to SWIG, since the code can be autogenerated, but in a sense it is also quite
similar to ctypes, since the wrapping code can (almost) be written in Python.
While other solutions that autogenerate code can be quite difficult to debug (for example SWIG), Cython
comes with an extension to the GNU debugger that helps debug Python, Cython and C code.
Advantages
Python like language for writing C-extensions
Autogenerated code
Supports incremental optimization
Includes a GNU debugger extension
Support for C++ (Since version 0.13)
Disadvantages
Must be compiled
Requires an additional library (but only at build time; this problem can be overcome by shipping the
generated C files)
14.5.1 Example
The main Cython code for our cos_module is contained in the file cos_module.pyx:
""" Example of wrapping cos function from math.h using Cython. """
def cos_func(arg):
return cos(arg)
Note the additional keywords such as cdef and extern. Also the cos_func is then pure Python.
Again we can use the standard distutils module, but this time we need some additional pieces from the
Cython.Distutils:
from distutils.core import setup, Extension
from Cython.Distutils import build_ext

setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[Extension("cos_module", ["cos_module.pyx"])]
)
Compiling this:
$ cd advanced/interfacing_with_c/cython
$ ls
cos_module.pyx setup.py
$ python setup.py build_ext --inplace
running build_ext
cythoning cos_module.pyx to cos_module.c
building 'cos_module' extension
creating build
creating build/temp.linux-x86_64-2.7
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -
,fPIC -I/home/esc/anaconda/include/python2.7 -c cos_module.c -o build/temp.linux-x86_64-2.7/
,cos_module.o
gcc -pthread -shared build/temp.linux-x86_64-2.7/cos_module.o -L/home/esc/anaconda/lib -
,lpython2.7 -o /home/esc/git-working/scipy-lecture-notes/advanced/interfacing_with_c/cython/
,cos_module.so
$ ls
build/ cos_module.c cos_module.pyx cos_module.so* setup.py
In [2]: cos_module?
Type: module
String Form:<module 'cos_module' from 'cos_module.so'>
File: /home/esc/git-working/scipy-lecture-notes/advanced/interfacing_with_c/cython/cos_
,module.so
Docstring: <no docstring>
In [3]: dir(cos_module)
Out[3]:
['__builtins__',
'__doc__',
'__file__',
'__name__',
'__package__',
'__test__',
'cos_func']
In [4]: cos_module.cos_func(1.0)
Out[4]: 0.5403023058681398
In [5]: cos_module.cos_func(0.0)
Out[5]: 1.0
In [6]: cos_module.cos_func(3.14159265359)
Out[6]: -1.0
And, testing a little for robustness, we can see that we get good error messages:
In [7]: cos_module.cos_func('foo')
---------------------------------------------------------------------------
TypeError Traceback (most recent call last)
<ipython-input-7-11bee483665d> in <module>()
----> 1 cos_module.cos_func('foo')
/home/esc/git-working/scipy-lecture-notes/advanced/interfacing_with_c/cython/cos_module.so in
,cos_module.cos_func (cos_module.c:506)()
Additionally, it is worth noting that Cython ships with complete declarations for the C math library, which simplifies the code above:
""" Simpler example of wrapping cos function from math.h using Cython. """
def cos_func(arg):
return cos(arg)
In this case the cimport statement is used to import the cos function.
Cython has support for Numpy via the numpy.pyx file, which allows you to add the Numpy array type to your
Cython code: just as you can specify that variable i is of type int, you can specify that variable a is of type
numpy.ndarray with a given dtype. Also, certain optimizations such as bounds checking are supported. Look at
the corresponding section in the Cython documentation. In case you want to pass Numpy arrays as C arrays to
your Cython wrapped C functions, there is a section about this in the Cython wiki.
In the following example, we will show how to wrap the familiar cos_doubles function using Cython.
# include <math.h>
""" Example of wrapping a C function that takes C double arrays as input using
the Numpy declarations from Cython """
from distutils.core import setup, Extension
from Cython.Distutils import build_ext
import numpy

setup(
    cmdclass={'build_ext': build_ext},
    ext_modules=[Extension("cos_doubles",
                           sources=["_cos_doubles.pyx", "cos_doubles.c"],
                           include_dirs=[numpy.get_include()])],
)
As with the previous compiled Numpy examples, we need the include_dirs option.
$ ls
cos_doubles.c cos_doubles.h _cos_doubles.pyx setup.py test_cos_doubles.py
$ python setup.py build_ext -i
running build_ext
cythoning _cos_doubles.pyx to _cos_doubles.c
building 'cos_doubles' extension
creating build
creating build/temp.linux-x86_64-2.7
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -
,fPIC -I/home/esc/anaconda/lib/python2.7/site-packages/numpy/core/include -I/home/esc/
,anaconda/include/python2.7 -c _cos_doubles.c -o build/temp.linux-x86_64-2.7/_cos_doubles.o
In file included from /home/esc/anaconda/lib/python2.7/site-packages/numpy/core/include/numpy/
,ndarraytypes.h:1722,
from /home/esc/anaconda/lib/python2.7/site-packages/numpy/core/include/numpy/
,ndarrayobject.h:17,
from /home/esc/anaconda/lib/python2.7/site-packages/numpy/core/include/numpy/
,arrayobject.h:15,
from _cos_doubles.c:253:
/home/esc/anaconda/lib/python2.7/site-packages/numpy/core/include/numpy/npy_deprecated_api.
,h:11:2: warning: #warning "Using deprecated NumPy API, disable it by #defining NPY_NO_
,DEPRECATED_API NPY_1_7_API_VERSION"
/home/esc/anaconda/lib/python2.7/site-packages/numpy/core/include/numpy/__ufunc_api.h:236:
,warning: _import_umath defined but not used
gcc -pthread -fno-strict-aliasing -g -O2 -DNDEBUG -g -fwrapv -O3 -Wall -Wstrict-prototypes -
,fPIC -I/home/esc/anaconda/lib/python2.7/site-packages/numpy/core/include -I/home/esc/
,anaconda/include/python2.7 -c cos_doubles.c -o build/temp.linux-x86_64-2.7/cos_doubles.o
gcc -pthread -shared build/temp.linux-x86_64-2.7/_cos_doubles.o build/temp.linux-x86_64-2.7/
,cos_doubles.o -L/home/esc/anaconda/lib -lpython2.7 -o /home/esc/git-working/scipy-lecture-
,notes/advanced/interfacing_with_c/cython_numpy/cos_doubles.so
$ ls
build/ _cos_doubles.c cos_doubles.c cos_doubles.h _cos_doubles.pyx cos_doubles.so* setup.
,py test_cos_doubles.py
import numpy as np
import pylab
import cos_doubles
cos_doubles.cos_doubles_func(x, y)
pylab.plot(x, y)
pylab.show()
14.6 Summary
In this section four different techniques for interfacing with native code have been presented. The table below
roughly summarizes some of the aspects of the techniques.
Of the four presented techniques, Cython is the most modern and advanced. In particular, the ability to opti-
mize code incrementally by adding types to your Python code is unique.
Gaël Varoquaux's blog post about avoiding data copies provides some insight on how to handle memory
management cleverly. If you ever run into issues with large datasets, this is a reference to come back to
for some inspiration.
14.8 Exercises
Since this is a brand new section, the exercises are considered more as pointers as to what to look at next, so
pick the ones that you find more interesting. If you have good ideas for exercises, please let us know!
1. Download the source code for each example and compile and run them on your machine.
2. Make trivial changes to each example and convince yourself that this works. (e.g. change cos for sin.)
3. Most of the examples, especially the ones involving Numpy, may still be fragile and respond badly to
input errors. Look for ways to crash the examples, figure out what the problem is and devise a potential
solution. Here are some ideas:
(a) Numerical overflow.
(b) Input and output arrays that have different lengths.
(c) Multidimensional array.
(d) Empty array
(e) Arrays with non-double types
4. Use the %timeit IPython magic to measure the execution time of the various solutions
14.8.1 Python-C-API
1. Modify the Numpy example such that the function takes two input arguments, where the second is the
preallocated output array, making it similar to the other Numpy examples.
2. Modify the example such that the function only takes a single input array and modifies this in place.
3. Try to fix the example to use the new Numpy iterator protocol. If you manage to obtain a working solu-
tion, please submit a pull-request on github.
4. You may have noticed that the Numpy-C-API example is the only Numpy example that does not wrap
cos_doubles but instead applies the cos function directly to the elements of the Numpy array. Does
this have any advantages over the other techniques?
5. Can you wrap cos_doubles using only the Numpy-C-API? You may need to ensure that the arrays have
the correct type, are one-dimensional and contiguous in memory.
14.8.2 Ctypes
1. Modify the Numpy example such that cos_doubles_func handles the preallocation for you, thus mak-
ing it more like the Numpy-C-API example.
14.8.3 SWIG
1. Look at the code that SWIG autogenerates, how much of it do you understand?
2. Modify the Numpy example such that cos_doubles_func handles the preallocation for you, thus mak-
ing it more like the Numpy-C-API example.
3. Modify the cos_doubles C function so that it returns an allocated array. Can you wrap this using SWIG
typemaps? If not, why not? Is there a workaround for this specific situation? (Hint: you know the size of
the output array, so it may be possible to construct a Numpy array from the returned double *.)
14.8.4 Cython
1. Look at the code that Cython autogenerates. Take a closer look at some of the comments that Cython
inserts. What do you see?
2. Look at the section Working with Numpy from the Cython documentation to learn how to incrementally
optimize a pure python script that uses Numpy.
3. Modify the Numpy example such that cos_doubles_func handles the preallocation for you, thus mak-
ing it more like the Numpy-C-API example.
This part of the Scipy lecture notes is dedicated to various scientific packages useful for extended needs.
CHAPTER 15
Statistics in Python
Requirements
See also:
Bayesian statistics in Python: This chapter does not cover tools for Bayesian statistics. Of particular
interest for Bayesian modelling is PyMC, which implements a probabilistic programming language in
Python.
Read a statistics book: The Think stats book is available as free PDF or in print and is a great introduction
to statistics.
R is a language dedicated to statistics. Python is a general-purpose language with statistics modules. R has
more statistical analysis features than Python, and specialized syntaxes. However, when it comes to building
complex analysis pipelines that mix statistics with e.g. image analysis, text mining, or control of a physical
experiment, the richness of Python is an invaluable asset.
Contents
Tip: In this document, the Python inputs are represented with the sign >>>.
The setting that we consider for statistical analysis is that of multiple observations or samples described by a
set of different attributes or features. The data can then be seen as a 2D table, or matrix, with columns giving
the different attributes of the data, and rows the observations. For instance, the data contained in examples/
brain_size.csv:
"";"Gender";"FSIQ";"VIQ";"PIQ";"Weight";"Height";"MRI_Count"
"1";"Female";133;132;124;"118";"64.5";816932
"2";"Male";140;150;124;".";"72.5";1001121
"3";"Male";139;123;150;"143";"73.3";1038437
"4";"Male";133;129;128;"172";"68.8";965353
"5";"Female";137;132;134;"147";"65.0";951545
Tip: We will store and manipulate this data in a pandas.DataFrame, from the pandas module. It is the Python
equivalent of the spreadsheet table. It is different from a 2D numpy array as it has named columns, can contain
a mixture of different data types by column, and has elaborate selection and pivotal mechanisms.
Separator
Reading from a CSV file: Using the above CSV file that gives observations of brain size and weight and IQ
(Willerman et al. 1991), the data are a mixture of numerical and categorical values:
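For instance (a sketch; the separator and missing-value marker follow the file shown above):
>>> data = pandas.read_csv('examples/brain_size.csv', sep=';', na_values=".")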
Creating from arrays: A pandas.DataFrame can also be seen as a dictionary of 1D series, e.g. arrays or lists. If
we have 3 numpy arrays:
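For instance (array values chosen only for illustration):
>>> import numpy as np
>>> t = np.linspace(-6, 6, 20)
>>> sin_t = np.sin(t)
>>> cos_t = np.cos(t)
>>> pandas.DataFrame({'t': t, 'sin': sin_t, 'cos': cos_t})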
Other inputs: pandas can input data from SQL, excel files, or other formats. See the pandas documentation.
Manipulating data
Note: For a quick view on a large dataframe, use its describe method: pandas.DataFrame.describe().
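The per-gender means printed below can be obtained with a groupby (a sketch assuming the brain_size data loaded above):
>>> groupby_gender = data.groupby('Gender')
>>> for gender, value in groupby_gender['VIQ']:
...     print((gender, value.mean()))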
('Female', 109.45)
('Male', 115.25)
groupby_gender is a powerful object that exposes many operations on the resulting group of dataframes:
>>> groupby_gender.mean()
Unnamed: 0 FSIQ VIQ PIQ Weight Height MRI_Count
Gender
Female 19.65 111.9 109.45 110.45 137.200000 65.765000 862654.6
Male 21.35 115.0 115.25 111.60 166.444444 71.431579 954855.4
Tip: Use tab-completion on groupby_gender to find more. Other common grouping functions are median,
count (useful for checking the amount of missing values in different subsets) or sum. Groupby evaluation
is lazy: no work is done until an aggregation function is applied.
Exercise
What is the mean value for VIQ for the full population?
How many males/females were included in this study?
Hint: use tab completion to find out the methods that can be called, instead of mean in the above
example.
What is the average value of MRI counts expressed in log units, for males and females?
Note: groupby_gender.boxplot is used for the plots above (see this example).
Plotting data
Pandas comes with some plotting tools (pandas.tools.plotting, using matplotlib behind the scene) to dis-
play statistics of the data in dataframes:
Scatter matrices:
Two populations
Exercise
Plot the scatter matrix for males only, and for females only. Do you think that the 2 sub-populations corre-
spond to gender?
For simple statistical tests, we will use the scipy.stats sub-module of scipy:
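For instance:
>>> from scipy import stats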
See also:
Scipy is a vast library. For a quick summary to the whole library, see the scipy chapter.
scipy.stats.ttest_1samp() tests if the population mean of data is likely to be equal to a given value (tech-
nically if observations are drawn from a Gaussian distribution of given population mean). It returns the T
statistic, and the p-value (see the function's help):
>>> stats.ttest_1samp(data['VIQ'], 0)
Ttest_1sampResult(statistic=30.088099970..., pvalue=1.32891964...e-28)
Tip: With a p-value of 10^-28 we can claim that the population mean for the IQ (VIQ measure) is not 0.
We have seen above that the mean VIQ in the male and female populations were different. To test if this is
significant, we do a 2-sample t-test with scipy.stats.ttest_ind():
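A sketch of this two-sample test (column selection assumed from the dataframe above):
>>> female_viq = data[data['Gender'] == 'Female']['VIQ']
>>> male_viq = data[data['Gender'] == 'Male']['VIQ']
>>> stats.ttest_ind(female_viq, male_viq)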
PIQ, VIQ, and FSIQ give 3 measures of IQ. Let us test if FSIQ
and PIQ are significantly different. We can use a 2-sample test:
The problem with this approach is that it forgets that there are links between observations: FSIQ and PIQ are
measured on the same individuals. Thus the variance due to inter-subject variability is confounding, and can
be removed, using a paired test, or repeated measures test:
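A paired test can be written, for instance, as:
>>> stats.ttest_rel(data['FSIQ'], data['PIQ'])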
T-tests assume Gaussian errors. We can use a Wilcoxon signed-rank test, that relaxes this assumption:
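For instance:
>>> stats.wilcoxon(data['FSIQ'], data['PIQ'])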
Note: The corresponding test in the non paired case is the MannWhitney U test, scipy.stats.
mannwhitneyu().
Exercise
>>> print(model.summary())
OLS Regression Results
==========================...
Dep. Variable: y R-squared: 0.804
Model: OLS Adj. R-squared: 0.794
Method: Least Squares F-statistic: 74.03
Date: ... Prob (F-statistic): 8.56e-08
Time: ... Log-Likelihood: -57.988
No. Observations: 20 AIC: 120.0
Df Residuals: 18 BIC: 122.0
Df Model: 1
Covariance Type: nonrobust
==========================...
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------...
Intercept -5.5335 1.036 -5.342 0.000 -7.710 -3.357
x 2.9369 0.341 8.604 0.000 2.220 3.654
==========================...
Omnibus: 0.100 Durbin-Watson: 2.956
Prob(Omnibus): 0.951 Jarque-Bera (JB): 0.322
Skew: -0.058 Prob(JB): 0.851
Kurtosis: 2.390 Cond. No. 3.03
==========================...
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Terminology:
Statsmodels uses a statistical terminology: the y variable in statsmodels is called endogenous while the x
variable is called exogenous. This is discussed in more detail here.
To simplify, y (endogenous) is the value you are trying to predict, while x (exogenous) represents the features
you are using to make the prediction.
Exercise
Retrieve the estimated parameters from the model above. Hint: use tab-completion to find the relevant
attribute.
We can write a comparison between IQ of male and female using a linear model:
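A sketch of such a model (the formula is an assumption; data refers to the brain_size dataframe loaded earlier):
>>> from statsmodels.formula.api import ols
>>> model = ols("VIQ ~ Gender + 1", data).fit()
>>> print(model.summary())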
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Forcing categorical: the Gender is automatically detected as a categorical variable, and thus each of its
different values is treated as a different entity.
An integer column can be forced to be treated as categorical using:
>>> model = ols('VIQ ~ C(Gender)', data).fit()
Intercept: We can remove the intercept using - 1 in the formula, or force the use of an intercept using + 1.
Tip: By default, statsmodels treats a categorical variable with K possible values as K-1 dummy boolean
variables (the last level being absorbed into the intercept term). This is almost always a good default
choice - however, it is possible to specify different encodings for categorical variables (http://statsmodels.
sourceforge.net/devel/contrasts.html).
To compare different types of IQ, we need to create a long-form table, listing IQs, where the type of IQ is
indicated by a categorical variable:
>>> data_fisq = pandas.DataFrame({'iq': data['FSIQ'], 'type': 'fsiq'})
>>> data_piq = pandas.DataFrame({'iq': data['PIQ'], 'type': 'piq'})
>>> data_long = pandas.concat((data_fisq, data_piq))
>>> print(data_long)
iq type
0 133 fsiq
1 140 fsiq
2 139 fsiq
...
31 137 piq
32 110 piq
33 86 piq
...
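Fitting an OLS model on this long-form table might look as follows (a sketch; the formula is assumed):
>>> model = ols("iq ~ type", data_long).fit()
>>> print(model.summary())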
We can see that we retrieve the same t-test values and corresponding p-values for the effect of the type
of iq as with the previous t-test:
>>> stats.ttest_ind(data['FSIQ'], data['PIQ'])
Ttest_indResult(statistic=0.46563759638..., pvalue=0.64277250...)
Consider a linear model explaining a variable z (the dependent variable) with 2 variables x and y:
z = x c1 + y c2 + i + e (with i the intercept and e the error term)
Tip: Sepal and petal size tend to be related: bigger flowers are bigger! But is there in addition a systematic
effect of species?
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
In the above iris example, we wish to test if the petal length is different between versicolor and virginica, after
removing the effect of sepal width. This can be formulated as testing the difference between the coefficients
associated with versicolor and virginica in the linear model estimated above (it is an Analysis of Variance,
ANOVA). For this, we write a vector of contrast on the parameters estimated: we want to test "name[T.versicolor]
- name[T.virginica]", with an F-test:
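A sketch of this test (the contrast vector assumes the parameter order Intercept, name[T.versicolor], name[T.virginica], petal_length):
>>> print(model.f_test([0, 1, -1, 0]))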
Exercise
Going back to the brain size + IQ data, test if the VIQ of male and female are different after removing the
effect of brain size, height and weight.
Tip: The full code loading and plotting of the wages data is found in corresponding example.
EDUCATION SOUTH SEX EXPERIENCE UNION WAGE AGE RACE
0 8 0 1 21 0 0.707570 35 2
1 9 0 1 42 0 0.694605 57 3
2 12 0 0 1 0 0.824126 19 3
3 12 0 0 4 0 0.602060 22 3
...
We can easily get an intuition of the interactions between continuous variables using seaborn.pairplot()
to display a scatter matrix:
Seaborn changes the defaults of matplotlib figures to achieve a more modern, Excel-like look. It does that
upon import. You can reset the defaults using:
>>> from matplotlib import pyplot as plt
>>> plt.rcdefaults()
Tip: To switch back to seaborn settings, or understand better styling in seaborn, see the relevant section of
the seaborn documentation.
Robust regression
Tip: Given that, in the above plot, there seem to be a couple of data points that are outside of the main
cloud to the right, they might be outliers, not representative of the population, but driving the regression.
To compute a regression that is less sensitive to outliers, one must use a robust model. This is done in seaborn
using robust=True in the plotting functions, or in statsmodels by replacing the use of the OLS by a Robust
Linear Model, statsmodels.formula.api.rlm().
Tip: The plot above is made of two different fits. We need to formulate a single model that tests for a variation
of slope across the two populations. This is done via an interaction.
Hypothesis testing and p-value give you the significance of an effect / difference
Formulas (with categorical variables) enable you to express rich links in your data
Visualizing your data and simple model fits matters!
Conditioning (adding factors that can explain all or part of the variation) is an important modeling
aspect that changes the interpretation.
Plot boxplots for FSIQ, PIQ, and the paired difference between the two: while the spread (error bars) for FSIQ
and PIQ are very large, there is a systematic (common) effect due to the subjects. This effect is cancelled out
in the difference and the spread of the difference (paired by subject) is much smaller than the spread of the
individual measures.
import pandas
plt.show()
This example loads from a CSV file data with mixed numerical and categorical entries, and plots a few quan-
tities, separately for females and males, thanks to the pandas integrated plotting tool (that uses matplotlib
behind the scene).
See http://pandas.pydata.org/pandas-docs/stable/visualization.html
import pandas
import pandas
from pandas.tools import plotting
# The parameter 'c' is passed to plt.scatter and will control the color
plotting.scatter_matrix(data, c=categories.labels, marker='o')
fig = plt.gcf()
fig.suptitle("blue: setosa, green: versicolor, red: virginica", size=13)
Statistical analysis
Out:
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Testing the difference between effect of versicolor and virginica
<F test: F=array([[ 3.24533535]]), p=0.07369058781700738, df_denom=146, df_num=1>
import numpy as np
import matplotlib.pyplot as plt
import pandas
x = np.linspace(-5, 5, 20)
y = -5 + 3*x + 4 * np.random.normal(size=x.shape)
# Convert the data into a Pandas DataFrame to use the formulas framework
# in statsmodels
data = pandas.DataFrame({'x': x, 'y': y})
print('\nANOVA results')
print(anova_results)
Out:
==============================================================================
Omnibus: 0.100 Durbin-Watson: 2.956
Prob(Omnibus): 0.951 Jarque-Bera (JB): 0.322
Skew: -0.058 Prob(JB): 0.851
Kurtosis: 2.390 Cond. No. 3.03
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
ANOVA results
df sum_sq mean_sq F PR(>F)
x 1.0 1588.873443 1588.873443 74.029383 8.560649e-08
Residual 18.0 386.329330 21.462741 NaN NaN
plt.show()
Calculate using statsmodels just the best fit, or all the corresponding statistical parameters.
Also shows how to make 3d plots.
import numpy as np
import matplotlib.pyplot as plt
import pandas
x = np.linspace(-5, 5, 21)
# We generate a 2D grid
X, Y = np.meshgrid(x, x)
# Convert the data into a Pandas DataFrame to use the formulas framework
# in statsmodels
print('\nANOVA results')
print(anova_results)
plt.show()
Out:
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
ANOVA results
df sum_sq mean_sq F PR(>F)
x 1.0 39284.301219 39284.301219 623.962799 2.888238e-86
y 1.0 1055.220089 1055.220089 16.760336 5.050899e-05
Residual 438.0 27576.201607 62.959364 NaN NaN
Wages depend mostly on education. Here we investigate how this dependence is related to gender: not only
does gender create an offset in wages, it also seems that wages increase more with education for males than
females.
Does our data support this last hypothesis? We will test this using statsmodels formulas (http://statsmodels.
sourceforge.net/stable/example_formulas.html).
Load and massage the data
import pandas
import urllib
import os
if not os.path.exists('wages.txt'):
# Download the file if it is not present
urllib.urlretrieve('http://lib.stat.cmu.edu/datasets/CPS_85_Wages',
'wages.txt')
simple plotting
import seaborn
statistical analysis
import statsmodels.formula.api as sm
# Note that this model is not the plot displayed above: it is one
# joined model for male and female, not separate models for male and
# female. The reason is that a single model enables statistical testing
result = sm.ols(formula='wage ~ education + gender', data=data).fit()
print(result.summary())
Out:
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
The plots above highlight that there is not only a different offset in wage but also a different slope
We need to model this using an interaction
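A sketch of such an interaction model (formula assumed, following the statsmodels formula syntax):
result = sm.ols(formula='wage ~ education + gender + education * gender',
                data=data).fit()
print(result.summary())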
Out:
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
Looking at the p-value of the interaction of gender and education, the data does not support the hypothesis
that education benefits males more than females (p-value > 0.05).
This example uses seaborn to quickly plot various factors relating wages, experience and education.
Seaborn (http://stanford.edu/~mwaskom/software/seaborn/) is a library that combines visualization and sta-
tistical fits to show trends in data.
Note that importing seaborn changes the matplotlib style to have an Excel-like feeling. This change affects
other matplotlib figures. To restore defaults once this example is run, we would need to call plt.rcdefaults().
import pandas
if not os.path.exists('wages.txt'):
# Download the file if it is not present
urllib.urlretrieve('http://lib.stat.cmu.edu/datasets/CPS_85_Wages',
'wages.txt')
import seaborn
seaborn.pairplot(data, vars=['WAGE', 'AGE', 'EDUCATION'],
kind='reg')
Plot a simple regression
plt.show()
import pandas
if not os.path.exists('airfares.txt'):
# Download the file if it is not present
urllib.urlretrieve(
'http://www.stat.ufl.edu/~winner/data/airq4.dat',
'airfares.txt')
import seaborn
seaborn.pairplot(data_flat, vars=['fare', 'dist', 'nb_passengers'],
kind='reg', markers='.')
# A second plot, to show the effect of the year (ie the 9/11 effect)
seaborn.pairplot(data_flat, vars=['fare', 'dist', 'nb_passengers'],
kind='reg', hue='year', markers='.')
Plot the difference in fare
plt.figure(figsize=(5, 2))
seaborn.boxplot(data.fare_2001 - data.fare_2000)
plt.title('Fare: 2001 - 2000')
plt.subplots_adjust()
plt.figure(figsize=(5, 2))
seaborn.boxplot(data.nb_passengers_2001 - data.nb_passengers_2000)
plt.title('NB passengers: 2001 - 2000')
plt.subplots_adjust()
Statistical testing: dependence of fare on distance and number of passengers
import statsmodels.formula.api as sm
Out:
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 5.23e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
Robust linear Model Regression Results
==============================================================================
Dep. Variable: fare No. Observations: 8352
Model: RLM Df Residuals: 8349
Method: IRLS Df Model: 2
Norm: HuberT
Scale Est.: mad
Cov Type: H1
Date: Tue, 03 Oct 2017
Time: 16:00:28
No. Iterations: 12
=================================================================================
coef std err z P>|z| [95.0% Conf. Int.]
---------------------------------------------------------------------------------
Intercept 215.0848 2.448 87.856 0.000 210.287 219.883
dist 0.0460 0.001 46.166 0.000 0.044 0.048
nb_passengers -35.2686 1.119 -31.526 0.000 -37.461 -33.076
=================================================================================
If the model instance has been used for another fit with different fit
parameters, then the fit options might not be the correct ones anymore.
plt.show()
Out:
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 2.4e+03. This might indicate that there are
strong multicollinearity or other numerical problems.
Going back to the brain size + IQ data, test if the VIQ of male and female are different after removing the effect
of brain size, height and weight.
Notice that here Gender is a categorical value. As it is a non-float data type, statsmodels is able to automati-
cally infer this.
import pandas
from statsmodels.formula.api import ols
Out:
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
[2] The condition number is large, 2.4e+07. This might indicate that there are
Here we plot a scatter matrix to get intuitions on our results. This goes beyond what was asked in the exercise:
# The parameter 'c' is passed to plt.scatter and will control the color
# The same holds for parameters 'marker', 'alpha' and 'cmap', that
# control respectively the type of marker used, their transparency and
# the colormap
plotting.scatter_matrix(data[['VIQ', 'MRI_Count', 'Height']],
c=(data['Gender'] == 'Female'), marker='o',
alpha=1, cmap='winter')
fig = plt.gcf()
fig.suptitle("blue: male, green: female", size=13)
plt.show()
CHAPTER 16
Sympy : Symbolic Mathematics in Python
Objectives
What is SymPy? SymPy is a Python library for symbolic mathematics. It aims to be an alternative to systems
such as Mathematica or Maple while keeping the code as simple as possible and easily extensible. SymPy is
written entirely in Python and does not require any external libraries.
Sympy documentation and packages for installation can be found on http://www.sympy.org/
Chapters contents
Algebraic manipulations
Expand
Simplify
Calculus
Limits
Differentiation
Series expansion
Integration
Equation solving
Linear Algebra
Matrices
Differential Equations
>>> import sympy as sym
>>> a = sym.Rational(1, 2)
>>> a
1/2
>>> a*2
1
SymPy uses mpmath in the background, which makes it possible to perform computations using arbitrary-
precision arithmetic. That way, some special constants, like e, pi, oo (Infinity), are treated as symbols and can
be evaluated with arbitrary precision:
>>> sym.pi**2
pi**2
>>> sym.pi.evalf()
3.14159265358979
Exercises
1. Calculate √2 with 100 decimals.
2. Calculate 1/2 + 1/3 in rational arithmetic.
16.1.2 Symbols
In contrast to other Computer Algebra Systems, in SymPy you have to declare symbolic variables explicitly:
>>> x = sym.Symbol('x')
>>> y = sym.Symbol('y')
>>> x + y + x - y
2*x
>>> (x + y) ** 2
(x + y)**2
Symbols can now be manipulated using some of the python operators: +, -, *, ** (arithmetic), &, |, ~, >>, << (boolean).
Printing
Sympy allows for control of the display of the output. From here we use the following setting for printing:
>>> sym.init_printing(use_unicode=False, wrap_line=True)
SymPy is capable of performing powerful algebraic manipulations. We'll take a look at some of the most frequently used: expand and simplify.
16.2.1 Expand
Use this to expand an algebraic expression. It will try to denest powers and multiplications:
>>> sym.expand((x + y) ** 3)
3 2 2 3
x + 3*x *y + 3*x*y + y
>>> 3 * x * y ** 2 + 3 * y * x ** 2 + x ** 3 + y ** 3
3 2 2 3
x + 3*x *y + 3*x*y + y
Further options can be given in the form of keywords:
>>> sym.expand(sym.cos(x + y), trig=True)
-sin(x)*sin(y) + cos(x)*cos(y)
>>> sym.cos(x) * sym.cos(y) - sym.sin(x) * sym.sin(y)
-sin(x)*sin(y) + cos(x)*cos(y)
16.2.2 Simplify
Use simplify if you would like to transform an expression into a simpler form:
>>> sym.simplify((x + x * y) / x)
y + 1
Simplification is a somewhat vague term, and more precise alternatives to simplify exist: powsimp (simplification of exponents), trigsimp (for trigonometric expressions), logcombine, radsimp, together.
Exercises
16.3 Calculus
16.3.1 Limits
Limits are easy to use in SymPy: they follow the syntax limit(function, variable, point), so to compute the limit of f(x) as x -> 0, you would issue limit(f, x, 0):
>>> sym.limit(sym.sin(x) / x, x, 0)
1
>>> sym.limit(x ** x, x, 0)
1
16.3.2 Differentiation
You can differentiate any SymPy expression using diff(func, var). Examples:
>>> sym.diff(sym.sin(x), x)
cos(x)
>>> sym.diff(sym.sin(2 * x), x)
2*cos(2*x)
>>> sym.diff(sym.tan(x), x)
2
tan (x) + 1
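Higher derivatives can be computed by passing the order as an extra argument to diff (a short illustrative example, not from the original text):
>>> sym.diff(sym.sin(2 * x), x, 2)
-4*sin(2*x)
>>> sym.diff(sym.sin(2 * x), x, 3)
-8*cos(2*x)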
SymPy also knows how to compute the Taylor series of an expression at a point. Use series(expr, var):
>>> sym.series(sym.cos(x), x)
2 4
x x / 6\
1 - -- + -- + O\x /
2 24
>>> sym.series(1/sym.cos(x), x)
2 4
x 5*x / 6\
1 + -- + ---- + O\x /
2 24
Exercises
16.3.4 Integration
SymPy has support for indefinite and definite integration of transcendental elementary and special functions via the integrate() facility, which uses the powerful extended Risch-Norman algorithm and some heuristics and pattern matching. You can integrate elementary functions:
>>> sym.integrate(6 * x ** 5, x)
6
x
>>> sym.integrate(sym.sin(x), x)
-cos(x)
>>> sym.integrate(sym.log(x), x)
x*log(x) - x
>>> sym.integrate(2 * x + sym.sinh(x), x)
2
x + cosh(x)
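It is also possible to compute definite integrals, including improper ones (a short illustrative example under the same conventions; the ASCII square root below assumes use_unicode=False):
>>> sym.integrate(x ** 3, (x, -1, 1))
0
>>> sym.integrate(sym.exp(-x ** 2), (x, -sym.oo, sym.oo))
  ____
\/ pi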
SymPy is able to solve algebraic equations, in one and several variables using solveset():
>>> sym.solveset(x ** 4 - 1, x)
{-1, 1, -I, I}
As you can see, it takes as first argument an expression that is supposed to be equal to 0. It also has (limited) support for transcendental equations:
>>> sym.solveset(sym.exp(x) + 1, x)
{I*(2*n*pi + pi) | n in Integers()}
SymPy is able to solve a large part of polynomial equations, and is also capable of solving multiple equations with respect to multiple variables, giving a tuple as the second argument. To do this you use the solve() command:
>>> solution = sym.solve((x + 5 * y - 2, -3 * x + 6 * y - 15), (x, y))
>>> solution[x], solution[y]
(-3, 1)
Another alternative in the case of polynomial equations is factor. factor returns the polynomial factorized into
irreducible terms, and is capable of computing the factorization over various domains:
>>> f = x ** 4 - 3 * x ** 2 + 1
>>> sym.factor(f)
/ 2 \ / 2 \
\x - x - 1/*\x + x - 1/
SymPy is also able to solve boolean equations, that is, to decide if a certain boolean expression is satisfiable or
not. For this, we use the function satisfiable:
>>> sym.satisfiable(x & y)
{x: True, y: True}
This tells us that (x & y) is True whenever x and y are both True. If an expression cannot be true, i.e. no values of its arguments can make the expression True, it will return False:
>>> sym.satisfiable(x & ~x)
False
Exercises
16.5.1 Matrices
In SymPy, matrices are created as instances of the Matrix class:
>>> A = sym.Matrix([[1, x], [y, 1]])
>>> A**2
[x*y + 1 2*x ]
[ ]
[ 2*y x*y + 1]
SymPy is capable of solving (some) Ordinary Differential Equations. To solve differential equations, use dsolve. First, create an undefined function by passing cls=Function to the symbols function:
>>> f, g = sym.symbols('f g', cls=sym.Function)
f and g are now undefined functions. We can call f(x), and it will represent an unknown function:
>>> f(x)
f(x)
>>> f(x).diff(x, x) + f(x)
         2
        d
f(x) + ---(f(x))
         2
       dx
Keyword arguments can be given to this function to help it find the best possible resolution system. For example, if you know that it is a separable equation, you can use the keyword hint='separable' to force dsolve to resolve it as a separable equation:
(dsolve returns a pretty-printed list of solutions of the form f(x) = asin(sqrt(C1/(sin(x)**2 - 1) + 1)) and f(x) = -asin(sqrt(C1/(sin(x)**2 - 1) + 1)) + pi, i.e. arcsine expressions in an integration constant C1)
Exercises
1. Solve the Bernoulli differential equation x * f(x).diff(x) + f(x) - f(x)**2 = 0.
2. Solve the same equation using hint='Bernoulli'. What do you observe?
Chapters contents
Mathematical morphology
Image segmentation
Binary segmentation: foreground + background
Marker based methods
Measuring regions properties
Data visualization and interaction
Feature extraction for computer vision
Full code examples
Examples for the scikit-image chapter
Recent versions of scikit-image are packaged in most Scientific Python distributions, such as Anaconda or Enthought Canopy. It is also packaged for Ubuntu/Debian.
Other Python packages are available for image processing and work with NumPy arrays:
scipy.ndimage : for nd-arrays. Basic filtering, mathematical morphology, region properties
Mahotas
Also, powerful image processing libraries have Python bindings:
OpenCV (computer vision)
ITK (3D images and registration)
and many others
(but they are less Pythonic and NumPy friendly, to a variable extent).
Website: http://scikit-image.org/
Gallery of examples: http://scikit-image.org/docs/stable/auto_examples/
Different kinds of functions, from boilerplate utility functions to high-level recent algorithms.
Filters: functions transforming images into other images.
NumPy machinery
Common filtering algorithms
Data reduction functions: computation of image histogram, position of local maxima, of corners, etc.
Other actions: I/O, visualization, etc.
>>> import skimage
>>> from skimage import io
>>> import os
>>> filename = os.path.join(skimage.data_dir, 'camera.png')
>>> camera = io.imread(filename)
Saving to files:
>>> io.imsave('local_camera.png', camera)  # the filename here is only illustrative
Different integer sizes are possible: 8-, 16- or 32-bit, signed or unsigned.
Warning: An important (if questionable) skimage convention: float images are supposed to lie in [-1, 1]
(in order to have comparable contrast for all float images)
Some image processing routines need to work with float arrays, and may hence output an array with a different type and data range than the input array.
Utility functions are provided in skimage to convert both the dtype and the data range, following skimage's conventions: util.img_as_float, util.img_as_ubyte, etc.
See the user guide for more details.
17.2.2 Colorspaces
Color images are of shape (N, M, 3) or (N, M, 4) (when an alpha channel encodes transparency)
Routines converting between different colorspaces (RGB, HSV, LAB etc.) are available in skimage.color :
color.rgb2hsv, color.lab2rgb, etc. Check the docstring for the expected dtype (and data range) of input
images.
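For instance (a small sketch, not from the original text; it uses the astronaut test image shipped with scikit-image):
>>> from skimage import color, data
>>> hsv_image = color.rgb2hsv(data.astronaut())
>>> hsv_image.shape
(512, 512, 3)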
3D images
Most functions of skimage can take 3D images as input arguments. Check the docstring to know if a function can be used on 3D images (for example MRI or CT images).
Exercise
Local filters replace the value of pixels by a function of the values of neighboring pixels. The function can be
linear or non-linear.
For example, the horizontal Sobel filter, a linear local filter, uses the following 3x3 kernel:
 1  2  1
 0  0  0
-1 -2 -1
Non-local filters use a large region of the image (or all the image) to transform the value of one pixel:
Erosion = minimum filter. Replace the value of a pixel by the minimal value covered by the structuring element:
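A minimal sketch (the small rectangle and the diamond-shaped structuring element below are only illustrative):
import numpy as np
from skimage import morphology

a = np.zeros((7, 7), dtype=np.uint8)
a[1:6, 2:5] = 1                 # a small filled rectangle
eroded = morphology.binary_erosion(a, morphology.diamond(1))
print(eroded.astype(np.uint8))  # only pixels where the whole diamond fits remain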
Mathematical morphology operations are also available for (non-binary) grayscale images (int or float type).
Erosion and dilation correspond to minimum (resp. maximum) filters.
Image segmentation is the attribution of different labels to different regions of the image, for example in order
to extract the pixels of an object of interest.
Tip: The Otsu method is a simple heuristic to find a threshold to separate the foreground from the background.
Tip: Once you have separated foreground objects, it is useful to separate them from each other. For this, we can assign a different integer label to each one.
Synthetic data:
>>> import numpy as np
>>> from skimage import filters
>>> n = 20
>>> l = 256
>>> im = np.zeros((l, l))
>>> points = l * np.random.random((2, n ** 2))
>>> im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
>>> im = filters.gaussian(im, sigma=l / (4. * n))
>>> blobs = im > im.mean()
See also:
scipy.ndimage.find_objects() is useful to return slices corresponding to objects in an image.
If you have markers inside a set of regions, you can use these to segment the regions.
Watershed segmentation
skimage provides several utility functions that can be used on label images (i.e. images where different discrete values identify different regions). Function names are often self-explanatory: skimage.segmentation.clear_border(), skimage.segmentation.relabel_from_one(), skimage.morphology.remove_small_objects(), etc.
Exercise
Example: compute the size and perimeter of the two segmented regions:
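A minimal sketch (assuming labels is a label image containing the two regions, e.g. the output of the watershed or random walker segmentation shown further down; regionprops returns one object per label):
from skimage import measure

properties = measure.regionprops(labels)
print([prop.area for prop in properties])       # region sizes, in pixels
print([prop.perimeter for prop in properties])  # region perimeters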
See also:
For some properties, functions are available as well in scipy.ndimage.measurements with a different API (a list is returned).
Exercise (continued)
Use the binary image of the coins and background from the previous exercise.
Compute an image of labels for the different coins.
>>> plt.figure()
<matplotlib.figure.Figure object at 0x...>
>>> plt.imshow(clean_border, cmap='gray')
<matplotlib.image.AxesImage object at 0x...>
Visualize contour
>>> plt.figure()
<matplotlib.figure.Figure object at 0x...>
>>> plt.imshow(coins, cmap='gray')
<matplotlib.image.AxesImage object at 0x...>
>>> plt.contour(clean_border, [0.5])
<matplotlib.contour.QuadContourSet ...>
The (experimental) scikit-image viewer
skimage.viewer = matplotlib-based canvas for displaying images + experimental Qt-based GUI-toolkit
(this example is taken from the plot_corner example in scikit-image)
Points of interest such as corners can then be used to match objects in different images, as described in the
plot_matching example of scikit-image.
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
camera = data.camera()
plt.figure(figsize=(4, 4))
plt.imshow(camera, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.tight_layout()
plt.show()
camera = data.camera()
camera_multiply = 3 * camera
plt.figure(figsize=(8, 4))
plt.subplot(121)
plt.imshow(camera, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.subplot(122)
plt.imshow(camera_multiply, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.tight_layout()
plt.show()
This example illustrates the use of the horizontal Sobel filter, to compute horizontal gradients.
from skimage import data, filters
text = data.text()
hsobel_text = filters.sobel_h(text)
plt.figure(figsize=(12, 3))
plt.subplot(121)
plt.imshow(text, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.subplot(122)
plt.imshow(hsobel_text, cmap='spectral', interpolation='nearest')
plt.axis('off')
plt.tight_layout()
plt.show()
camera = data.camera()
camera_equalized = exposure.equalize_hist(camera)
plt.figure(figsize=(7, 3))
plt.subplot(121)
plt.imshow(camera, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.subplot(122)
plt.imshow(camera_equalized, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.tight_layout()
plt.show()
coins = data.coins()
mask = coins > filters.threshold_otsu(coins)
clean_border = segmentation.clear_border(mask).astype(np.int)
# Reconstructed step (an assumption): outline the cleaned regions on the original image
coins_edges = segmentation.mark_boundaries(coins, clean_border)
plt.figure(figsize=(8, 3.5))
plt.subplot(121)
plt.imshow(clean_border, cmap='gray')
plt.axis('off')
plt.subplot(122)
plt.imshow(coins_edges)
plt.axis('off')
plt.tight_layout()
plt.show()
camera = data.camera()
val = filters.threshold_otsu(camera)
hist, bins_center = exposure.histogram(camera)
plt.figure(figsize=(9, 4))
plt.subplot(131)
plt.imshow(camera, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.subplot(132)
plt.imshow(camera < val, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.subplot(133)
plt.plot(bins_center, hist, lw=2)
plt.axvline(val, color='k', ls='--')
plt.tight_layout()
plt.show()
This example shows how to label connected components of a binary image, using the dedicated skimage.measure.label function.
n = 12
l = 256
np.random.seed(1)
im = np.zeros((l, l))
points = l * np.random.random((2, n ** 2))
im[(points[0]).astype(np.int), (points[1]).astype(np.int)] = 1
im = filters.gaussian(im, sigma=l / (4. * n))
blobs = im > 0.7 * im.mean()
all_labels = measure.label(blobs)
blobs_labels = measure.label(blobs, background=0)
plt.figure(figsize=(9, 3.5))
plt.subplot(131)
plt.imshow(blobs, cmap='gray')
plt.axis('off')
plt.subplot(132)
plt.imshow(all_labels, cmap='spectral')
plt.axis('off')
plt.subplot(133)
plt.imshow(blobs_labels, cmap='spectral')
plt.axis('off')
plt.tight_layout()
plt.show()
plt.gray()
plt.imshow(image, interpolation='nearest')
plt.plot(coords_subpix[:, 1], coords_subpix[:, 0], '+r', markersize=15, mew=5)
plt.plot(coords[:, 1], coords[:, 0], '.b', markersize=7)
plt.axis('off')
plt.show()
This example compares several denoising filters available in scikit-image: a Gaussian filter, a median filter, and
total variation denoising.
import numpy as np
import matplotlib.pyplot as plt
from skimage import data
from skimage import filters
from skimage import restoration
coins = data.coins()
gaussian_filter_coins = filters.gaussian(coins, sigma=2)
med_filter_coins = filters.median(coins, np.ones((3, 3)))
tv_filter_coins = restoration.denoise_tv_chambolle(coins, weight=0.1)
plt.figure(figsize=(16, 4))
plt.subplot(141)
plt.imshow(coins[10:80, 300:370], cmap='gray', interpolation='nearest')
plt.axis('off')
plt.title('Image')
plt.subplot(142)
plt.imshow(gaussian_filter_coins[10:80, 300:370], cmap='gray',
interpolation='nearest')
plt.axis('off')
plt.title('Gaussian filter')
plt.subplot(143)
plt.imshow(med_filter_coins[10:80, 300:370], cmap='gray',
interpolation='nearest')
plt.axis('off')
plt.title('Median filter')
plt.subplot(144)
plt.imshow(tv_filter_coins[10:80, 300:370], cmap='gray',
interpolation='nearest')
plt.axis('off')
plt.title('TV filter')
plt.show()
This example compares two segmentation methods in order to separate two connected disks: the watershed
algorithm, and the random walker algorithm.
Both segmentation methods require seeds, that is, pixels belonging unambiguously to a region. Here, local maxima of the distance map to the background are used as seeds.
import numpy as np
from skimage.morphology import watershed
from skimage.feature import peak_local_max
from skimage import measure
from skimage.segmentation import random_walker
import matplotlib.pyplot as plt
from scipy import ndimage
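# Reconstructed setup (an assumption based on the description above): two
# overlapping disks, the distance map to the background, and seed markers
# placed at the local maxima of that distance map.
x, y = np.indices((80, 80))
x1, y1, x2, y2 = 28, 28, 44, 52
r1, r2 = 16, 20
image = ((x - x1)**2 + (y - y1)**2 < r1**2) | ((x - x2)**2 + (y - y2)**2 < r2**2)
distance = ndimage.distance_transform_edt(image)
local_maxi = peak_local_max(distance, indices=False, footprint=np.ones((3, 3)),
                            labels=image.astype(int))
markers = ndimage.label(local_maxi)[0]
labels_ws = watershed(-distance, markers, mask=image)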
markers[~image] = -1
labels_rw = random_walker(image, markers)
plt.figure(figsize=(12, 3.5))
plt.subplot(141)
plt.imshow(image, cmap='gray', interpolation='nearest')
plt.axis('off')
plt.title('image')
plt.subplot(142)
plt.imshow(-distance, interpolation='nearest')
plt.axis('off')
plt.title('distance map')
plt.subplot(143)
plt.imshow(labels_ws, cmap='spectral', interpolation='nearest')
plt.axis('off')
plt.title('watershed segmentation')
plt.subplot(144)
plt.imshow(labels_rw, cmap='spectral', interpolation='nearest')
plt.axis('off')
plt.title('random walker segmentation')
plt.tight_layout()
plt.show()
Tip: In this tutorial we will explore the Traits toolset and learn how to dramatically reduce the amount of boilerplate code you write, do rapid GUI application development, and understand the ideas which underlie other parts of the Enthought Tool Suite.
Traits and the Enthought Tool Suite are open source projects licensed under a BSD-style license.
Intended Audience
Requirements
Tutorial content
Introduction
Example
What are Traits
Initialisation
Validation
Documentation
Visualization: opening a dialog
Deferral
Notification
Some more advanced traits
18.1 Introduction
Tip: The Enthought Tool Suite enables the construction of sophisticated application frameworks for data analysis, 2D plotting and 3D visualization. These powerful, reusable components are released under liberal BSD-style licenses.
18.2 Example
Throughout this tutorial, we will use an example based on a simple water resource management case. We will try to model a dam and reservoir system. The reservoir and the dams have a set of parameters:
Name
Minimal and maximal capacity of the reservoir [hm3]
Height and length of the dam [m]
Catchment area [km2]
Hydraulic head [m]
Power of the turbines [MW]
Minimal and maximal release [m3/s]
Efficiency of the turbines
The reservoir has a known behaviour. One part is related to the energy production based on the water released.
A simple formula for approximating electric power production at a hydroelectric plant is P = ρ h r g k, where:
P is power in watts,
ρ is the density of water (~1000 kg/m3),
h is height in meters,
r is flow rate in cubic meters per second,
g is acceleration due to gravity of 9.8 m/s2,
k is a coefficient of efficiency ranging from 0 to 1.
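As a rough sketch (not from the original text), the formula translates directly into a small helper function; the constants are the values ρ = 1000 kg/m3 and g = 9.8 m/s2 quoted above, and the sample numbers are only illustrative:
def hydro_power(head, flow_rate, efficiency):
    """Electric power [W] for a given head [m], flow rate [m3/s] and efficiency."""
    rho, g = 1000.0, 9.8   # water density and gravity, as in the formula above
    return rho * head * flow_rate * g * efficiency

print(hydro_power(head=60, flow_rate=80, efficiency=0.8))   # ~3.8e7 W, i.e. about 38 MW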
Tip: Annual electric energy production depends on the available water supply. In some installations the water
flow rate can vary by a factor of 10:1 over the course of a year.
The second part of the behaviour is the state of the storage that depends on controlled and uncontrolled parameters:
storage[t+1] = storage[t] + inflows - release - spillage - irrigation
Warning: The data used in this tutorial are not real and might not even make sense in reality.
A trait is a type definition that can be used for normal Python object attributes, giving the attributes some
additional characteristics:
Standardization:
Initialization
Validation
Deferral
Notification
Visualization
Documentation
A class can freely mix trait-based attributes with normal Python attributes, or can opt to allow the use of only
a fixed or open set of trait attributes within the class. Trait attributes defined by a class are automatically
inherited by any subclass derived from the class.
The common way of creating a traits class is by extending from the HasTraits base class and defining class traits:
from traits.api import HasTraits, Str, Float

class Reservoir(HasTraits):
name = Str
max_storage = Float
Using a traits class like that is as simple as any other Python class. Note that the trait values are passed using keyword arguments:
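For instance (a minimal sketch; the attribute values are only illustrative):
reservoir = Reservoir(name='Lake Azul', max_storage=10)
print reservoir.name         # 'Lake Azul'
print reservoir.max_storage  # 10.0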
18.3.1 Initialisation
All traits have a default value that initialises the variable. For example, the basic Python types have trait equivalents with natural defaults: Bool (False), Int (0), Float (0.0), Str ('').
A number of other predefined trait types exist: Array, Enum, Range, Event, Dict, List, Color, Set, Expression, Code, Callable, Type, Tuple, etc.
Custom default values can be defined in the code:
class Reservoir(HasTraits):
name = Str
max_storage = Float(100)
Complex initialisation
When a complex initialisation is required for a trait, a _XXX_default magic method can be implemented. It
will be lazily called when trying to access the XXX trait. For example:
def _name_default(self):
""" Complex initialisation of the reservoir name. """
return 'Undefined'
18.3.2 Validation
Every trait does validation when the user tries to set its content:
reservoir.max_storage = '230'
---------------------------------------------------------------------------
TraitError Traceback (most recent call last)
.../scipy-lecture-notes/advanced/traits/<ipython-input-7-979bdff9974a> in <module>()
----> 1 reservoir.max_storage = '230'
TraitError: The 'max_storage' trait of a Reservoir instance must be a float, but a value of '230' <type 'str'> was specified.
18.3.3 Documentation
In essence, all traits provide documentation about the model itself. The declarative approach to the creation of classes makes it self-descriptive:
class Reservoir(HasTraits):
name = Str
max_storage = Float(100)
The desc metadata of the traits can be used to provide more descriptive information about the trait:
class Reservoir(HasTraits):
name = Str
max_storage = Float(100, desc='Maximal storage [hm3]')
class Reservoir(HasTraits):
    name = Str
    max_storage = Float(1e6, desc='Maximal storage [hm3]')
    max_release = Float(10, desc='Maximal release [m3/s]')
    head = Float(10, desc='Hydraulic head [m]')
    efficiency = Range(0, 1.)

    def energy_production(self, release):
        """ Energy production [Wh] for the given release [m3/s]
            (reconstructed from the formula P = rho*h*r*g*k given above). """
        power = 1000 * 9.8 * self.head * release * self.efficiency
        return power * 3600
if __name__ == '__main__':
reservoir = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 100.0,
head = 60,
efficiency = 0.8
)
release = 80
print 'Releasing {} m3/s produces {} kWh'.format(
release, reservoir.energy_production(release)
)
The Traits library is also aware of user interfaces and can pop up a default view for the Reservoir class:
reservoir1 = Reservoir()
reservoir1.edit_traits()
TraitsUI simplifies the way user interfaces are created. Every trait on a HasTraits class has a default editor that
will manage the way the trait is rendered to the screen (e.g. the Range trait is displayed as a slider, etc.).
In the very same vein as the Traits declarative way of creating classes, TraitsUI provides a declarative interface to build user interface code:
class Reservoir(HasTraits):
name = Str
max_storage = Float(1e6, desc='Maximal storage [hm3]')
max_release = Float(10, desc='Maximal release [m3/s]')
head = Float(10, desc='Hydraulic head [m]')
efficiency = Range(0, 1.)
traits_view = View(
'name', 'max_storage', 'max_release', 'head', 'efficiency',
title = 'Reservoir',
resizable = True,
)
if __name__ == '__main__':
reservoir = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 100.0,
head = 60,
efficiency = 0.8
)
reservoir.configure_traits()
18.3.5 Deferral
Being able to defer the definition of a trait and its value to another object is a powerful feature of Traits.
class ReservoirState(HasTraits):
"""Keeps track of the reservoir state given the initial storage.
"""
reservoir = Instance(Reservoir, ())
min_storage = Float
max_storage = DelegatesTo('reservoir')
min_release = Float
max_release = DelegatesTo('reservoir')
# state attributes
storage = Range(low='min_storage', high='max_storage')
# control attributes
inflows = Float(desc='Inflows [hm3]')
release = Range(low='min_release', high='max_release')
spillage = Float(desc='Spillage [hm3]')
def print_state(self):
print 'Storage\tRelease\tInflows\tSpillage'
str_format = '\t'.join(['{:7.2f} 'for i in range(4)])
print str_format.format(self.storage, self.release, self.inflows,
self.spillage)
print '-' * 79
if __name__ == '__main__':
projectA = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 100.0,
hydraulic_head = 60,
efficiency = 0.8
)
A special trait allows managing events and triggering function calls using the magic _xxxx_fired method:
class ReservoirState(HasTraits):
"""Keeps track of the reservoir state given the initial storage.
# state attributes
storage = Range(low='min_storage', high='max_storage')
# control attributes
def _update_storage_fired(self):
# update storage state
new_storage = self.storage - self.release + self.inflows
self.storage = min(new_storage, self.max_storage)
overflow = new_storage - self.max_storage
self.spillage = max(overflow, 0)
def print_state(self):
print 'Storage\tRelease\tInflows\tSpillage'
str_format = '\t'.join(['{:7.2f} 'for i in range(4)])
print str_format.format(self.storage, self.release, self.inflows,
self.spillage)
print '-' * 79
if __name__ == '__main__':
projectA = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 5.0,
hydraulic_head = 60,
efficiency = 0.8
)
Dependency between objects can be made automatic using the trait Property. The depends_on attribute expresses the dependency between the property and other traits. When the other traits get changed, the property is invalidated. Again, Traits uses magic method names for the property:
_get_XXX for the getter of the XXX Property trait
_set_XXX for the setter of the XXX Property trait
class ReservoirState(HasTraits):
"""Keeps track of the reservoir state given the initial storage.
min_release = Float
max_release = DelegatesTo('reservoir')
# state attributes
storage = Property(depends_on='inflows, release')
# control attributes
inflows = Float(desc='Inflows [hm3]')
release = Range(low='min_release', high='max_release')
spillage = Property(
desc='Spillage [hm3]', depends_on=['storage', 'inflows', 'release']
)
def _get_spillage(self):
new_storage = self._storage - self.release + self.inflows
overflow = new_storage - self.max_storage
return max(overflow, 0)
def print_state(self):
print 'Storage\tRelease\tInflows\tSpillage'
str_format = '\t'.join(['{:7.2f} 'for i in range(4)])
print str_format.format(self.storage, self.release, self.inflows,
self.spillage)
print '-' * 79
if __name__ == '__main__':
projectA = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 5,
hydraulic_head = 60,
efficiency = 0.8
)
state.print_state()
class ReservoirState(HasTraits):
"""Keeps track of the reservoir state given the initial storage.
# state attributes
storage = Property(depends_on='inflows, release')
# control attributes
inflows = Float(desc='Inflows [hm3]')
release = Range(low='min_release', high='max_release')
spillage = Property(
desc='Spillage [hm3]', depends_on=['storage', 'inflows', 'release']
)
def _get_spillage(self):
new_storage = self._storage - self.release + self.inflows
overflow = new_storage - self.max_storage
return max(overflow, 0)
def print_state(self):
print 'Storage\tRelease\tInflows\tSpillage'
str_format = '\t'.join(['{:7.2f} 'for i in range(4)])
print str_format.format(self.storage, self.release, self.inflows,
self.spillage)
print '-' * 79
if __name__ == '__main__':
projectA = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 5,
hydraulic_head = 60,
efficiency = 0.8
)
state.print_state()
state.configure_traits()
Some use cases need the delegation mechanism to be broken by the user when setting the value of the trait. The PrototypedFrom trait implements this behaviour.
class Turbine(HasTraits):
turbine_type = Str
power = Float(1.0, desc='Maximal power delivered by the turbine [Mw]')
class Reservoir(HasTraits):
name = Str
max_storage = Float(1e6, desc='Maximal storage [hm3]')
max_release = Float(10, desc='Maximal release [m3/s]')
head = Float(10, desc='Hydraulic head [m]')
efficiency = Range(0, 1.)
turbine = Instance(Turbine)
installed_capacity = PrototypedFrom('turbine', 'power')
if __name__ == '__main__':
turbine = Turbine(turbine_type='type1', power=5.0)
reservoir = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 100.0,
head = 60,
efficiency = 0.8,
turbine = turbine,
)
print '-' * 15
print 'updating the turbine power updates the installed capacity'
turbine.power = 10
print reservoir.installed_capacity
print '-' * 15
print 'setting the installed capacity breaks the link between turbine.power'
print 'and the installed_capacity trait'
reservoir.installed_capacity = 8
print turbine.power, reservoir.installed_capacity
18.3.6 Notification
Traits implements a Listener pattern. For each trait, a list of static and dynamic listeners can be fed with callbacks. When the trait changes, all the listeners are called.
Static listeners are defined using the _XXX_changed magic methods:
class ReservoirState(HasTraits):
"""Keeps track of the reservoir state given the initial storage.
"""
reservoir = Instance(Reservoir, ())
min_storage = Float
max_storage = DelegatesTo('reservoir')
min_release = Float
max_release = DelegatesTo('reservoir')
# state attributes
storage = Range(low='min_storage', high='max_storage')
# control attributes
inflows = Float(desc='Inflows [hm3]')
release = Range(low='min_release', high='max_release')
spillage = Float(desc='Spillage [hm3]')
def print_state(self):
print 'Storage\tRelease\tInflows\tSpillage'
str_format = '\t'.join(['{:7.2f} 'for i in range(4)])
print str_format.format(self.storage, self.release, self.inflows,
self.spillage)
print '-' * 79
def _release_changed(self, new):
    """ Static listener called whenever the release trait changes. """
    if new > 0:
        print 'Warning, we are releasing {} hm3 of water'.format(new)
if __name__ == '__main__':
projectA = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 100.0,
hydraulic_head = 60,
efficiency = 0.8
)
To listen to all the changes on a HasTraits class, the magic _any_trait_changed method can be implemented.
In many situations, you do not know in advance what type of listeners need to be activated. Traits offers the ability to register listeners on the fly with dynamic listeners:
def wake_up_watchman_if_spillage(new_value):
if new_value > 0:
print 'Wake up watchman! Spilling {} hm3'.format(new_value)
if __name__ == '__main__':
projectA = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 100.0,
hydraulic_head = 60,
efficiency = 0.8
)
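# Assumed from context: create the state and register the dynamic listener
# on the 'spillage' trait (on_trait_change takes the callback and the trait name)
state = ReservoirState(reservoir=projectA, storage=10)
state.on_trait_change(wake_up_watchman_if_spillage, name='spillage')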
state.release = 90
state.inflows = 0
state.print_state()
The dynamic trait notification signatures are not the same as the static ones:
def wake_up_watchman(): pass
def wake_up_watchman(new): pass
def wake_up_watchman(name, new): pass
def wake_up_watchman(object, name, new): pass
def wake_up_watchman(object, name, old, new): pass
Removing a dynamic listener can be done by:
calling the remove_trait_listener method on the trait with the listener method as argument,
calling the on_trait_change method with the listener method and the keyword remove=True (see the sketch after this list),
deleting the instance that holds the listener.
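For example, the second option looks like this (a sketch, reusing the watchman callback and the state object from above):
state.on_trait_change(wake_up_watchman_if_spillage, name='spillage', remove=True)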
Listeners can also be added to classes using the on_trait_change decorator:
class ReservoirState(HasTraits):
"""Keeps track of the reservoir state given the initial storage.
# state attributes
storage = Property(depends_on='inflows, release')
# control attributes
inflows = Float(desc='Inflows [hm3]')
release = Range(low='min_release', high='max_release')
spillage = Property(
desc='Spillage [hm3]', depends_on=['storage', 'inflows', 'release']
)
def _get_spillage(self):
new_storage = self._storage - self.release + self.inflows
overflow = new_storage - self.max_storage
return max(overflow, 0)
@on_trait_change('storage')
def print_state(self):
print 'Storage\tRelease\tInflows\tSpillage'
str_format = '\t'.join(['{:7.2f} 'for i in range(4)])
print str_format.format(self.storage, self.release, self.inflows,
self.spillage)
print '-' * 79
if __name__ == '__main__':
projectA = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 5,
hydraulic_head = 60,
efficiency = 0.8
)
The patterns supported by the on_trait_change method and decorator are powerful. The reader should look at
the docstring of HasTraits.on_trait_change for the details.
The following example demonstrates the usage of the Enum and List traits:
class IrrigationArea(HasTraits):
name = Str
surface = Float(desc='Surface [ha]')
crop = Enum('Alfalfa', 'Wheat', 'Cotton')
class Reservoir(HasTraits):
name = Str
max_storage = Float(1e6, desc='Maximal storage [hm3]')
max_release = Float(10, desc='Maximal release [m3/s]')
head = Float(10, desc='Hydraulic head [m]')
efficiency = Range(0, 1.)
irrigated_areas = List(IrrigationArea)
traits_view = View(
Item('name'),
Item('max_storage'),
Item('max_release'),
Item('head'),
Item('efficiency'),
Item('irrigated_areas'),
resizable = True
)
if __name__ == '__main__':
upper_block = IrrigationArea(name='Section C', surface=2000, crop='Wheat')
reservoir = Reservoir(
name='Project A',
max_storage=30,
max_release=100.0,
head=60,
efficiency=0.8,
irrigated_areas=[upper_block]
)
release = 80
print 'Releasing {} m3/s produces {} kWh'.format(
release, reservoir.energy_production(release)
)
Trait listeners can be used to listen to changes in the content of the list to e.g. keep track of the total crop surface linked to a given reservoir.
from traits.api import HasTraits, Str, Float, Range, Enum, List, Property
from traitsui.api import View, Item
class IrrigationArea(HasTraits):
name = Str
surface = Float(desc='Surface [ha]')
crop = Enum('Alfalfa', 'Wheat', 'Cotton')
class Reservoir(HasTraits):
name = Str
max_storage = Float(1e6, desc='Maximal storage [hm3]')
max_release = Float(10, desc='Maximal release [m3/s]')
head = Float(10, desc='Hydraulic head [m]')
efficiency = Range(0, 1.)
irrigated_areas = List(IrrigationArea)
total_crop_surface = Property(depends_on='irrigated_areas.surface')
def _get_total_crop_surface(self):
return sum([iarea.surface for iarea in self.irrigated_areas])
traits_view = View(
Item('name'),
Item('max_storage'),
Item('max_release'),
Item('head'),
Item('efficiency'),
Item('irrigated_areas'),
Item('total_crop_surface'),
resizable = True
)
if __name__ == '__main__':
upper_block = IrrigationArea(name='Section C', surface=2000, crop='Wheat')
reservoir = Reservoir(
name='Project A',
max_storage=30,
max_release=100.0,
head=60,
efficiency=0.8,
irrigated_areas=[upper_block],
)
release = 80
print 'Releasing {} m3/s produces {} kWh'.format(
release, reservoir.energy_production(release)
)
The next example shows how the Array trait can be used to feed a specialised TraitsUI Item, the ChacoPlotItem:
import numpy as np
class ReservoirEvolution(HasTraits):
reservoir = Instance(Reservoir)
name = DelegatesTo('reservoir')
initial_stock = Float
stock = Property(depends_on='inflows, releases, initial_stock')
month = Property(depends_on='stock')
def _get_month(self):
return np.arange(self.stock.size)
if __name__ == '__main__':
reservoir = Reservoir(
name = 'Project A',
max_storage = 30,
max_release = 100.0,
head = 60,
efficiency = 0.8
)
initial_stock = 10.
inflows_ts = np.array([6., 6, 4, 4, 1, 2, 0, 0, 3, 1, 5, 3])
releases_ts = np.array([4., 5, 3, 5, 3, 5, 5, 3, 2, 1, 3, 3])
view = ReservoirEvolution(
reservoir = reservoir,
inflows = inflows_ts,
releases = releases_ts
)
view.configure_traits()
See also:
References
ETS repositories
Traits manual
Traits UI manual
Mailing list : enthought-dev@enthought.com
Tip: Mayavi is an interactive 3D plotting package. matplotlib can also do simple 3D plotting, but Mayavi relies on a more powerful engine (VTK) and is more suited to displaying large or complex data.
Chapters contents
* Points
* Lines
* Elevation surface
* Arbitrary regular mesh
* Volumetric data
Figures and decorations
* Figure management
* Changing plot properties
* Decorations
Interactive work
The pipeline dialog
The mayavi.mlab module provides simple plotting functions to apply to numpy arrays, similar to matplotlib or MATLAB's plotting interface. Try using them in IPython, by starting IPython with the switch --gui=wx.
Points
Hint: Points in 3D, represented with markers (or glyphs) and optionally different sizes.
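A minimal sketch (the random values are only illustrative):
import numpy as np
from mayavi import mlab

x, y, z, value = np.random.random((4, 40))
mlab.points3d(x, y, z, value)   # one glyph per point, sized and colored by value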
Lines
Hint: A line connecting points in 3D, with optional thickness and varying color.
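For instance (a short sketch, assuming the imports from the previous snippet):
mlab.clf()  # clear the figure
t = np.linspace(0, 20, 200)
mlab.plot3d(np.sin(t), np.cos(t), 0.1 * t, t)   # a colored 3D spiral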
Elevation surface
mlab.clf()
x, y = np.mgrid[-10:10:100j, -10:10:100j]
r = np.sqrt(x**2 + y**2)
z = np.sin(r)/r
mlab.surf(z, warp_scale='auto')
mlab.clf()
phi, theta = np.mgrid[0:np.pi:11j, 0:2*np.pi:11j]
x = np.sin(phi) * np.cos(theta)
y = np.sin(phi) * np.sin(theta)
z = np.cos(phi)
mlab.mesh(x, y, z)
mlab.mesh(x, y, z, representation='wireframe', color=(0, 0, 0))
Note: A surface is defined by points connected to form triangles or polygons. In mayavi.mlab.surf() and mayavi.mlab.mesh(), the connectivity is implicitly given by the layout of the arrays. See also mayavi.mlab.triangular_mesh().
Our data is often more than points and values: it needs some connectivity information
Volumetric data
Hint: If your data is dense in 3D, it is more difficult to display. One option is to take iso-contours of the data.
mlab.clf()
x, y, z = np.mgrid[-5:5:64j, -5:5:64j, -5:5:64j]
values = x*x*0.5 + y*y + z*z*2.0
mlab.contour3d(values)
This function works with a regular orthogonal grid: the value array is a 3D array that gives the shape of the
grid.
Figure management
Tip: In general, many properties of the various objects on the figure can be changed. If these visualizations are created via mlab functions, the easiest way to change them is to use the keyword arguments of these functions, as described in the docstrings.
x, y, z are 2D arrays, all of the same shape, giving the positions of the vertices of the surface. The connectivity
between these points is implied by the connectivity on the arrays.
For simple structures (such as orthogonal grids) prefer the surf function, as it will create more efficient data
structures.
Keyword arguments:
color the color of the vtk object. Overrides the colormap, if any, when specified. This is specified as a triplet of floats ranging from 0 to 1, e.g. (1, 1, 1) for white.
Example:
In [3]: x = r * np.cos(theta)
In [4]: y = r * np.sin(theta)
In [5]: z = np.sin(r)/r
Decorations
Tip: Different items can be added to the figure to carry extra information, such as a colorbar or a title.
In [11]: mlab.outline(Out[7])
Out[11]: <enthought.mayavi.modules.outline.Outline object at 0xdd21b6c>
In [12]: mlab.axes(Out[7])
Out[12]: <enthought.mayavi.modules.axes.Axes object at 0xd2e4bcc>
Warning: extent: If we specified extents for a plotting object, mlab.outline and mlab.axes don't get them by default.
Tip: The quickest way to create beautiful visualizations with Mayavi is probably to interactively tweak the various settings.
Click on the Mayavi button in the scene, and you can control properties of objects with dialogs.
To find out what code can be used to program these changes, click on the red button as you modify those
properties, and it will generate the corresponding lines of code.
Suppose we are simulating the magnetic field generated by Helmholtz coils. The examples/compute_field.
py script does this computation and gives you a B array, that is (3 x n), where the first axis is the direction of
the field (Bx, By, Bz), and the second axis the index number of the point. Arrays X, Y and Z give the positions
of these data points.
Exercise
Visualize this field. Your goal is to make sure that the simulation code is correct.
Suggestions
If you compute the norm of the vector field, you can apply an isosurface to it.
Using mayavi.mlab.quiver3d() you can plot vectors. You can also use the masking options (in the GUI) to make the plot a bit less dense.
Tip: As we see above, it may be desirable to look at the same data in different ways.
Mayavi visualizations are created by loading the data in a data source and then displaying it on the screen using modules.
This can be seen by looking at the pipeline view. By right-clicking on the nodes of the pipeline, you can add
new modules.
Quiz
A 3D block of regularly-spaced values is structured: it is easy to know how one measurement relates to its neighbors and how to continuously interpolate between them. We can call such data a field, borrowing from terminology used in physics, as it is continuously defined in space.
A set of data points measured at random positions in a random order gives rise to much more difficult and ill-posed interpolation problems: the data structure itself does not tell us what the neighbors of a data point are. We call such data a scatter.
Unstructured and unconnected data (a scatter): mlab.points3d, mlab.quiver3d
Structured and connected data (a field): mlab.contour3d
Exercise:
1. Create a contour (for instance of the magnetic field norm) by using one of those functions and adding
the right module by clicking on the GUI dialog.
2. Create the right source to apply a vector_cut_plane and reproduce the picture of the magnetic field
shown previously.
Note that one of the difficulties is providing the data in the right form (number of arrays, shape) to the
functions. This is often the case with real-life data.
See also:
Sources are described in detail in the Mayavi manual.
If you create a vector field, you may want to visualize the iso-contours of its magnitude. But the isosurface
module can only be applied to scalar data, and not vector data. We can use a filter, ExtractVectorNorm to
add this scalar value to the vector field.
Filters apply a transformation to data, and can be added between sources and modules
Exercise
Using the GUI, add the ExtractVectorNorm filter to display iso-contours of the field magnitude.
The mlab scripting layer builds pipelines for you. You can reproduce these pipelines programmatically with
the mlab.pipeline interface: each step has a corresponding mlab.pipeline function (simply convert the
name of the step to lower-case underscore-separated: ExtractVectorNorm gives extract_vector_norm). This
function takes as an argument the node that it applies to, as well as optional parameters, and returns the new
node.
For example, iso-contours of the magnitude are coded as:
mlab.pipeline.iso_surface(mlab.pipeline.extract_vector_norm(field),
contours=[0.1*Bmax, 0.4*Bmax],
opacity=0.5)
Exercise
Using the mlab.pipeline interface, generate a complete visualization, with iso-contours of the field magni-
tude, and a vector cut plane.
(click on the figure for a solution)
Tip: To make movies, or interactive applications, you may want to change the data represented on a given visualization.
If you have built a visualization using the mlab plotting functions, or the mlab.pipeline functions, we can update the data by assigning new values to the mlab_source attributes.
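A minimal sketch (the surface and the scaling factor below are only illustrative):
import numpy as np
from mayavi import mlab

x, y = np.mgrid[0:3:1, 0:3:1]
s = mlab.surf(x, y, np.asarray(x * 0.1, 'd'))
for i in range(10):
    # changing the scalars of the existing source updates the plot in place
    s.mlab_source.scalars = np.asarray(x * 0.1 * (i + 1), 'd')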
See also:
More details in the Mayavi documentation
Event loops
For the interaction with the user (for instance changing the view with the mouse), Mayavi needs some time
to process these events. The for loop above prevents this. The Mayavi documentation details a workaround
It is very simple to make interactive dialogs with Mayavi using the Traits library (see the dedicated chapter
Traits: building interactive dialogs).
def curve(n_turns):
"The function creating the x, y, z coordinates needed to plot"
phi = np.linspace(0, 2*np.pi, 2000)
return [np.cos(phi) * (1 + 0.5*np.cos(n_turns*phi)),
np.sin(phi) * (1 + 0.5*np.cos(n_turns*phi)),
0.5*np.sin(n_turns*phi)]
class Visualization(HasTraits):
"The class that contains the dialog"
scene = Instance(MlabSceneModel, ())
def __init__(self):
HasTraits.__init__(self)
x, y, z = curve(n_turns=2)
# Populating our plot
self.plot = self.scene.mlab.plot3d(x, y, z)
Second, the dialog is defined by an object inheriting from HasTraits, as it is done with Traits. The important
point here is that a Mayavi scene is added as a specific Traits attribute (Instance). This is important for
embedding it in the dialog. The view of this dialog is defined by the view attribute of the object. In the init of
this object, we populate the 3D scene with a curve.
Finally, the configure_traits method creates the dialog and starts the event loop.
See also:
There are a few things to be aware of when doing dialogs with Mayavi. Please read the Mayavi documentation
We can combine the Traits events handler with the mlab_source to modify the visualization with the dialog.
We will enable the user to vary the n_turns parameter in the definition of the curve. For this, we need:
to define an n_turns attribute on our visualization object, so that it can appear in the dialog. We use a
Range type.
to wire modification of this attribute to a recomputation of the curve. For this, we use the on_trait_change decorator.
class Visualization(HasTraits):
n_turns = Range(0, 30, 11)
scene = Instance(MlabSceneModel, ())
def __init__(self):
HasTraits.__init__(self)
x, y, z = curve(self.n_turns)
self.plot = self.scene.mlab.plot3d(x, y, z)
@on_trait_change('n_turns')
def update_plot(self):
x, y, z = curve(self.n_turns)
self.plot.mlab_source.set(x=x, y=y, z=z)
Exercise
Using the code from the magnetic field simulation, create a dialog that enables the user to move the 2 coils: change their parameters.
Hint: to define a dialog entry for a vector of dimension 3
direction = Array(float, value=(0, 0, 1), cols=3, shape=(3,))
You can look at the example_coil_application.py to see a full-blown application for coil design in 270 lines of
code.
Prerequisites
numpy
scipy
matplotlib (optional)
ipython (the enhancements come handy)
Acknowledgements
This chapter is adapted from a tutorial given by Gaël Varoquaux, Jake Vanderplas and Olivier Grisel.
See also:
Data science in Python
The Statistics in Python chapter may also be of interest for readers looking into machine learning.
The documentation of scikit-learn is very complete and didactic.
Chapters contents
Tip: Machine Learning is about building programs with tunable parameters that are adjusted automatically
so as to improve their behavior by adapting to previously seen data.
Machine Learning can be considered a subfield of Artificial Intelligence since those algorithms can be seen as building blocks to make computers learn to behave more intelligently by somehow generalizing rather than just storing and retrieving data items like a database system would do.
We'll take a look at two very simple machine learning tasks here. The first is a classification task: the figure shows a collection of two-dimensional data, colored according to two different class labels. A classification algorithm may be used to draw a dividing boundary between the two clusters of points:
By drawing this separating line, we have learned a model which can generalize to new data: if you were to drop another point onto the plane which is unlabeled, this algorithm could now predict whether it's a blue or a red point.
The next simple task we'll look at is a regression task: a simple best-fit line to a set of data.
Again, this is an example of fitting a model to data, but our focus here is that the model can make generalizations about new data. The model has been learned from the training data, and can be used to predict the result of test data: here, we might be given an x-value, and the model would allow us to predict the y value.
Machine learning algorithms implemented in scikit-learn expect data to be stored in a two-dimensional array
or matrix. The arrays can be either numpy arrays, or in some cases scipy.sparse matrices. The size of the
array is expected to be [n_samples, n_features]
n_samples: The number of samples: each sample is an item to process (e.g. classify). A sample can be a document, a picture, a sound, a video, an astronomical object, a row in a database or CSV file, or whatever you can describe with a fixed set of quantitative traits.
n_features: The number of features or distinct traits that can be used to describe each item in a quanti-
tative manner. Features are generally real-valued, but may be boolean or discrete-valued in some cases.
Tip: The number of features must be fixed in advance. However it can be very high dimensional (e.g. millions
of features) with most of them being zeros for a given sample. This is a case where scipy.sparse matrices
can be useful, in that they are much more memory-efficient than numpy arrays.
As an example of a simple dataset, let us look at the iris data stored by scikit-learn. Suppose we want to recognize species of irises. The data consists of measurements of three different species of irises:
Quick Question:
If we want to design an algorithm to recognize iris species, what might the data be?
Remember: we need a 2D array of size [n_samples x n_features].
What would the n_samples refer to?
What might the n_features refer to?
Remember that there must be a fixed number of features for each sample, and feature number i must be a
similar kind of quantity for each sample.
Scikit-learn has a very straightforward set of data on these iris species. The data consist of the following:
Features in the Iris dataset:
sepal length (cm)
sepal width (cm)
petal length (cm)
petal width (cm)
Target classes to predict:
Setosa
Versicolour
Virginica
scikit-learn embeds a copy of the iris CSV file along with a function to load it into numpy arrays:
>>> from sklearn.datasets import load_iris
>>> iris = load_iris()
The features of each sample flower are stored in the data attribute of the dataset:
>>> print(iris.data.shape)
(150, 4)
>>> n_samples, n_features = iris.data.shape
>>> print(n_samples)
150
>>> print(n_features)
4
>>> print(iris.data[0])
[ 5.1 3.5 1.4 0.2]
The information about the class of each sample is stored in the target attribute of the dataset:
>>> print(iris.target.shape)
(150,)
>>> print(iris.target)
[0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 2 2 2 2 2 2 2 2 2 2 2
2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2 2
2 2]
The names of the classes are stored in the last attribute, namely target_names:
>>> print(iris.target_names)
['setosa' 'versicolor' 'virginica']
This data is four-dimensional, but we can visualize two of the dimensions at a time using a scatter plot:
Exercise:
Can you choose 2 features to find a plot where it is easier to separate the different classes of irises?
Hint: click on the figure above to see the code that generates it, and modify this code.
Every algorithm is exposed in scikit-learn via an Estimator object. For instance a linear regression is:
sklearn.linear_model.LinearRegression
Estimator parameters: All the parameters of an estimator can be set when it is instantiated:
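For instance (a minimal sketch; at the time of this edition LinearRegression exposed a normalize constructor parameter):
>>> from sklearn.linear_model import LinearRegression
>>> model = LinearRegression(normalize=True)
>>> print(model.normalize)
True
>>> print(model)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True)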
Fitting on data
>>> import numpy as np
>>> x = np.array([0, 1, 2])
>>> y = np.array([0, 1, 2])
>>> X = x[:, np.newaxis] # The input data for sklearn is 2D: (samples == 3 x features == 1)
>>> X
array([[0],
[1],
[2]])
>>> model.fit(X, y)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=True)
Estimated parameters: When data is fitted with an estimator, parameters are estimated from the data at hand.
All the estimated parameters are attributes of the estimator object ending by an underscore:
>>> model.coef_
array([ 1.])
In Supervised Learning, we have a dataset consisting of both features and labels. The task is to construct an
estimator which is able to predict the label of an object given the set of features. A relatively simple example
is predicting the species of iris given a set of measurements of its flower. This is a relatively simple task. Some
more complicated examples are:
given a multicolor image of an object through a telescope, determine whether that object is a star, a
quasar, or a galaxy.
given a photograph of a person, identify the person in the photo.
given a list of movies a person has watched and their personal rating of the movie, recommend a list of
movies they would like (So-called recommender systems: a famous example is the Netflix Prize).
Tip: What these tasks have in common is that there is one or more unknown quantities associated with the
object which needs to be determined from other observed quantities.
Supervised learning is further broken down into two categories, classification and regression. In classifica-
tion, the label is discrete, while in regression, the label is continuous. For example, in astronomy, the task
of determining whether an object is a star, a galaxy, or a quasar is a classification problem: the label is from
three distinct categories. On the other hand, we might wish to estimate the age of an object based on such
observations: this would be a regression problem, because the label (age) is a continuous quantity.
Classification: K nearest neighbors (kNN) is one of the simplest learning strategies: given a new, unknown observation, look up in your reference database which ones have the closest features and assign the predominant class. Let's try it out on our iris classification problem:
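A minimal sketch (fitting a 1-nearest-neighbor classifier on the whole iris dataset; the query flower below is only illustrative and the printed species is omitted here):
>>> from sklearn import neighbors, datasets
>>> iris = datasets.load_iris()
>>> X, y = iris.data, iris.target
>>> knn = neighbors.KNeighborsClassifier(n_neighbors=1)
>>> knn.fit(X, y)
KNeighborsClassifier(...)
>>> # What kind of iris has 3cm x 5cm sepal and 4cm x 2cm petal?
>>> print(iris.target_names[knn.predict([[3, 5, 4, 2]])])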
Fig. 20.3: A plot of the sepal space and the prediction of the KNN
Regression: The simplest possible regression setting is the linear regression one:
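A small sketch of that setting (the synthetic data below is only illustrative):
>>> import numpy as np
>>> from sklearn.linear_model import LinearRegression
>>> rng = np.random.RandomState(0)
>>> x = 30 * rng.rand(20, 1)                   # 20 points between 0 and 30
>>> y = 0.5 * x.ravel() + 1 + rng.randn(20)    # a noisy straight line
>>> regr = LinearRegression()
>>> regr.fit(x, y)
LinearRegression(...)
>>> print(regr.coef_, regr.intercept_)         # close to the true slope 0.5 and intercept 1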
Scikit-learn strives to have a uniform interface across all methods, and we'll see examples of these below. Given a scikit-learn estimator object named model, the following methods are available:
In all Estimators
model.fit() : fit training data. For supervised learning applications, this accepts two
arguments: the data X and the labels y (e.g. model.fit(X, y)). For unsupervised learn-
ing applications, this accepts only a single argument, the data X (e.g. model.fit(X)).
In supervised estimators
model.predict() : given a trained model, predict the label of a new set of data. This
method accepts one argument, the new data X_new (e.g. model.predict(X_new)), and
returns the learned label for each object in the array.
model.predict_proba() : For classification problems, some estimators also provide
this method, which returns the probability that a new observation has each categor-
ical label. In this case, the label with the highest probability is returned by model.
predict().
model.score() : for classification or regression problems, most (all?) estimators imple-
ment a score method. Scores are between 0 and 1, with a larger score indicating a better
fit.
In unsupervised estimators
model.transform() : given an unsupervised model, transform new data into the new
basis. This also accepts one argument X_new, and returns the new representation of the
data based on the unsupervised model.
model.fit_transform() : some estimators implement this method, which more effi-
ciently performs a fit and a transform on the same input data.
Train errors Suppose you are using a 1-nearest neighbor estimator. How many errors do you expect on your
train set?
Train set error is not a good measurement of prediction performance. You need to leave out a test set.
In general, we should accept errors on the train set.
An example of regularization The core idea behind regularization is that we are going to prefer models that
are simpler, for a certain definition of simpler, even if they lead to more errors on the train set.
As an example, let's generate data with a 9th order polynomial, with noise:
And now, let's fit a 4th order and a 9th order polynomial to the data, as in the sketch below.
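A minimal sketch of this experiment (using numpy.polyfit rather than the scikit-learn pipeline used to produce the original figures; the generating polynomial is an assumption):
import numpy as np

rng = np.random.RandomState(0)
x = 2 * rng.rand(100) - 1                                   # 100 points in [-1, 1]
f = lambda t: 1.2 * t**2 + 0.1 * t**3 - 0.4 * t**5 - 0.5 * t**9
y = f(x) + 0.4 * rng.normal(size=100)                       # noisy 9th order polynomial

x_test = np.linspace(-1, 1, 100)
p4 = np.polyval(np.polyfit(x, y, 4), x_test)                # 4th order fit
p9 = np.polyval(np.polyfit(x, y, 9), x_test)                # 9th order fit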
With your naked eyes, which model do you prefer, the 4th order one, or the 9th order one?
Let's look at the ground truth:
Tip: Regularization is ubiquitous in machine learning. Most scikit-learn estimators have a parameter to tune
the amount of regularization. For instance, with k-NN, it is k, the number of nearest neighbors used to make
the decision. k=1 amounts to no regularization: 0 error on the training set, whereas large k will push toward
smoother decision boundaries in the feature space.
Tip: For classification models, the decision boundary, which separates the classes, expresses the complexity of the model. For instance, a linear model, which makes a decision based on a linear combination of features, is less complex (more constrained) than a non-linear one.
Python code and Jupyter notebook for this section are found here
In this section we'll apply scikit-learn to the classification of handwritten digits. This will go a bit beyond the iris classification we saw before: we'll discuss some of the metrics which can be used in evaluating the effectiveness of a classification model.
Let us visualize the data and remind ourselves what we're looking at (click on the figure for the full code):
A good first step for many problems is to visualize the data using a Dimensionality Reduction technique. We'll start with the most straightforward one, Principal Component Analysis (PCA).
start with the most straightforward one, Principal Component Analysis (PCA).
PCA seeks orthogonal linear combinations of the features which show the greatest variance, and as such, can
help give you a good idea of the structure of the data set.
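A minimal sketch of this projection (the two-component PCA below is the usual first try; the digits dataset is loaded here so the snippet is self-contained):
>>> from sklearn.datasets import load_digits
>>> from sklearn.decomposition import PCA
>>> digits = load_digits()
>>> pca = PCA(n_components=2)
>>> proj = pca.fit_transform(digits.data)
>>> print(proj.shape)
(1797, 2)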
Question
Given these projections of the data, which numbers do you think a classifier might have trouble distinguish-
ing?
For most classification problems, it's nice to have a simple, fast method to provide a quick baseline classification. If the simple and fast method is sufficient, then we don't have to waste CPU cycles on more complex models. If not, we can use the results of the simple method to give us clues about our data.
One good method to keep in mind is Gaussian Naive Bayes (sklearn.naive_bayes.GaussianNB).
Tip: Gaussian Naive Bayes fits a Gaussian distribution to each training label independently on each feature, and uses this to quickly give a rough classification. It is generally not sufficiently accurate for real-world data, but can perform surprisingly well, for instance on text data.
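A minimal sketch of the fit that the predictions below rely on (train_test_split keeps a quarter of the samples aside for testing by default; because the split is random, your exact numbers may differ):
>>> from sklearn.naive_bayes import GaussianNB
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target)
>>> clf = GaussianNB()
>>> clf.fit(X_train, y_train)
GaussianNB(...)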
>>> # use the model to predict the labels of the test data
>>> predicted = clf.predict(X_test)
>>> expected = y_test
>>> print(predicted)
[1 7 7 7 8 2 8 0 4 8 7 7 0 8 2 3 5 8 5 3 7 9 6 2 8 2 2 7 3 5...]
>>> print(expected)
[1 0 4 7 8 2 2 0 4 3 7 7 0 8 2 3 4 8 5 3 7 9 6 3 8 2 2 9 3 5...]
As above, we plot the digits with the predicted labels to get an idea of how well the classification is working.
Question
Why did we split the data into training and validation sets?
We'd like to measure the performance of our estimator without having to resort to plotting examples. A simple method might be to simply compare the number of matches:
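For instance (a sketch, reusing predicted and expected from above; the exact fraction depends on the random split):
>>> matches = (predicted == expected)
>>> print(matches.sum(), len(matches))
>>> print(matches.sum() / float(len(matches)))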
We see that more than 80% of the 450 predictions match the input. But there are other more sophisticated
metrics that can be used to judge the performance of a classifier: several are available in the sklearn.metrics
submodule.
One of the most useful metrics is the classification_report, which combines several measures and prints
a table with the results:
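For instance (assuming the expected and predicted arrays from above; the output table is omitted here):
>>> from sklearn import metrics
>>> print(metrics.classification_report(expected, predicted))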
Another enlightening metric for this sort of multi-class classification is a confusion matrix: it helps us visualize
which labels are being interchanged in the classification errors:
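For instance (output omitted here; a full matrix is printed in the chapter examples below):
>>> print(metrics.confusion_matrix(expected, predicted))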
We see here that in particular, the numbers 1, 2, 3, and 9 are often being labeled 8.
Here we'll do a short example of a regression problem: learning a continuous value from a set of features.
Python code and Jupyter notebook for this section are found here
We'll use the simple Boston house prices set, available in scikit-learn. This records measurements of 13
attributes of housing markets around Boston, as well as the median price. The question is: can you predict the
price of a new market given its attributes?:
>>> print(data.target.shape)
(506,)
We can see that there are just over 500 data points.
The DESCR variable has a long description of the dataset:
>>> print(data.DESCR)
Boston House Prices dataset
===========================
Notes
------
Data Set Characteristics:
It often helps to quickly visualize pieces of the data using histograms, scatter plots, or other plot types. With
pylab, let us show a histogram of the target values: the median price in each neighborhood:
>>> plt.hist(data.target)
(array([...
Let's have a quick look to see if some features are more relevant than others for our problem:
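The plotting loop behind those figures is not reproduced above; a minimal sketch, assuming data is the Boston
dataset loaded earlier:

for index, feature_name in enumerate(data.feature_names):
    plt.figure(figsize=(4, 3))
    plt.scatter(data.data[:, index], data.target)
    plt.ylabel('Price', size=15)
    plt.xlabel(feature_name, size=15)
    plt.tight_layout()
plt.show()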
Tip: Sometimes, in Machine Learning it is useful to use feature selection to decide which features are the most
useful for a particular problem. Automated methods exist which quantify this sort of exercise of choosing the
most informative features.
Now we'll use scikit-learn to perform a simple linear regression on the housing data. There are many
possible regressors to use. A particularly simple one is LinearRegression: this is basically a wrapper
around an ordinary least squares calculation.
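A minimal sketch of the fit (the split and the scatter plot are assumptions about the dropped code, not the
exact original):
>>> from sklearn.linear_model import LinearRegression
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(data.data, data.target)
>>> clf = LinearRegression()
>>> clf.fit(X_train, y_train)
LinearRegression(...)
>>> predicted = clf.predict(X_test)
>>> expected = y_test
>>> plt.scatter(expected, predicted)
<matplotlib.collections.PathCollection object at ...>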
Tip: The prediction at least correlates with the true price, though there are clearly some biases. We could
imagine evaluating the performance of the regressor by, say, computing the RMS residuals between the true
and predicted price. There are some subtleties in this, however, which we'll cover in a later section.
There are many other types of regressors available in scikit-learn: we'll try a more powerful one here.
Use the GradientBoostingRegressor class to fit the housing data.
Hint: You can copy and paste some of the above code, replacing LinearRegression with
GradientBoostingRegressor:
from sklearn.ensemble import GradientBoostingRegressor
# Instantiate the model, fit the results, and scatter in vs. out
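One possible completion of this hint (the full solution appears with the chapter examples at the end):

clf = GradientBoostingRegressor()
clf.fit(X_train, y_train)
predicted = clf.predict(X_test)
plt.scatter(y_test, predicted)
plt.plot([0, 50], [0, 50], '--k')      # the perfect-prediction line
plt.xlabel('True price ($1000s)')
plt.ylabel('Predicted price ($1000s)')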
Here we'll continue to look at the digits data, but we'll switch to the K-Neighbors classifier. The K-neighbors
classifier is an instance-based classifier: it predicts the label of an unknown point based on the labels of the
K nearest points in the parameter space.
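A minimal sketch of this first, naive evaluation (scoring on the very data used for training):
>>> from sklearn.datasets import load_digits
>>> from sklearn.neighbors import KNeighborsClassifier
>>> digits = load_digits()
>>> clf = KNeighborsClassifier(n_neighbors=1)
>>> clf.fit(digits.data, digits.target)
KNeighborsClassifier(...)
>>> clf.score(digits.data, digits.target)    # evaluated on the training data itself
1.0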
Apparently, we've found a perfect classifier! But this is misleading for the reasons we saw before: the classifier
essentially memorizes all the samples it has already seen. To really test how well this algorithm does, we need
to try some samples it hasn't yet seen.
This problem also occurs with regression models. In the following we fit another instance-based model, a
decision tree, to the Boston Housing price dataset we introduced previously:
Here again the predictions are seemingly perfect as the model was able to perfectly memorize the training set.
Learning the parameters of a prediction function and testing it on the same data is a methodological mistake:
a model that would just repeat the labels of the samples that it has just seen would have a perfect score but
would fail to predict anything useful on yet-unseen data.
To avoid over-fitting, we have to define two different sets:
a training set X_train, y_train which is used for learning the parameters of a predictive model
a testing set X_test, y_test which is used for evaluating the fitted predictive model
In scikit-learn such a random split can be quickly computed with the train_test_split() function:
Now we train on the training data, and test on the testing data:
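A minimal sketch of both steps, assuming the digits data and the 1-nearest-neighbor classifier from above (the
classification report output is omitted here):
>>> from sklearn import metrics
>>> from sklearn.model_selection import train_test_split
>>> X_train, X_test, y_train, y_test = train_test_split(digits.data, digits.target,
...                                                     random_state=0)
>>> clf = KNeighborsClassifier(n_neighbors=1).fit(X_train, y_train)
>>> y_pred = clf.predict(X_test)
>>> print(metrics.classification_report(y_test, y_pred))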
The averaged f1-score is often used as a convenient measure of the overall performance of an algorithm. It
appears in the bottom row of the classification report; it can also be accessed directly:
The over-fitting we saw previously can be quantified by computing the f1-score on the training data itself:
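A minimal sketch, continuing from the split above (the exact scores depend on the split):
>>> metrics.f1_score(y_test, y_pred, average='macro')
0.9...
>>> # on the training data itself, the over-fit 1-NN model looks perfect
>>> metrics.f1_score(y_train, clf.predict(X_train), average='macro')
1.0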
Note: Regression metrics In the case of regression models, we need to use different metrics, such as explained
variance.
Tip: We have applied Gaussian Naive Bayes, support vector machines, and K-nearest neighbors classifiers to
the digits dataset. Now that we have these validation tools in place, we can ask quantitatively which of the
three estimators works best for this dataset.
With the default hyper-parameters for each estimator, which gives the best f1 score on the validation
set? Recall that hyperparameters are the parameters set when you instantiate the classifier: for example,
the n_neighbors in clf = KNeighborsClassifier(n_neighbors=1)
>>> X = digits.data
>>> y = digits.target
>>> X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y,
... test_size=0.25, random_state=0)
For each classifier, which value for the hyperparameters gives the best results for the digits data? For
LinearSVC, use loss='l2' and loss='l1'. For KNeighborsClassifier we use n_neighbors between 1 and 10.
Note that GaussianNB does not have any adjustable hyperparameters.
LinearSVC(loss='l1'): 0.930570687535
LinearSVC(loss='l2'): 0.933068826918
-------------------
KNeighbors(n_neighbors=1): 0.991367521884
KNeighbors(n_neighbors=2): 0.984844206884
KNeighbors(n_neighbors=3): 0.986775344954
KNeighbors(n_neighbors=4): 0.980371905382
KNeighbors(n_neighbors=5): 0.980456280495
KNeighbors(n_neighbors=6): 0.975792419414
KNeighbors(n_neighbors=7): 0.978064579214
KNeighbors(n_neighbors=8): 0.978064579214
KNeighbors(n_neighbors=9): 0.978064579214
KNeighbors(n_neighbors=10): 0.975555089773
20.5.4 Cross-validation
Cross-validation consists in repeatedly splitting the data in pairs of train and test sets, called "folds". Scikit-learn
comes with a function to automatically compute the score on all these folds. Here we do KFold with k=5.
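A minimal sketch (the choice of estimator here, a linear SVM, is only for illustration):
>>> from sklearn import svm
>>> from sklearn.model_selection import cross_val_score
>>> clf = svm.SVC(C=1, kernel='linear')
>>> cross_val_score(clf, digits.data, digits.target, cv=5)
array([...])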
Tip: There exist many different cross-validation strategies in scikit-learn. They are often useful to take into
account non-iid datasets.
Consider regularized linear models, such as Ridge Regression, which uses l2 regularization, and Lasso
Regression, which uses l1 regularization. Choosing their regularization parameter is important.
Let us set these parameters on the Diabetes dataset, a simple regression problem. The diabetes data consists
of 10 physiological variables (age, sex, weight, blood pressure) measured on 442 patients, and an indication of
disease progression after one year:
We compute the cross-validation score as a function of alpha, the strength of the regularization for Lasso and
Ridge. We choose 30 values of alpha logarithmically spaced between 0.001 and 0.1:
>>> alphas = np.logspace(-3, -1, 30)
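The sweep over alpha that produced the curves can be sketched as follows (assuming the diabetes data; the
3-fold cross-validation is an arbitrary choice):

import numpy as np
from matplotlib import pyplot as plt
from sklearn.datasets import load_diabetes
from sklearn.linear_model import Ridge, Lasso
from sklearn.model_selection import cross_val_score

diabetes = load_diabetes()
X, y = diabetes.data, diabetes.target
alphas = np.logspace(-3, -1, 30)

for Model in [Ridge, Lasso]:
    # mean cross-validation score for each value of the regularization strength
    scores = [cross_val_score(Model(alpha=alpha), X, y, cv=3).mean()
              for alpha in alphas]
    plt.plot(alphas, scores, label=Model.__name__)
plt.legend(loc='lower left')
plt.xlabel('alpha')
plt.ylabel('cross validation score')
plt.show()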
Question
For some models within scikit-learn, cross-validation can be performed more efficiently on large datasets. In
this case, a cross-validated version of the particular model is included. The cross-validated versions of Ridge
and Lasso are RidgeCV and LassoCV, respectively. Parameter search on these estimators can be performed as
follows:
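For instance (assuming X, y and alphas as above; the selected values are not shown):
>>> from sklearn.linear_model import RidgeCV, LassoCV
>>> for Model in [RidgeCV, LassoCV]:
...     model = Model(alphas=alphas).fit(X, y)
...     print('%s chose alpha = %s' % (Model.__name__, model.alpha_))
RidgeCV chose alpha = ...
LassoCV chose alpha = ...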
Nested cross-validation
How do we measure the performance of these estimators? We have used data to set the hyperparameters, so
we need to test on actually new data. We can do this by running cross_val_score() on our CV objects. Here
there are 2 cross-validation loops going on, this is called nested cross validation:
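A minimal sketch of this nested loop (an outer cross_val_score() around the CV estimators, which do the
inner parameter search themselves):
>>> from sklearn.model_selection import cross_val_score
>>> for Model in [RidgeCV, LassoCV]:
...     scores = cross_val_score(Model(alphas=alphas), X, y, cv=3)
...     print('%s: %s' % (Model.__name__, scores.mean()))
RidgeCV: ...
LassoCV: ...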
Note: These results do not match the best results of our curves above, and LassoCV seems to underperform
RidgeCV. The reason is that setting the hyper-parameter is harder for Lasso, thus the estimation error on this
hyper-parameter is larger.
Unsupervised learning is applied on X without y: data without labels. A typical use case is to find hidden
structure in the data.
Dimensionality reduction derives a set of new artificial features smaller than the original feature set. Here we'll
use Principal Component Analysis (PCA), a dimensionality reduction technique that strives to retain most of
the variance of the original data. We'll use sklearn.decomposition.PCA on the iris dataset:
>>> X = iris.data
>>> y = iris.target
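The instantiation and fit were lost in extraction; a minimal sketch, assuming two components and whitening
(consistent with the whitened projection discussed below):
>>> from sklearn import decomposition
>>> pca = decomposition.PCA(n_components=2, whiten=True)
>>> pca.fit(X)
PCA(...)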
Tip: PCA computes linear combinations of the original features using a truncated Singular Value Decomposi-
tion of the matrix X, to project the data onto a base of the top singular vectors.
Once fitted, PCA exposes the singular vectors in the components_ attribute:
>>> pca.components_
array([[ 0.36158..., -0.08226..., 0.85657..., 0.35884...],
[ 0.65653..., 0.72971..., -0.17576..., -0.07470...]])
>>> pca.explained_variance_ratio_
array([ 0.92461..., 0.05301...])
Let us project the iris dataset along those first two dimensions:
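A minimal sketch (the scatter call is an illustrative assumption):
>>> X_pca = pca.transform(X)
>>> plt.scatter(X_pca[:, 0], X_pca[:, 1], c=y)
<matplotlib.collections.PathCollection object at ...>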
PCA normalizes and whitens the data, which means that the data is now centered on both components with
unit variance:
>>> X_pca.mean(axis=0)
array([ ...e-15, ...e-15])
>>> X_pca.std(axis=0)
array([ 1., 1.])
>>> np.corrcoef(X_pca.T)
array([[ 1.00000000e+00, ...],
[ ..., 1.00000000e+00]])
Tip: Note that this projection was determined without any information about the labels (represented by the
colors): this is the sense in which the learning is unsupervised. Nevertheless, we see that the projection gives
us insight into the distribution of the different flowers in parameter space: notably, iris setosa is much more
distinct than the other two species.
For visualization, more complex embeddings can be useful (for statistical analysis, they are harder to control).
sklearn.manifold.TSNE is such a powerful manifold learning method. We apply it to the digits dataset, as
the digits are vectors of dimension 8*8 = 64. Embedding them in 2D enables visualization:
>>> # Take the first 500 data points: it's hard to see 1500 points
>>> X = digits.data[:500]
>>> y = digits.target[:500]
fit_transform
As TSNE cannot be applied to new data, we need to use its fit_transform method.
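A minimal sketch, assuming the X and y defined just above (the random_state is an arbitrary choice):
>>> from sklearn.manifold import TSNE
>>> tsne = TSNE(n_components=2, random_state=0)
>>> X_2d = tsne.fit_transform(X)
>>> plt.scatter(X_2d[:, 0], X_2d[:, 1], c=y)
<matplotlib.collections.PathCollection object at ...>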
sklearn.manifold.TSNE separates quite well the different classes of digits even though it had no access to
the class information.
sklearn.manifold has many other non-linear embeddings. Try them out on the digits dataset. Could you
judge their quality without knowing the labels y?
>>> from sklearn.datasets import load_digits
>>> digits = load_digits()
>>> # ...
Python code and Jupyter notebook for this section are found here
The goal of this example is to show how an unsupervised method and a supervised one can be chained for
better prediction. It starts with a didactic but lengthy way of doing things, and finishes with the idiomatic
approach to pipelining in scikit-learn.
Here we'll take a look at a simple facial recognition example. Ideally, we would use a dataset consisting of a
subset of the Labeled Faces in the Wild data that is available with sklearn.datasets.fetch_lfw_people().
However, this is a relatively large download (~200MB) so we will do the tutorial on a simpler, less rich dataset.
Feel free to explore the LFW dataset.
Tip: Note that these faces have already been localized and scaled to a common size. This is an important
preprocessing piece for facial recognition, and is a process that can require a large collection of training data.
This can be done in scikit-learn, but the challenge is gathering a sufficient amount of training data for the
algorithm to work. Fortunately, this piece is common enough that it has been done. One good resource is
OpenCV, the Open Computer Vision Library.
We'll perform a Support Vector classification of the images. We'll do a typical train-test split on the images:
print(X_train.shape, X_test.shape)
Out:
1850 dimensions is a lot for SVM. We can use PCA to reduce these 1850 features to a manageable size, while
maintaining most of the information in the dataset.
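A minimal sketch of that reduction, assuming 150 components with whitening (consistent with the component
shapes printed below):

from sklearn import decomposition

pca = decomposition.PCA(n_components=150, whiten=True)
pca.fit(X_train)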
One interesting part of PCA is that it computes the mean face, which can be interesting to examine:
plt.imshow(pca.mean_.reshape(faces.images[0].shape),
cmap=plt.cm.bone)
The principal components measure deviations about this mean along orthogonal axes.
print(pca.components_.shape)
Out:
(150, 4096)
The components (eigenfaces) are ordered by their importance from top-left to bottom-right. We see that the
first few components seem to primarily take care of lighting conditions; the remaining components pull out
certain identifying features: the nose, eyes, eyebrows, etc.
With this projection computed, we can now project our original training and test data onto the PCA basis:
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print(X_train_pca.shape)
Out:
(300, 150)
print(X_test_pca.shape)
Out:
(100, 150)
These projected components correspond to factors in a linear combination of component images such that
the combination approaches the original face.
Finally, we can evaluate how well this classification did. First, we might plot a few of the test-cases with the
labels learned from the training set:
import numpy as np
fig = plt.figure(figsize=(8, 6))
for i in range(15):
The classifier is correct on an impressive number of images given the simplicity of its learning model! Using
a linear classifier on 150 features derived from the pixel-level data, the algorithm correctly identifies a large
number of the people in the images.
Again, we can quantify this effectiveness using one of several measures from sklearn.metrics. First we can
do the classification report, which shows the precision, recall and other measures of the goodness of the
classification:
Out:
Another interesting metric is the confusion matrix, which indicates how often any two items are mixed-up.
The confusion matrix of a perfect classifier would only have nonzero entries on the diagonal, with zeros on the
off-diagonal:
print(metrics.confusion_matrix(y_test, y_pred))
Out:
[[4 0 0 ..., 0 0 0]
[0 4 0 ..., 0 0 0]
[0 0 2 ..., 0 0 0]
...,
[0 0 0 ..., 3 0 0]
[0 0 0 ..., 0 1 0]
[0 0 0 ..., 0 0 3]]
20.7.3 Pipelining
Above we used PCA as a pre-processing step before applying our support vector machine classifier. Plugging
the output of one estimator directly into the input of a second estimator is a commonly used pattern; for this
reason scikit-learn provides a Pipeline object which automates this process. The above problem can be re-
expressed as a pipeline as follows:
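A minimal sketch of such a pipeline; the particular estimators and parameters (a 150-component whitened PCA
followed by a linear SVM) are assumptions consistent with the steps used above:

from sklearn import decomposition, svm
from sklearn.pipeline import Pipeline

clf = Pipeline([('pca', decomposition.PCA(n_components=150, whiten=True)),
                ('svm', svm.LinearSVC(C=1.0))])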
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(metrics.confusion_matrix(y_pred, y_test))
See also:
This section is adapted from Andrew Ng's excellent Coursera course
The issues associated with validation and cross-validation are some of the most important aspects of the
practice of machine learning. Selecting the optimal model for your data is vital, and is a piece of the problem
that is not often appreciated by machine learning practitioners.
The central question is: If our estimator is underperforming, how should we move forward?
Use a simpler or a more complicated model?
Add more features to each observed data point?
Add more training samples?
The answer is often counter-intuitive. In particular, sometimes using a more complicated model will give
worse results. Also, sometimes adding training data will not improve your results. The ability to determine
what steps will improve your model is what separates the successful machine learning practitioners from the
unsuccessful.
Python code and Jupyter notebook for this section are found here
Let us start with a simple 1D regression problem. This will help us to easily visualize the data and the model,
and the results generalize easily to higher-dimensional datasets. We'll explore a simple linear regression
problem, with sklearn.linear_model.
In real-life situations, we have noise (e.g. measurement noise) in our data:
np.random.seed(0)
for _ in range(6):
noisy_X = X + np.random.normal(loc=0, scale=.1, size=X.shape)
plt.plot(noisy_X, y, 'o')
regr.fit(noisy_X, y)
plt.plot(X_test, regr.predict(X_test))
As we can see, our linear model captures and amplifies the noise in the data. It displays a lot of variance.
We can use another linear estimator that uses regularization, the Ridge estimator. This estimator regularizes
the coefficients by shrinking them to zero, under the assumption that very high correlations are often spurious.
The alpha parameter controls the amount of shrinkage used.
regr = linear_model.Ridge(alpha=.1)
np.random.seed(0)
for _ in range(6):
noisy_X = X + np.random.normal(loc=0, scale=.1, size=X.shape)
plt.plot(noisy_X, y, 'o')
regr.fit(noisy_X, y)
plt.plot(X_test, regr.predict(X_test))
plt.show()
As we can see, the estimator displays much less variance. However it systematically underestimates the
coefficient. It displays a biased behavior.
This is a typical example of the bias/variance tradeoff: non-regularized estimators are not biased, but they can
display a lot of variance. Highly-regularized models have little variance, but high bias. This bias is not
necessarily a bad thing: what matters is choosing the tradeoff between bias and variance that leads to the best
prediction performance. For a specific dataset there is a sweet spot corresponding to the highest complexity
that the data can support, depending on the amount of noise and the number of observations available.
Tip: Given a particular dataset and a model (e.g. a polynomial), we'd like to understand whether bias (underfit)
or variance limits prediction, and how to tune the hyperparameter (here d, the degree of the polynomial) to give
the best fit.
On a given data, let us fit a simple polynomial regression model with varying degrees:
Tip: In the above figure, we see fits for three different values of d. For d = 1, the data is under-fit. This means
that the model is too simplistic: no straight line will ever be a good fit to this data. In this case, we say that the
model suffers from high bias. The model itself is biased, and this will be reflected in the fact that the data is
poorly fit. At the other extreme, for d = 6 the data is over-fit. This means that the model has too many free
parameters (6 in this case) which can be adjusted to perfectly fit the training data. If we add a new point to this
plot, though, chances are it will be very far from the curve representing the degree-6 fit. In this case, we say
that the model suffers from high variance. The reason for the term high variance is that if any of the input
points are varied slightly, it could result in a very different model.
In the middle, for d = 2, we have found a good mid-point. It fits the data fairly well, and does not suffer from
the bias and variance problems seen in the figures on either side. What we would like is a way to quantitatively
identify bias and variance, and optimize the metaparameters (in this case, the polynomial degree d) in order
to determine the best algorithm.
Validation Curves
Validation curve A validation curve consists in varying a model parameter that controls its complexity (here
the degree of the polynomial) and measuring both the error of the model on training data and on test data
(e.g. with cross-validation). The model parameter is then adjusted so that the test error is minimized:
We use sklearn.model_selection.validation_curve() to compute train and test error, and plot it:
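The computation itself can be sketched as follows, assuming the 1D samples x and targets y used for the figure
above and a pipeline of PolynomialFeatures and LinearRegression whose degree is varied:
>>> from sklearn.model_selection import validation_curve
>>> from sklearn.pipeline import make_pipeline
>>> from sklearn.preprocessing import PolynomialFeatures
>>> from sklearn.linear_model import LinearRegression
>>> model = make_pipeline(PolynomialFeatures(), LinearRegression())
>>> degrees = np.arange(1, 21)
>>> train_scores, validation_scores = validation_curve(
...     model, x[:, np.newaxis], y,
...     param_name='polynomialfeatures__degree',
...     param_range=degrees, cv=5)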
>>> # Plot the mean train score and validation score across folds
>>> plt.plot(degrees, validation_scores.mean(axis=1), label='cross-validation')
[<matplotlib.lines.Line2D object at ...>]
>>> plt.plot(degrees, train_scores.mean(axis=1), label='training')
[<matplotlib.lines.Line2D object at ...>]
>>> plt.legend(loc='best')
<matplotlib.legend.Legend object at ...>
Tip: The astute reader will realize that something is amiss here: in the above plot, d = 4 gives the best results.
But in the previous plot, we found that d = 6 vastly over-fits the data. What's going on here? The difference
is the number of training points used. In the previous example, there were only eight training points. In this
example, we have 100. As a general rule of thumb, the more training points you use, the more complicated a
model you can use. But how can you determine for a given model whether more training points will be helpful?
A useful diagnostic for this is the learning curve.
Learning Curves
A learning curve shows the training and validation score as a function of the number of training points. Note
that when we train on a subset of the training data, the training score is computed using this subset, not the
full training set. This curve gives a quantitative view into how beneficial it will be to add training samples.
Questions:
As the number of training samples is increased, what do you expect to see for the training score? For
the validation score?
Would you expect the training score to be higher or lower than the validation score? Would you ever
expect this to change?
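The computation of the curves can be sketched with sklearn.model_selection.learning_curve(), assuming
the same pipeline as above with its degree fixed (here d = 1):
>>> from sklearn.model_selection import learning_curve
>>> model = make_pipeline(PolynomialFeatures(degree=1), LinearRegression())
>>> train_sizes, train_scores, validation_scores = learning_curve(
...     model, x[:, np.newaxis], y, train_sizes=np.logspace(-1, 0, 20), cv=5)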
>>> # Plot the mean train score and validation score across folds
>>> plt.plot(train_sizes, validation_scores.mean(axis=1), label='cross-validation')
[<matplotlib.lines.Line2D object at ...>]
>>> plt.plot(train_sizes, train_scores.mean(axis=1), label='training')
[<matplotlib.lines.Line2D object at ...>]
Note that the validation score generally increases with a growing training set, while the training score generally
decreases with a growing training set. As the training size increases, they will converge to a single value.
From the above discussion, we know that d = 1 is a high-bias estimator which under-fits the data. This is
indicated by the fact that both the training and validation scores are low. When confronted with this type of
learning curve, we can expect that adding more training data will not help: both lines converge to a relatively
low score.
When the learning curves have converged to a low score, we have a high bias model.
A high-bias model can be improved by:
Using a more sophisticated model (i.e., in this case, increasing d).
Gathering more features for each sample.
Decreasing regularization in a regularized model.
Increasing the number of samples, however, does not improve a high-bias model.
Now lets look at a high-variance (i.e. over-fit) model:
Here we show the learning curve for d = 15. From the above discussion, we know that d = 15 is a high-
variance estimator which over-fits the data. This is indicated by the fact that the training score is much higher
than the validation score. As we add more samples to this training set, the training score will continue to
decrease, while the cross-validation score will continue to increase, until they meet in the middle.
Learning curves that have not yet converged with the full training set indicate a high-variance, over-fit
model.
A high-variance model can be improved by:
Gathering more training samples.
Using a less-sophisticated model (i.e., in this case, making d smaller).
Increasing regularization.
In particular, gathering more features for each sample will not help the results.
We've seen above that an under-performing algorithm can be due to two possible situations: high bias
(under-fitting) and high variance (over-fitting). In order to evaluate our algorithm, we set aside a portion of our training
data for cross-validation. Using the technique of learning curves, we can train on progressively larger subsets
of the data, evaluating the training error and cross-validation error to determine whether our algorithm has
high variance or high bias. But what do we do with this information?
High Bias
High Variance
Using validation schemes to determine hyper-parameters means that we are fitting the hyper-parameters to
the particular validation set. In the same way that parameters can be over-fit to the training set,
hyper-parameters can be over-fit to the validation set. Because of this, the validation error tends to
under-predict the classification error of new data.
For this reason, it is recommended to split the data into three sets:
The training set, used to train the model (usually ~60% of the data)
The validation set, used to validate the model (usually ~20% of the data)
The test set, used to evaluate the expected error of the validated model (usually ~20% of the data)
Many machine learning practitioners do not separate the test set and the validation set. But if your goal is to
gauge the error of a model on unknown data, using an independent test set is vital.
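A minimal sketch of such a three-way split with two calls to train_test_split(), assuming a feature matrix X
and labels y:

from sklearn.model_selection import train_test_split

# hold out ~20% of the data as the final test set ...
X_rest, X_test, y_rest, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
# ... then split the remainder 75/25, giving ~60% training and ~20% validation data
X_train, X_valid, y_train, y_valid = train_test_split(X_rest, y_rest,
                                                      test_size=0.25, random_state=0)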
predicted = clf.predict(data.data)
expected = data.target
Fit a PCA
X_pca = pca.transform(X)
target_ids = range(len(iris.target_names))
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import LinearRegression
# x from 0 to 30
x = 30 * np.random.random((20, 1))
ax.set_xlabel('x')
ax.set_ylabel('y')
ax.axis('tight')
plt.show()
# this formatter will label the colorbar with the correct target names
formatter = plt.FuncFormatter(lambda i, *args: iris.target_names[int(i)])
plt.figure(figsize=(5, 4))
plt.scatter(iris.data[:, x_index], iris.data[:, y_index], c=iris.target)
plt.colorbar(ticks=[0, 1, 2], format=formatter)
plt.xlabel(iris.feature_names[x_index])
plt.ylabel(iris.feature_names[y_index])
plt.tight_layout()
plt.show()
Here we use sklearn.manifold.TSNE to visualize the digits dataset. Indeed, the digits are vectors in an 8*8
= 64 dimensional space. We want to project them in 2D for visualization. tSNE is often a good solution, as it
groups and separates data points based on their local relationship.
Load the iris data
X_2d = tsne.fit_transform(X)
target_ids = range(len(digits.target_names))
20.9.6 Use the RidgeCV and LassoCV to set the regularization parameter
Out:
(442, 10)
Out:
Ridge: 0.409427438303
Lasso: 0.353800083299
We compute the cross-validation score as a function of alpha, the strength of the regularization for Lasso and
Ridge
import numpy as np
from matplotlib import pyplot as plt
plt.figure(figsize=(5, 3))
plt.legend(loc='lower left')
plt.xlabel('alpha')
plt.ylabel('cross validation score')
plt.tight_layout()
plt.show()
import numpy as np
# Smaller figures
from matplotlib import pyplot as plt
plt.rcParams['figure.figsize'] = (3, 2)
In real-life situations, we have noise (e.g. measurement noise) in our data:
np.random.seed(0)
for _ in range(6):
noisy_X = X + np.random.normal(loc=0, scale=.1, size=X.shape)
plt.plot(noisy_X, y, 'o')
regr.fit(noisy_X, y)
plt.plot(X_test, regr.predict(X_test))
As we can see, our linear model captures and amplifies the noise in the data. It displays a lot of variance.
We can use another linear estimator that uses regularization, the Ridge estimator. This estimator regularizes
the coefficients by shrinking them to zero, under the assumption that very high correlations are often spurious.
The alpha parameter controls the amount of shrinkage used.
regr = linear_model.Ridge(alpha=.1)
np.random.seed(0)
for _ in range(6):
noisy_X = X + np.random.normal(loc=0, scale=.1, size=X.shape)
plt.plot(noisy_X, y, 'o')
regr.fit(noisy_X, y)
plt.plot(X_test, regr.predict(X_test))
plt.show()
This example generates simple synthetic data points and shows a separating hyperplane on them.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.linear_model import SGDClassifier
from sklearn.datasets.samples_generator import make_blobs
# plot the line, the points, and the nearest vectors to the plane
xx = np.linspace(-1, 5, 10)
yy = np.linspace(-1, 5, 10)
plt.figure(figsize=(4, 3))
ax = plt.axes()
ax.contour(X1, X2, Z, [-1.0, 0.0, 1.0], colors='k',
linestyles=['dashed', 'solid', 'dashed'])
ax.scatter(X[:, 0], X[:, 1], c=Y, cmap=plt.cm.Paired)
ax.axis('tight')
plt.show()
Compare the performance of a variety of classifiers on a test set for the digits data.
Out:
LinearSVC: 0.927190026916
GaussianNB: 0.833274168101
KNeighborsClassifier: 0.980456280495
------------------
LinearSVC(loss='l1'): 0.9325901988044822
LinearSVC(loss='l2'): 0.9165874249889226
-------------------
KNeighbors(n_neighbors=1): 0.9913675218842191
KNeighbors(n_neighbors=2): 0.9848442068835102
KNeighbors(n_neighbors=3): 0.9867753449543099
KNeighbors(n_neighbors=4): 0.9803719053818863
KNeighbors(n_neighbors=5): 0.9804562804949924
KNeighbors(n_neighbors=6): 0.9757924194139573
KNeighbors(n_neighbors=7): 0.9780645792142071
KNeighbors(n_neighbors=8): 0.9780645792142071
KNeighbors(n_neighbors=9): 0.9780645792142071
KNeighbors(n_neighbors=10): 0.9755550897728812
digits = datasets.load_digits()
X = digits.data
y = digits.target
X_train, X_test, y_train, y_test = model_selection.train_test_split(X, y,
test_size=0.25, random_state=0)
print('------------------')
print('-------------------')
Fits data generated from a 9th order polynomial with models of 4th order and 9th order polynomials, to
demonstrate that often simpler models are to be preferred.
import numpy as np
from matplotlib import pyplot as plt
from matplotlib.colors import ListedColormap
rng = np.random.RandomState(0)
x = 2*rng.rand(100) - 1
The data
plt.figure(figsize=(6, 4))
plt.scatter(x, y, s=4)
plt.figure(figsize=(6, 4))
plt.scatter(x, y, s=4)
plt.legend(loc='best')
plt.axis('tight')
plt.title('Fitting a 4th and a 9th order polynomial')
Ground truth
plt.figure(figsize=(6, 4))
plt.scatter(x, y, s=4)
plt.plot(x_test, f(x_test), label="truth")
plt.axis('tight')
plt.title('Ground truth (9th order polynomial)')
plt.show()
Here we perform a simple regression analysis on the Boston housing data, exploring two types of regressors.
Simple prediction
plt.figure(figsize=(4, 3))
plt.scatter(expected, predicted)
plt.plot([0, 50], [0, 50], '--k')
plt.axis('tight')
plt.xlabel('True price ($1000s)')
plt.ylabel('Predicted price ($1000s)')
plt.tight_layout()
clf = GradientBoostingRegressor()
clf.fit(X_train, y_train)
predicted = clf.predict(X_test)
expected = y_test
plt.figure(figsize=(4, 3))
plt.scatter(expected, predicted)
plt.plot([0, 50], [0, 50], '--k')
plt.axis('tight')
plt.xlabel('True price ($1000s)')
plt.ylabel('Predicted price ($1000s)')
plt.tight_layout()
import numpy as np
print("RMS: %r " % np.sqrt(np.mean((predicted - expected) ** 2)))
plt.show()
Out:
RMS: 3.3940265988986305
Plot the decision boundary of nearest neighbor decision on iris, first with a single nearest neighbor, and then
using 3 nearest neighbors.
import numpy as np
from matplotlib import pyplot as plt
from sklearn import neighbors, datasets
from matplotlib.colors import ListedColormap
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features. We could
# avoid this ugly slicing by using a two-dim dataset
y = iris.target
knn = neighbors.KNeighborsClassifier(n_neighbors=1)
knn.fit(X, y)
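# (sketch) The grid construction and prediction that define cmap_light, xx, yy and Z
# were not reproduced above; one way to recreate them is:
cmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#AAAAFF'])
x_min, x_max = X[:, 0].min() - 1, X[:, 0].max() + 1
y_min, y_max = X[:, 1].min() - 1, X[:, 1].max() + 1
xx, yy = np.meshgrid(np.linspace(x_min, x_max, 200),
                     np.linspace(y_min, y_max, 200))
Z = knn.predict(np.c_[xx.ravel(), yy.ravel()])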
Z = Z.reshape(xx.shape)
plt.figure()
plt.pcolormesh(xx, yy, Z, cmap=cmap_light)
knn = neighbors.KNeighborsClassifier(n_neighbors=3)
knn.fit(X, y)
Z = knn.predict(np.c_[xx.ravel(), yy.ravel()])
plt.show()
Plot the first few samples of the digits dataset and a 2D representation built using PCA, then do a simple clas-
sification
for i in range(64):
ax = fig.add_subplot(8, 8, i + 1, xticks=[], yticks=[])
ax.imshow(digits.images[i], cmap=plt.cm.binary, interpolation='nearest')
# label the image with the target value
ax.text(0, 7, str(digits.target[i]))
plt.figure()
Out:
384
print(len(matches))
Out:
450
matches.sum() / float(len(matches))
Out:
print(metrics.confusion_matrix(expected, predicted))
plt.show()
Out:
[[37 0 0 0 2 0 0 1 0 0]
[ 0 41 0 0 0 0 1 0 4 2]
[ 0 2 31 0 0 0 1 0 11 0]
[ 0 0 2 48 0 1 0 2 4 1]
[ 0 1 1 0 32 0 0 1 1 0]
[ 0 1 0 1 1 41 0 4 0 0]
[ 0 0 0 0 0 1 42 0 0 0]
[ 0 0 0 0 0 1 0 46 0 0]
[ 0 4 0 0 0 1 0 3 34 0]
[ 0 1 0 1 0 0 1 4 4 32]]
The goal of this example is to show how an unsupervised method and a supervised one can be chained for
better prediction. It starts with a didactic but lengthy way of doing things, and finishes with the idiomatic
approach to pipelining in scikit-learn.
Here we'll take a look at a simple facial recognition example. Ideally, we would use a dataset consisting of a
subset of the Labeled Faces in the Wild data that is available with sklearn.datasets.fetch_lfw_people().
However, this is a relatively large download (~200MB) so we will do the tutorial on a simpler, less rich dataset.
Feel free to explore the LFW dataset.
Tip: Note that these faces have already been localized and scaled to a common size. This is an important
preprocessing piece for facial recognition, and is a process that can require a large collection of training data.
This can be done in scikit-learn, but the challenge is gathering a sufficient amount of training data for the
algorithm to work. Fortunately, this piece is common enough that it has been done. One good resource is
OpenCV, the Open Computer Vision Library.
We'll perform a Support Vector classification of the images. We'll do a typical train-test split on the images:
print(X_train.shape, X_test.shape)
Out:
1850 dimensions is a lot for SVM. We can use PCA to reduce these 1850 features to a manageable size, while
maintaining most of the information in the dataset.
One interesting part of PCA is that it computes the mean face, which can be interesting to examine:
plt.imshow(pca.mean_.reshape(faces.images[0].shape),
cmap=plt.cm.bone)
The principal components measure deviations about this mean along orthogonal axes.
print(pca.components_.shape)
Out:
(150, 4096)
ax.imshow(pca.components_[i].reshape(faces.images[0].shape),
cmap=plt.cm.bone)
The components (eigenfaces) are ordered by their importance from top-left to bottom-right. We see that the
first few components seem to primarily take care of lighting conditions; the remaining components pull out
certain identifying features: the nose, eyes, eyebrows, etc.
With this projection computed, we can now project our original training and test data onto the PCA basis:
X_train_pca = pca.transform(X_train)
X_test_pca = pca.transform(X_test)
print(X_train_pca.shape)
Out:
(300, 150)
print(X_test_pca.shape)
Out:
(100, 150)
These projected components correspond to factors in a linear combination of component images such that
the combination approaches the original face.
Finally, we can evaluate how well this classification did. First, we might plot a few of the test-cases with the
labels learned from the training set:
import numpy as np
fig = plt.figure(figsize=(8, 6))
for i in range(15):
ax = fig.add_subplot(3, 5, i + 1, xticks=[], yticks=[])
ax.imshow(X_test[i].reshape(faces.images[0].shape),
cmap=plt.cm.bone)
y_pred = clf.predict(X_test_pca[i, np.newaxis])[0]
color = ('black' if y_pred == y_test[i] else 'red')
ax.set_title(faces.target[y_pred],
fontsize='small', color=color)
The classifier is correct on an impressive number of images given the simplicity of its learning model! Using
a linear classifier on 150 features derived from the pixel-level data, the algorithm correctly identifies a large
number of the people in the images.
Again, we can quantify this effectiveness using one of several measures from sklearn.metrics. First we can
do the classification report, which shows the precision, recall and other measures of the goodness of the
classification:
Out:
Another interesting metric is the confusion matrix, which indicates how often any two items are mixed-up.
The confusion matrix of a perfect classifier would only have nonzero entries on the diagonal, with zeros on the
off-diagonal:
print(metrics.confusion_matrix(y_test, y_pred))
Out:
[[4 0 0 ..., 0 0 0]
[0 4 0 ..., 0 0 0]
[0 0 2 ..., 0 0 0]
...,
[0 0 0 ..., 3 0 0]
[0 0 0 ..., 0 1 0]
[0 0 0 ..., 0 0 3]]
Pipelining
Above we used PCA as a pre-processing step before applying our support vector machine classifier. Plugging
the output of one estimator directly into the input of a second estimator is a commonly used pattern; for this
reason scikit-learn provides a Pipeline object which automates this process. The above problem can be re-
expressed as a pipeline as follows:
clf.fit(X_train, y_train)
y_pred = clf.predict(X_test)
print(metrics.confusion_matrix(y_pred, y_test))
plt.show()
Out:
[[5 0 0 ..., 0 0 0]
[0 4 0 ..., 0 0 0]
[0 0 1 ..., 0 0 0]
...,
[0 0 0 ..., 3 0 0]
[0 0 0 ..., 0 1 0]
[0 0 0 ..., 0 0 3]]
Here we have used PCA "eigenfaces" as a pre-processing step for facial recognition. The reason we chose this
is that PCA is a broadly-applicable technique, which can be useful for a wide array of data types. Research
in the field of facial recognition in particular, however, has shown that other, more specific feature extraction
methods can be much more effective.
This is an example plot from the tutorial which accompanies an explanation of the support vector machine
GUI.
import numpy as np
from matplotlib import pyplot as plt
labels = np.ones(n_samples)
labels[:n_samples // 2] = -1
X, y = linear_model()
clf = svm.SVC(kernel='linear')
clf.fit(X, y)
plt.figure(figsize=(6, 4))
ax = plt.subplot(111, xticks=[], yticks=[])
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.bone)
ax.scatter(clf.support_vectors_[:, 0],
clf.support_vectors_[:, 1],
delta = 1
y_min, y_max = -50, 50
x_min, x_max = -50, 50
x = np.arange(x_min, x_max + delta, delta)
y = np.arange(y_min, y_max + delta, delta)
X1, X2 = np.meshgrid(x, y)
Z = clf.decision_function(np.c_[X1.ravel(), X2.ravel()])
Z = Z.reshape(X1.shape)
labels = np.ones(n_samples)
labels[far_pts] = -1
X, y = nonlinear_model()
clf = svm.SVC(kernel='rbf', gamma=0.001, coef0=0, degree=3)
clf.fit(X, y)
plt.figure(figsize=(6, 4))
ax = plt.subplot(1, 1, 1, xticks=[], yticks=[])
ax.scatter(X[:, 0], X[:, 1], c=y, cmap=plt.cm.bone, zorder=2)
delta = 1
y_min, y_max = -50, 50
x_min, x_max = -50, 50
x = np.arange(x_min, x_max + delta, delta)
y = np.arange(y_min, y_max + delta, delta)
X1, X2 = np.meshgrid(x, y)
Z = clf.decision_function(np.c_[X1.ravel(), X2.ravel()])
Z = Z.reshape(X1.shape)
plt.show()
Demo overfitting, underfitting, and validation and learning curves with polynomial regression.
Fit polynomials of different degrees to a dataset: for too small a degree, the model underfits, while for too large
a degree, it overfits.
import numpy as np
import matplotlib.pyplot as plt
A polynomial regression
n_samples = 8
np.random.seed(0)
x = 10 ** np.linspace(-2, 0, n_samples)
y = generating_func(x)
for i, d in enumerate(degrees):
ax = fig.add_subplot(131 + i, xticks=[], yticks=[])
ax.scatter(x, y, marker='x', c='k', s=50)
ax.set_xlim(-0.2, 1.2)
ax.set_ylim(0, 12)
ax.set_xlabel('house size')
if i == 0:
ax.set_ylabel('price')
ax.set_title(titles[i])
n_samples = 200
test_size = 0.4
error = 1.0
# Plot the mean train error and validation error across folds
plt.figure(figsize=(6, 4))
plt.plot(degrees, validation_scores.mean(axis=1), lw=2,
label='cross-validation')
plt.plot(degrees, train_scores.mean(axis=1), lw=2, label='training')
plt.legend(loc='best')
plt.xlabel('degree of fit')
plt.ylabel('explained variance')
plt.title('Validation curve')
plt.tight_layout()
Learning curves
# Plot the mean train error and validation error across folds
plt.figure(figsize=(6, 4))
plt.plot(train_sizes, validation_scores.mean(axis=1),
lw=2, label='cross-validation')
plt.plot(train_sizes, train_scores.mean(axis=1),
lw=2, label='training')
plt.ylim(ymin=-.1, ymax=1)
plt.legend(loc='best')
plt.xlabel('number of train samples')
plt.ylabel('explained variance')
plt.title('Learning curve (degree=%i )' % d)
plt.tight_layout()
plt.show()
import numpy as np
import pylab as pl
from matplotlib.patches import Circle, Rectangle, Polygon, Arrow, FancyArrow
arrow1 = '#88CCFF',
arrow2 = '#88FF88',
supervised=True):
fig = pl.figure(figsize=(9, 6), facecolor='w')
ax = pl.axes((0, 0, 1, 1),
xticks=[], yticks=[], frameon=False)
ax.set_xlim(0, 9)
ax.set_ylim(0, 6)
Polygon([[5.5, 1.7],
[6.1, 1.1],
[5.5, 0.5],
[4.9, 1.1]], fc=box_bg),
if supervised:
patches += [Rectangle((0.3, 2.4), 1.5, 0.5, zorder=1, fc=box_bg),
Rectangle((0.5, 2.6), 1.5, 0.5, zorder=2, fc=box_bg),
Rectangle((0.7, 2.8), 1.5, 0.5, zorder=3, fc=box_bg),
FancyArrow(2.3, 2.9, 2.0, 0, fc=arrow1,
width=0.25, head_width=0.5, head_length=0.2),
Rectangle((7.3, 0.85), 1.5, 0.5, fc=box_bg)]
else:
patches += [Rectangle((7.3, 0.2), 1.5, 1.8, fc=box_bg)]
for p in patches:
ax.add_patch(p)
if supervised:
pl.text(1.45, 3.05, "Labels",
ha='center', va='center', fontsize=14)
else:
pl.text(8.05, 1.1,
"Likelihood\nor Cluster ID\nor Better\nRepresentation",
ha='center', va='center', fontsize=12)
pl.text(8.8, 5.8, "Unsupervised Learning Model",
ha='right', va='top', fontsize=18)
def plot_supervised_chart(annotate=False):
create_base(supervised=True)
if annotate:
fontdict = dict(color='r', weight='bold', size=14)
pl.text(1.9, 4.55, 'X = vec.fit_transform(input)',
fontdict=fontdict,
rotation=20, ha='left', va='bottom')
pl.text(3.7, 3.2, 'clf.fit(X, y)',
fontdict=fontdict,
rotation=20, ha='left', va='bottom')
pl.text(1.7, 1.5, 'X_new = vec.transform(input)',
fontdict=fontdict,
rotation=20, ha='left', va='bottom')
pl.text(6.1, 1.5, 'y_new = clf.predict(X_new)',
fontdict=fontdict,
rotation=20, ha='left', va='bottom')
def plot_unsupervised_chart():
create_base(supervised=False)
if __name__ == '__main__':
plot_supervised_chart(False)
plot_supervised_chart(True)
plot_unsupervised_chart()
pl.show()