Hands-on Lab: Deep Learning with the Theano Python Library
Frédéric Bastien
Département d'Informatique et de Recherche Opérationnelle, Université de Montréal, Montréal, Canada
[email protected]
Presentation prepared with Pierre Luc Carrier and Arnaud Bergeron

GTC 2015


Slides

▷ PDF of the slides: http://goo.gl/bcBeBV
▷ GitHub repo of this presentation: https://github.com/nouiz/gtc2015/


Outline

▷ Introduction
▷ Theano
  • Compiling/Running
  • Modifying expressions
  • GPU
  • Debugging
▷ Models
  • Logistic Regression
  • Convolution
▷ Exercises


High level

Python <- {NumPy/SciPy/libgpuarray} <- Theano <- {...}

▷ Python: OO coding language
▷ NumPy: n-dimensional array object and scientific computing toolbox
▷ SciPy: sparse matrix objects and more scientific computing functionality
▷ libgpuarray: GPU n-dimensional array object in C for CUDA and OpenCL
▷ Theano: compiler/symbolic graph manipulation


High level (2)

Many machine learning libraries are built on top of Theano:

▷ Pylearn2
▷ blocks
▷ PyMC 3
▷ lasagne
▷ sklearn-theano: easy deep learning by combining Theano and sklearn
▷ theano-rnn
▷ Morb
▷ ...


Some models built with Theano

Some models that have been built with Theano:

▷ Neural Networks
▷ Convolutional Neural Networks
▷ RNN, RNN CTC, LSTM
▷ NADE, RNADE
▷ Autoencoders
▷ AlexNet
▷ GoogLeNet
▷ Overfeat
▷ Generative Adversarial Nets
▷ SVMs
▷ many variations of the above models and more


Python

▷ General-purpose high-level OO interpreted language
▷ Emphasizes code readability
▷ Comprehensive standard library
▷ Dynamic typing and memory management
▷ Easily extensible with C
▷ Slow execution
▷ Popular in web development and scientific communities


NumPy/SciPy

▷ NumPy provides an n-dimensional numeric array in Python
  • Perfect for high-performance computing
  • Slices of arrays are views (no copying)
▷ NumPy provides
  • Elementwise computations
  • Linear algebra, Fourier transforms
  • Pseudorandom number generators (many distributions)
▷ SciPy provides lots more, including
  • Sparse matrices
  • More linear algebra
  • Solvers and optimization algorithms
  • Matlab-compatible I/O
  • I/O and signal processing for images and audio


What's missing?

▷ Non-lazy evaluation (required by Python) hurts performance
▷ Bound to the CPU
▷ Lacks symbolic or automatic differentiation
▷ No automatic speed and stability optimization


Goal of the stack

Fast to develop. Fast to run.

Theano


Description

High-level domain-specific language for numeric computation.

▷ Syntax as close to NumPy as possible
▷ Compiles most common expressions to C for CPU and/or GPU
▷ Limited expressivity means more opportunities for optimizations
  • Strongly typed -> compiles to C
  • Array oriented -> easy parallelism
  • Support for looping and branching in expressions
  • No subroutines -> global optimization
▷ Automatic speed and numerical stability optimizations


Description (2)

▷ Automatic differentiation and R op (Hessian-Free Optimization)
▷ Sparse matrices (CPU only)
▷ Can reuse other technologies for best performance: BLAS, SciPy, CUDA, PyCUDA, Cython, Numba, ...
▷ Extensive unit-testing and self-verification
▷ Extensible (you can create new operations as needed)
▷ Works on Linux, OS X and Windows


Project status?

▷ Mature: Theano has been developed and used since January 2008 (7 years old)
▷ Has driven hundreds of research papers
▷ Good user documentation
▷ Active mailing list with participants from outside our institute
▷ Core technology for Silicon Valley start-ups
▷ Many contributors (some from outside our institute)
▷ Used to teach many university classes
▷ Has been used for research at big companies

Theano: deeplearning.net/software/theano/
Deep Learning Tutorials: deeplearning.net/tutorial/


Simple example

    import theano

    # declare symbolic variable
    a = theano.tensor.vector("a")

    # build symbolic expression
    b = a + a ** 10

    # compile function
    f = theano.function([a], b)

    # execute with numerical values
    print f([0, 1, 2])
    # prints `array([0, 2, 1026])`

Simple example (2)

(Figure: the computation graph built for b = a + a ** 10.)


Overview of library

Theano is many things:

▷ Language
▷ Compiler
▷ Python library


Scalar math

Some examples of scalar operations:

    import theano
    from theano import tensor as T

    x = T.scalar()
    y = T.scalar()
    z = x + y
    w = z * x
    a = T.sqrt(w)
    b = T.exp(a)
    c = a ** b
    d = T.log(c)


Vector math

    from theano import tensor as T

    x = T.vector()
    y = T.vector()

    # scalar math applied elementwise
    a = x * y

    # vector dot product
    b = T.dot(x, y)

    # broadcasting (as in NumPy, very powerful)
    c = a + b
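As a quick check, the graph above can be compiled and run on NumPy arrays. A minimal sketch (not on the original slide), rebuilding the same expression in one line:

    import numpy as np
    import theano
    from theano import tensor as T

    x = T.vector()
    y = T.vector()
    c = x * y + T.dot(x, y)  # elementwise product plus dot product, as above

    f = theano.function([x, y], c)
    print f(np.ones(3, dtype=theano.config.floatX),
            np.arange(3, dtype=theano.config.floatX))
    # prints array([ 3.,  4.,  5.])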


Matrix math

    from theano import tensor as T

    x = T.matrix()
    y = T.matrix()
    a = T.vector()

    # matrix-matrix product
    b = T.dot(x, y)

    # matrix-vector product
    c = T.dot(x, a)


Tensors

Using Theano:

▷ Dimensionality is defined by the length of the broadcastable argument
▷ Can add (or do other elemwise ops on) two tensors with the same dimensionality
▷ Duplicates tensors along broadcastable axes to make sizes match

    from theano import tensor as T

    tensor3 = T.TensorType(broadcastable=(False, False, False),
                           dtype='float32')
    x = tensor3()
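The broadcastable pattern controls which axes may be stretched. As a small sketch (not from the slides): a type with broadcastable=(True, False) behaves like a single row, so adding it to a matrix repeats it along the first axis:

    import numpy as np
    import theano
    from theano import tensor as T

    row = T.TensorType(broadcastable=(True, False), dtype='float32')()
    m = T.fmatrix()
    f = theano.function([row, m], row + m)

    # the single row is duplicated across the 3 rows of m
    print f(np.ones((1, 4), dtype='float32'),
            np.zeros((3, 4), dtype='float32'))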


Reductions

    from theano import tensor as T

    tensor3 = T.TensorType(broadcastable=(False, False, False),
                           dtype='float32')
    x = tensor3()
    total = x.sum()
    marginals = x.sum(axis=(0, 2))
    mx = x.max(axis=1)


Dimshuffle

    from theano import tensor as T

    tensor3 = T.TensorType(broadcastable=(False, False, False),
                           dtype='float32')
    x = tensor3()
    y = x.dimshuffle((2, 1, 0))

    a = T.matrix()
    b = a.T
    # same as b
    c = a.dimshuffle((1, 0))

    # adding to a larger tensor
    d = a.dimshuffle((0, 1, 'x'))
    e = a + d


Indexing

As in NumPy! This means slices and index selection return views.

    # return views, supported on GPU
    a_tensor[int]
    a_tensor[int, int]
    a_tensor[start:stop:step, start:stop:step]
    a_tensor[::-1]  # reverse the first dimension

    # advanced indexing, returns a copy
    a_tensor[an_index_vector]  # supported on GPU
    a_tensor[an_index_vector, an_index_vector]
    a_tensor[int, an_index_vector]
    a_tensor[an_index_tensor, ...]
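Both kinds of indexing can be compiled with theano.function. A minimal sketch (not from the slides) combining a slice and integer-vector indexing:

    import numpy as np
    import theano
    from theano import tensor as T

    x = T.matrix()
    i = T.ivector()

    # a slice (view) and advanced indexing (copy) in one function
    f = theano.function([x, i], [x[::-1], x[i]])

    data = np.arange(6.).reshape(3, 2)
    rev, rows = f(data, np.array([0, 2], dtype='int32'))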


Compiling and running expressions

▷ theano.function
▷ shared variables and updates
▷ compilation modes


theano.function

    >>> from theano import tensor as T
    >>> x = T.scalar()
    >>> y = T.scalar()
    >>> from theano import function
    >>> # first arg is the list of SYMBOLIC inputs
    >>> # second arg is the SYMBOLIC output
    >>> f = function([x, y], x + y)
    >>> # call it with NUMERICAL values
    >>> # get a NUMERICAL output
    >>> f(1., 2.)
    array(3.0)


Shared variables

▷ It's hard to do much with purely functional programming
▷ Shared variables add just a little bit of imperative programming
▷ A shared variable is a buffer that stores a numerical value for a Theano variable
▷ Can write to as many shared variables as you want, once each, at the end of the function
▷ Can modify the value outside of Theano functions with the get_value() and set_value() methods


Shared variable example

    >>> from theano import shared
    >>> x = shared(0.)
    >>> from theano.compat.python2x import OrderedDict
    >>> updates = [(x, x + 1)]
    >>> f = function([], updates=updates)
    >>> f()
    >>> x.get_value()
    1.0
    >>> x.set_value(100.)
    >>> f()
    >>> x.get_value()
    101.0


Compilation modes

▷ Can compile in different modes to get different kinds of programs
▷ Can specify these modes very precisely with arguments to theano.function
▷ Can use a few quick presets with environment variable flags


Example preset compilation modes

▷ FAST_RUN: default. Fastest execution, slowest compilation.
▷ FAST_COMPILE: Fastest compilation, slowest execution. No C code.
▷ DEBUG_MODE: Adds lots of checks. Raises error messages in situations other modes regard as fine.
▷ optimizer=fast_compile: as mode=FAST_COMPILE, but with C code.
▷ theano.function(..., mode='FAST_COMPILE')
▷ THEANO_FLAGS=mode=FAST_COMPILE python script.py


Modifying expressions

There are macros that automatically build a bigger graph for you:

▷ theano.grad
▷ others

These functions can be called many times, for example to get the 2nd derivative (see the sketch below).
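For instance, calling T.grad twice gives a second derivative. A minimal sketch, assuming a scalar variable:

    import theano
    from theano import tensor as T

    x = T.scalar('x')
    y = x ** 3
    g = T.grad(y, x)    # symbolic dy/dx = 3 * x ** 2
    g2 = T.grad(g, x)   # symbolic d2y/dx2 = 6 * x

    f = theano.function([x], g2)
    print f(2.0)        # prints array(12.0)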


The grad method

    >>> x = T.scalar('x')
    >>> y = 2. * x
    >>> g = T.grad(y, x)

    # print the unoptimized graph
    >>> theano.printing.pydotprint(g)


The grad method (2)

    >>> x = T.scalar('x')
    >>> y = 2. * x
    >>> g = T.grad(y, x)

    # print the optimized graph
    >>> f = theano.function([x], g)
    >>> theano.printing.pydotprint(f)


Others

▷ R_op, L_op for Hessian-Free Optimization
▷ hessian
▷ jacobian
▷ clone the graph with replacements
▷ you can navigate the graph if you need to (go from the result of a computation to its inputs, recursively), as in the sketch below
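A few of these in action. A minimal sketch (not from the slides); note that theano.gradient.hessian requires a scalar cost:

    import theano
    from theano import tensor as T

    x = T.vector('x')
    cost = (x ** 2).sum()

    H = theano.gradient.hessian(cost, x)     # symbolic Hessian matrix
    J = theano.gradient.jacobian(x ** 2, x)  # symbolic Jacobian matrix

    # clone the graph, replacing x by 2 * x
    cost2 = theano.clone(cost, replace={x: 2 * x})

    # navigate the graph: from a result back to its inputs
    print cost.owner.op      # the Op that computed `cost`
    print cost.owner.inputs  # its symbolic inputs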


Enabling GPU

▷ Theano's current back-end only supports 32-bit floats on the GPU
▷ libgpuarray (new back-end) supports all dtypes
▷ CUDA supports 64-bit floats, but it is slow on gamer GPUs


GPU: Theano flags

Theano flags allow you to configure Theano. They can be set via a configuration file or an environment variable. To enable the GPU (see the example below):

▷ Set device=gpu (or a specific GPU, like gpu0)
▷ Set floatX=float32
▷ Optional: warn_float64={'ignore', 'warn', 'raise', 'pdb'}
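For example (a sketch, assuming a script named script.py), either on the command line or persistently in the ~/.theanorc configuration file:

    # environment variable, for a single run
    THEANO_FLAGS=device=gpu,floatX=float32 python script.py

    # or in ~/.theanorc:
    # [global]
    # device = gpu
    # floatX = float32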


floatX

Allows changing the dtype between float32 and float64.

▷ T.fscalar, T.fvector, T.fmatrix are all 32 bit
▷ T.dscalar, T.dvector, T.dmatrix are all 64 bit
▷ T.scalar, T.vector, T.matrix resolve to floatX
▷ floatX is float64 by default; set it to float32 for the GPU
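A quick way to check what the generic constructors resolve to (a minimal sketch, not from the slides):

    import theano
    from theano import tensor as T

    print theano.config.floatX  # 'float64' by default
    x = T.matrix()
    print x.dtype               # follows floatX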


cuDNN

▷ R1 and R2 are supported.
▷ It is enabled automatically if available.
▷ Theano flag to get an error if it can't be used: optimizer_including=cudnn


Debugging

▷ DEBUG_MODE
▷ error messages
▷ theano.printing.debugprint


Error message: code

    import numpy as np
    import theano
    import theano.tensor as T

    x = T.vector()
    y = T.vector()
    z = x + x
    z = z + y
    f = theano.function([x, y], z)
    f(np.ones((2,)), np.ones((3,)))


Error message: 1st part

    Traceback (most recent call last):
    [...]
    ValueError: Input dimension mis-match.
        (input[0].shape[0] = 3, input[1].shape[0] = 2)
    Apply node that caused the error: Elemwise{add,no_inplace}(
        <TensorType(float64, vector)>, <TensorType(float64, vector)>,
        <TensorType(float64, vector)>)
    Inputs types: [TensorType(float64, vector), TensorType(float64, vector),
        TensorType(float64, vector)]
    Inputs shapes: [(3,), (2,), (2,)]
    Inputs strides: [(8,), (8,), (8,)]
    Inputs scalar values: ['not scalar', 'not scalar', 'not scalar']


Error message: 2nd part

    HINT: Re-running with most Theano optimizations disabled could give you a
    back-trace of when this node was created. This can be done by setting the
    Theano flag optimizer=fast_compile. If that does not work, Theano
    optimizations can be disabled with optimizer=None.
    HINT: Use the Theano flag exception_verbosity=high for a debugprint of
    this apply node.


Error message: Traceback

    Traceback (most recent call last):
      File "test.py", line 9, in <module>
        f(np.ones((2,)), np.ones((3,)))
      File "/u/bastienf/repos/theano/compile/function_module.py",
          line 589, in __call__
        self.fn.thunks[self.fn.position_of_error])
      File "/u/bastienf/repos/theano/compile/function_module.py",
          line 579, in __call__
        outputs = self.fn()


Error message: optimizer=fast_compile

    Backtrace when the node is created:
      File "test.py", line 7, in <module>
        z = z + y


debugprint

    >>> from theano.printing import debugprint
    >>> debugprint(a)
    Elemwise{mul,no_inplace} [@A] ''
     |TensorConstant{2.0} [@B]
     |Elemwise{add,no_inplace} [@C] 'z'
       |<TensorType(float64, scalar)> [@D]
       |<TensorType(float64, scalar)> [@E]
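The variable a printed above is not defined on this slide; a setup consistent with the printed graph would be (an assumption, reconstructed from the output):

    from theano import tensor as T

    x = T.scalar()
    y = T.scalar()
    z = x + y
    z.name = 'z'  # gives the 'z' label in the printout
    a = 2.0 * z   # Elemwise{mul}(TensorConstant{2.0}, z)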


Models: Logistic Regression


Inputs

    # Load from disk and put in shared variables.
    datasets = load_data(dataset)
    train_set_x, train_set_y = datasets[0]
    valid_set_x, valid_set_y = datasets[1]

    # allocate symbolic variables for the data
    index = T.lscalar()  # index to a [mini]batch

    # generate symbolic variables for the input minibatch
    x = T.matrix('x')   # data, 1 row per image
    y = T.ivector('y')  # labels


Model

    n_in = 28 * 28
    n_out = 10

    # weights
    W = theano.shared(numpy.zeros((n_in, n_out),
                                  dtype=theano.config.floatX))

    # bias
    b = theano.shared(numpy.zeros((n_out,),
                                  dtype=theano.config.floatX))


Computation

    # the forward pass
    p_y_given_x = T.nnet.softmax(T.dot(input, W) + b)

    # cost we minimize: the negative log likelihood
    l = T.log(p_y_given_x)
    cost = -T.mean(l[T.arange(y.shape[0]), y])

    # the error
    y_pred = T.argmax(p_y_given_x, axis=1)
    err = T.mean(T.neq(y_pred, y))


Gradient and updates

    # compute the gradient of the cost
    g_W, g_b = T.grad(cost=cost, wrt=(W, b))

    # model parameter update rules
    updates = [(W, W - learning_rate * g_W),
               (b, b - learning_rate * g_b)]


Training function

    # compile a Theano function that trains the model
    train_model = theano.function(
        inputs=[index],
        outputs=(cost, err),
        updates=updates,
        givens={
            x: train_set_x[index * batch_size: (index + 1) * batch_size],
            y: train_set_y[index * batch_size: (index + 1) * batch_size]
        })
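Once compiled, training is just a Python loop over minibatch indices. A minimal sketch (not from the slides); n_epochs and n_train_batches are assumed, e.g. n_train_batches = train_set_x.get_value().shape[0] // batch_size:

    # hypothetical driver loop over epochs and minibatches
    for epoch in range(n_epochs):
        for minibatch_index in range(n_train_batches):
            # each call updates W and b via `updates`
            minibatch_cost, minibatch_err = train_model(minibatch_index)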


Models: Convolution


Inputs

    # Load from disk and put in shared variables.
    datasets = load_data(dataset)
    train_set_x, train_set_y = datasets[0]
    valid_set_x, valid_set_y = datasets[1]

    # allocate symbolic variables for the data
    index = T.lscalar()  # index to a [mini]batch
    x = T.matrix('x')    # the data, 1 row per image
    y = T.ivector('y')   # labels

    # Reshape matrix of rasterized images of shape (batch_size, 28 * 28)
    # to a 4D tensor, compatible for convolution
    layer0_input = x.reshape((batch_size, 1, 28, 28))


Model

    image_shape = (batch_size, 1, 28, 28)
    filter_shape = (nkerns[0], 1, 5, 5)
    W_bound = ...

    W = theano.shared(
        numpy.asarray(
            rng.uniform(low=-W_bound, high=W_bound, size=filter_shape),
            dtype=theano.config.floatX))

    # the bias is a 1D tensor -- one bias per output feature map
    b_values = numpy.zeros((filter_shape[0],), dtype=...)
    b = theano.shared(b_values)


Computation

    from theano.tensor.nnet import conv
    from theano.tensor.signal import downsample

    # convolve input feature maps with filters
    conv_out = conv.conv2d(input=layer0_input, filters=W)

    # downsample each feature map individually, using maxpooling
    pooled_out = downsample.max_pool_2d(input=conv_out,
                                        ds=(2, 2),  # poolsize
                                        ignore_border=True)

    output = T.tanh(pooled_out + b.dimshuffle('x', 0, 'x', 'x'))

Exercises


ipython notebook

▷ Introduction
▷ Exercises (Theano-only exercises)
▷ lenet (small CNN model to quickly try it)


Connection instructions

▷ Navigate to nvlabs.qwiklab.com
▷ Login or create a new account
▷ Select the Instructor-Led Hands-on Labs class
▷ Find the lab called Theano and click Start
▷ After a short wait, lab instance connection information will be shown
▷ Please ask Lab Assistants for help!


Questions, Acknowledgments

Questions?

Acknowledgments:

▷ All people working or having worked at the LISA lab/MILA institute
▷ All Theano users/contributors
▷ Compute Canada, RQCHP, NSERC, and Canada Research Chairs for providing funds or access to compute resources.
