AUTOMATIC DIFFERENTIATION: AN ABSTRACTION FOR MACHINE LEARNING

ALEX WILTSCHKO, RESEARCH ENGINEER, TWITTER ADVANCED TECHNOLOGY GROUP

@awiltsch

EVOLUTION OF MODELING IN BIOLOGY
Through the eyes of a formerly practicing neuroscientist

1. Partial differential equations (PDEs) — Physical simulation, or simulation by physical analogy
2. Probabilistic Graphical Models (PGMs) — Instantiating causal or correlative relationships directly in a computer program
3. Neural networks — Enormously data-hungry adaptive basis regression, in an era of enormous data

NEURAL NETWORKS IN BIOLOGY
That have nothing to do with networks of neurons

Predicting:
• DNA binding (e.g. Kelley et al @ MIA)
• Molecular properties (Duvenaud et al @ MIA)
• Behavioral modeling (Johnson et al)
• DNA expression
• DNA methylation state
• Protein folding
• Image correction

NEURAL NETWORKS IN BIOLOGY
Biology now produces enough data to keep the deep learning furnace burning

• No Data → 1. Partial differential equations (PDEs) — Physical simulation, or simulation by physical analogy
• Some Data → 2. Probabilistic Graphical Models (PGMs) — Instantiating causal or correlative relationships directly in a computer program
• Broad-sized Data → 3. Neural networks — Enormously data-hungry adaptive basis regression, in an era of enormous data

NEURAL NETWORKS IN BIOLOGY

But how?

WE WORK ON TOP OF STABLE ABSTRACTIONS
We should take these for granted, to stay sane!

• Arrays — Est: 1957
• Linear Algebra — Est: 1979 (now on GitHub!)
• Common Subroutines (BLAS, LINPACK, LAPACK) — Est: 1984

MACHINE LEARNING HAS OTHER ABSTRACTIONS
These assume all the other lower-level abstractions in scientific computing

All gradient-based optimization (that includes neural nets) relies on Automatic Differentiation (AD):

"Mechanically calculates derivatives of functions expressed as computer programs, at machine precision, and with complexity guarantees." (Barak Pearlmutter)

• Not finite differences: generally bad numeric stability. We still use it as "gradcheck", though (see the sketch below).
• Not symbolic differentiation: no complexity guarantee. Symbolic derivatives of heavily nested functions (e.g. all neural nets) can quickly blow up in expression size.

AUTOMATIC DIFFERENTIATION IS THE ABSTRACTION FOR GRADIENT-BASED ML
All gradient-based optimization (that includes neural nets) relies on Reverse-Mode Automatic Differentiation (AD)

• Rediscovered several times (Widrow and Lehr, 1990)
• Described and implemented for FORTRAN by Speelpenning in 1980 (although a forward-mode variant, which is less useful for ML, was described in 1964 by Wengert)
• Popularized in connectionist ML as "backpropagation" (Rumelhart et al, 1986)
• In use in nuclear science, computational fluid dynamics and atmospheric sciences (in fact, their AD tools are more sophisticated than ours!)

FORWARD MODE (SYMBOLIC VIEW)
Left to right: the chain rule is expanded starting from the inputs and working toward the output.

[The symbolic chain-rule expansion on these slides did not survive the export.]

FORWARD MODE (PROGRAM VIEW)
Left-to-right evaluation of partial derivatives (not so great for optimization)

We can write the evaluation of a program as a sequence of operations, called a "trace", or a "Wengert list". Forward mode carries along, for every primal value, its derivative with respect to one chosen input (here, a):

    Primal trace                      Tangent trace (w.r.t. a)
    a = 3                             dada = 1
    b = 2                             dbda = 0
    c = 1                             dcda = 0
    d = a * math.sin(b) = 2.728       ddda = math.sin(b) = 0.909
    return 2.728                      return 0.909

From Baydin 2016
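One classic way to implement the tangent trace above is with dual numbers: each value carries its derivative alongside it, and overloaded operators apply the chain rule one step at a time. A minimal Python sketch (the Dual class and sin wrapper are illustrative, not the talk's code):

    import math

    class Dual:
        """A value paired with its derivative w.r.t. one chosen input."""
        def __init__(self, val, dot=0.0):
            self.val, self.dot = val, dot

        def __mul__(self, other):
            other = other if isinstance(other, Dual) else Dual(other)
            # Product rule: (uv)' = u'v + uv'
            return Dual(self.val * other.val,
                        self.dot * other.val + self.val * other.dot)
        __rmul__ = __mul__

    def sin(x):
        # Chain rule through sin: (sin u)' = cos(u) * u'
        return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

    # Reproduce the trace: seed dada = 1 to differentiate w.r.t. a
    a = Dual(3.0, 1.0)
    b = Dual(2.0, 0.0)
    d = a * sin(b)
    print(d.val, d.dot)   # 2.728..., 0.909...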

REVERSE MODE (SYMBOLIC VIEW)
Right to left: the chain rule is expanded starting from the output and working back toward the inputs.

[The symbolic chain-rule expansion on these slides did not survive the export.]

REVERSE MODE (PROGRAM VIEW)
Right-to-left evaluation of partial derivatives (the right thing to do for optimization)

First evaluate the primal trace forward, then sweep back over it, accumulating adjoints from the output toward the inputs:

    Primal trace (forward sweep)      Adjoint trace (reverse sweep)
    a = 3
    b = 2
    c = 1
    d = a * math.sin(b) = 2.728       dddd = 1
    return 2.728                      ddda = dddd * math.sin(b) = 0.909
                                      return 0.909, 2.728

From Baydin 2016
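Written as straight-line code, the reverse sweep simply replays the trace backwards, multiplying each adjoint by the local partial derivative. A minimal Python sketch (the slide computes only the adjoint of a; the dddb line below follows the same rule and is an addition for illustration):

    import math

    # Forward (primal) sweep: record what the reverse sweep will need
    a, b, c = 3.0, 2.0, 1.0
    sin_b = math.sin(b)
    d = a * sin_b                   # 2.728...

    # Reverse (adjoint) sweep: seed the output adjoint, then apply the
    # chain rule right-to-left through each recorded operation.
    dddd = 1.0                      # dd/dd
    ddda = dddd * sin_b             # dd/da = sin(b)     = 0.909...
    dddb = dddd * a * math.cos(b)   # dd/db = a * cos(b) = -1.248...

    print(d, ddda, dddb)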

A TRAINABLE NEURAL NETWORK IN TORCH-AUTOGRAD

[The Lua code on this slide did not survive the export. Its callouts read:]
• Any numeric function can go here
• These two fn's are split only for clarity
• This is the API
• This is how the parameters are updated
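The Lua listing itself is lost, but a rough Python equivalent using the HIPS autograd package (an assumption; names like neural_net and loss are illustrative) mirrors the callouts:

    import autograd.numpy as np
    import autograd.numpy.random as npr
    from autograd import grad

    def neural_net(params, inputs):
        # Any numeric function can go here
        W, b = params
        return np.tanh(np.dot(inputs, W) + b)

    def loss(params, inputs, targets):
        # Split from neural_net only for clarity
        preds = neural_net(params, inputs)
        return np.sum((preds - targets) ** 2)

    d_loss = grad(loss)   # this is the API

    params = [npr.randn(3, 2), np.zeros(2)]
    inputs, targets = npr.randn(5, 3), npr.randn(5, 2)

    for step in range(100):
        # This is how the parameters are updated (plain SGD)
        g = d_loss(params, inputs, targets)
        params = [p - 0.01 * dp for p, dp in zip(params, g)]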

WHAT'S ACTUALLY HAPPENING?
As torch code is run, we build up a compute graph

[Graph figure: leaves input, W, b and target feed mult → add → sub → sq → sum, i.e. a sum-of-squared-errors loss on a linear layer.]

WE TRACK COMPUTATION VIA OPERATOR OVERLOADING
A linked list of computation forms a "tape" of the computation
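A minimal sketch of what operator-overloading tracing can look like (illustrative Python; torch-autograd records Torch tensor operations in Lua the same way):

    import math

    tape = []   # the "tape": a list of operations, in execution order

    class Var:
        def __init__(self, val):
            self.val, self.grad = val, 0.0

        def __mul__(self, other):
            out = Var(self.val * other.val)
            # Record the output node and the local partial derivatives
            tape.append((out, [(self, other.val), (other, self.val)]))
            return out

    def sin(x):
        out = Var(math.sin(x.val))
        tape.append((out, [(x, math.cos(x.val))]))
        return out

    def backward(out):
        out.grad = 1.0
        # Walk the tape right-to-left, accumulating adjoints (chain rule)
        for node, parents in reversed(tape):
            for parent, local_partial in parents:
                parent.grad += node.grad * local_partial

    a, b = Var(3.0), Var(2.0)
    d = a * sin(b)
    backward(d)
    print(d.val, a.grad, b.grad)   # 2.728..., 0.909..., -1.248...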

WHAT'S ACTUALLY HAPPENING?
When it comes time to evaluate partial derivatives, we just have to look up the partial derivatives from a table. We can then calculate the derivative of the loss w.r.t. the inputs via the chain rule!

AUTOGRAD EXAMPLES
Autograd gives you derivatives of numeric code, without a special mini-language

AUTOGRAD EXAMPLES
Control flow, like if-statements, is handled seamlessly
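A sketch of the idea with Python's autograd package (the function is illustrative):

    import autograd.numpy as np
    from autograd import grad

    def f(x):
        # Ordinary Python control flow: the tape records whichever
        # branch, and however many loop iterations, actually executed.
        if x[0] > 0:
            y = np.sin(x)
        else:
            y = np.cos(x)
        for _ in range(3):
            y = y * x
        return np.sum(y)

    df = grad(f)
    print(df(np.array([1.0, -2.0])))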

AUTOGRAD EXAMPLES
Scalars are good for demonstration, but autograd is most often used with tensor types

AUTOGRAD EXAMPLES
Autograd shines if you have dynamic compute graphs

AUTOGRAD EXAMPLES
Recursion is no problem. Write numeric code as you ordinarily would; autograd handles the gradients

AUTOGRAD EXAMPLES
Need new or tweaked partial derivatives? Not a problem.
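With Python's autograd, for example, a new primitive and its partial derivative can be registered in a few lines (logsumexp is an illustrative choice, close to autograd's own tutorial example):

    import autograd.numpy as np
    from autograd import grad
    from autograd.extend import primitive, defvjp

    @primitive
    def logsumexp(x):
        """Numerically stable log(sum(exp(x)))."""
        m = np.max(x)
        return m + np.log(np.sum(np.exp(x - m)))

    # Register the vector-Jacobian product (the partial derivative):
    # d/dx logsumexp(x) = softmax(x) = exp(x - logsumexp(x))
    defvjp(logsumexp,
           lambda ans, x: lambda g: g * np.exp(x - ans))

    df = grad(lambda x: logsumexp(x) ** 2)
    print(df(np.array([0.1, 0.2, 0.3])))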

SO WHAT DIFFERENTIATES N.NET LIBRARIES?
The granularity at which they implement autodiff... which is set by the partial derivatives they define

• Whole-Model: scikit-learn
• Layer-Based: Torch NN, Keras, Lasagne
• Full Autodiff: Autograd, Theano, TensorFlow

[Figure: the same compute graph (mult, add, sub, sq, sum) carved into one whole-model block, per-layer blocks, or per-operation blocks.]

SO WHAT DIFFERENTIATES N.NET LIBRARIES?
Do they actually implement autodiff, or do they wrap a separate autodiff library?

• Whole-Model: "Shrink-wrapped"
• Layer-Based: "Autodiff Wrappers"
• Full Autodiff: "Autodiff Engines"

SO WHAT DIFFERENTIATES N.NET LIBRARIES?
How is the compute graph built?

[Animated figure: the compute graph (input, W, b, target; mult → add → sub → sq → sum) being constructed node by node as the code runs.]

SO WHAT DIFFERENTIATES N.NET LIBRARIES?
How is the compute graph built?

• Explicit (NN, Caffe): no compiler optimizations, no dynamic graphs
• Ahead-of-Time (TensorFlow, Theano): dynamic graphs can be awkward to work with
• Just-in-Time (Autograd, Chainer): no compiler optimizations

SO WHAT DIFFERENTIATES N.NET LIBRARIES?
What is the graph?

• Static Dataflow (NN, Caffe): ops are nodes, edges are data, the graph can't change
• Hybrid (TensorFlow, Theano): ops are nodes, edges are data, but the runtime has to work hard for control flow
• JIT Dataflow (Autograd, Chainer): ops are nodes, edges are data, the graph can change freely
• Sea of Nodes (Click & Paleczny, 1995): ? (no library yet); control flow and dataflow are merged in this AST representation

WE WANT NO LIMITS ON THE MODELS WE WRITE
Why can't we mix different levels of granularity?

These divisions (Whole-Model, Layer-Based, Full Autodiff) are usually the function of wrapper libraries.

[Figure: the same three-column granularity diagram as before.]

NEURAL NET THREE WAYS
1. The most granular: using individual Torch functions (sketched below)
2. Composing pre-existing NN layers. If we need layers that have been highly optimized, this is good
3. Composing entire networks together (e.g. image captioning, GANs)
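A rough Python-autograd analogue of the first two styles (all names illustrative; the slides' Torch code is not reproduced here):

    import autograd.numpy as np
    from autograd import grad

    # Style 1: the most granular -- individual numeric functions
    def net_granular(params, x):
        (W1, b1), (W2, b2) = params
        h = np.tanh(np.dot(x, W1) + b1)
        return np.dot(h, W2) + b2

    # Style 2: composing pre-existing "layers" (plain closures here,
    # standing in for optimized NN modules)
    def dense(W, b, activation=lambda z: z):
        return lambda x: activation(np.dot(x, W) + b)

    def net_layered(params, x):
        (W1, b1), (W2, b2) = params
        return dense(W2, b2)(dense(W1, b1, np.tanh)(x))

    # Style 3 composes whole networks the same way, e.g.
    # decoder(p_dec, encoder(p_enc, image)) for image captioning.

    loss = lambda params, x, y: np.sum((net_layered(params, x) - y) ** 2)
    d_loss = grad(loss)   # the same API at every level of granularity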

NEURAL NETWORKS IN BIOLOGY
That have nothing to do with networks of neurons

Predicting:
• DNA binding (e.g. Kelley et al @ MIA)
• Molecular properties (Duvenaud et al @ MIA)
• Behavioral modeling (Johnson et al)
• DNA expression
• DNA methylation state
• Protein folding
• Image correction

IMPACT AT TWITTER
Prototyping without fear

• We try crazier, potentially high-payoff ideas more often, because autograd makes it essentially free to do so (we can write "regular" numeric code, and automagically pass gradients through it)
• We use weird losses in production: a large classification model uses a loss computed over a tree of class taxonomies
• Models trained with autograd are running on large amounts of media at Twitter
• Often "fast enough": no penalty at test time
• "Optimized mode" is nearly a compiler, but still a work in progress

OTHER AUTODIFF IDEAS
That haven't fully landed in machine learning yet

• Checkpointing — don't save all of the intermediate values; recompute them when you need them (memory savings, potentially a speedup if compute is faster than load/store, possibly good with pointwise functions like ReLU). MXNet, I think, is the first to implement this generally for neural nets.
• Mixing forward and reverse mode — called "cross-country elimination". No need to evaluate partial derivatives only in one direction! For diamond- or hourglass-shaped compute graphs, perhaps dynamic programming can find the right order of partial derivative folding.
• Stencils — image processing (convolutions) and element-wise ufuncs can be phrased as stencil operations. More efficient general-purpose implementations of differentiable stencils are needed (computer graphics does this; Guenter 2007, extended by DeVito et al 2016).
• Source-to-source — all neural net autodiff packages do either AOT or JIT graph construction, with operator overloading. The original autodiff (in FORTRAN, in the 80s) was source transformation, considered the gold standard for performance in the autodiff field. The challenge is control flow.
• Higher-order gradients — hessian = grad(grad(f)) (see the sketch below). Not many efficient implementations; they need to take advantage of sparsity. Fully closed versions exist in e.g. autograd, DiffSharp, Hype.
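With Python's autograd, for instance, higher-order derivatives compose directly (a minimal sketch):

    import autograd.numpy as np
    from autograd import grad

    f = lambda x: np.sin(x) * x**3

    # Each call to grad returns another differentiable Python function
    df  = grad(f)     # f'
    d2f = grad(df)    # f'' (the 1-D case of hessian = grad(grad(f)))
    d3f = grad(d2f)   # f'''

    print(f(0.5), df(0.5), d2f(0.5), d3f(0.5))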

YOU SHOULD BE USING IT
It's easy to try

Anaconda is the de-facto distribution for scientific Python, and it works with Lua & Luarocks now: https://github.com/alexbw/conda-lua-recipes

QUESTIONS?
Happy to help. Find me at:
@awiltsch
[email protected]
github.com/alexbw
