
The BOSARIS Toolkit: Theory, Algorithms and Code for Surviving the New DCF

Niko Brümmer and Edward de Villiers AGNITIO Research, South Africa

SRE’11 Analysis Workshop, Atlanta, 6–7 December 2011


Agenda

The purpose of this presentation is to:
1. Introduce the BOSARIS Toolkit.
2. Discuss database size requirements.
3. Introduce the normalized Bayes error-rate plot, a tool for evaluating calibrated likelihood-ratios.
4. Discuss interesting relationships between DET-curves, minDCF and EER.


Outline

1. The BOSARIS Toolkit (Introduction, Collaborators, Practical details)
2. How many trials do we need?
3. Normalized Bayes error-rate plot
4. Relationships between DET/ROC, EER and minDCF


The BOSARIS Toolkit Introduction

The BOSARIS Toolkit is a freely available MATLAB toolkit for processing binary classifier scores. The emphasis is on:
- Efficient processing of large trial lists.
- Coverage of a wide range of operating points, including the challenging 'new DCF'.
- Evaluation of the goodness of both uncalibrated scores and calibrated likelihood-ratios.
- Fusion and calibration.


The BOSARIS Toolkit Collaborators

The Toolkit was created during SRE'10 and the subsequent BOSARIS Workshop.
- Core implementation by AGNITIO.
- Collaboration with BUT, CRIM, SRI, Politecnico di Torino, SVOX and the University of Zaragoza.
- Notable contributions from Lukáš Burget, Oldřich Plchot and Nicolas Scheffer.


The BOSARIS Toolkit Practical details

http://sites.google.com/site/bosaristoolkit
Documentation:
- User Guide (contents similar to this presentation)
- User Manual (with practical coding details)
MATLAB version: R2008a or later (the toolkit has an object-oriented API).
Platform-independent interface with other tools:
- (large) text files
- (more efficient) binary HDF5 files


Outline
1. The BOSARIS Toolkit
2. How many trials do we need? (Counting errors, The problem, Analysis, Rule of thumb)
3. Normalized Bayes error-rate plot
4. Relationships between DET/ROC, EER and minDCF


How many trials do we need? Counting errors

We need error counts for everything:
- All of the evaluation criteria considered below depend, explicitly or implicitly, on counting errors.
- Training of fusion and calibration, and more generally all kinds of system optimization, depend on evaluation criteria and therefore on error counts.


How many trials do we need? The problem

The problem with error counting is this: for any system (no matter how accurate) and for any supervised database (no matter how large), there are operating points where the false-alarm or miss counts become small or even vanish. Small or zero error counts give unreliable estimates of the error-rates to be expected on unseen data.


How many trials do we need? Analysis

This problem can be analysed with various frequentist (e.g. confidence interval) or Bayesian (e.g. credible interval) methods. The answers will differ, because different analyses depend on different assumptions to make them tractable.


How many trials do we need? Analysis

One such analysis, Doddington's Rule of 30, uses the assumption of independent Bernoulli trials to recommend: you need at least 30 errors for the count frequency to give a probably approximately correct error-rate estimate. We found in SRE'10 and afterwards that this is a useful, practical rule of thumb.


How many trials do we need? Rule of thumb

Rule of thumb: when training, fusing, calibrating, testing, evaluating, etc., you need the supervised database to be large enough so that, for the system under consideration, you get at least 30 false alarms and at least 30 misses at all operating points of interest. The BOSARIS Toolkit makes provision for indicating on various plots where the error counts drop below 30. A minimal check of this rule is sketched below.
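
The sketch below (base MATLAB, with synthetic scores and an illustrative threshold; this is not the toolkit's own reporting) counts the errors at one operating point and flags a violation of the rule:

    % Doddington's Rule of 30: check the error counts at one operating point.
    tar = randn(1, 2000) + 2;                 % synthetic target scores
    non = randn(1, 100000);                   % synthetic non-target scores
    threshold = 4.0;                          % operating point under consideration
    n_miss = sum(tar <= threshold);           % targets rejected here
    n_fa   = sum(non >  threshold);           % non-targets accepted here
    if n_miss < 30 || n_fa < 30
        warning('Fewer than 30 misses or false alarms: error-rate estimate unreliable.');
    end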


Outline
1. The BOSARIS Toolkit
2. How many trials do we need?
3. Normalized Bayes error-rate plot (Introduction, Bayes risk, Bayes error-rate, The plot)
4. Relationships between DET/ROC, EER and minDCF


Normalized Bayes error-rate plot

The Normalized Bayes error-rate plot is a new tool for: calibration-sensitive evaluation of the decision-making ability of system likelihood-ratios, over a representative range of operating points. We show how to generalize the familiar DCF to construct such plots.


DCF

DCF evaluates submitted hard decisions, made by the evaluee:

DCF = π Cmiss Pmiss + (1 − π) Cfa Pfa

where 0 < π < 1 is the target prior, Cmiss, Cfa > 0 are costs, and Pmiss, Pfa are the empirical miss and false-alarm rates at some score decision threshold set by the evaluee. DCF evaluates at a fixed, known operating point.
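
A minimal MATLAB sketch of this computation on synthetic scores (the score vectors, the threshold and the cost parameters below are illustrative assumptions, not toolkit calls):

    % DCF for hard decisions made at an evaluee-chosen score threshold.
    p_tar = 0.001;  Cmiss = 1;  Cfa = 1;      % example operating point ('new DCF' style)
    tar = randn(1, 2000) + 2;                 % synthetic target scores
    non = randn(1, 100000);                   % synthetic non-target scores
    threshold = 3.5;                          % decision threshold set by the evaluee
    Pmiss = mean(tar <= threshold);           % empirical miss rate
    Pfa   = mean(non >  threshold);           % empirical false-alarm rate
    DCF   = p_tar*Cmiss*Pmiss + (1 - p_tar)*Cfa*Pfa;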


Bayes risk

Bayes risk evaluates submitted likelihood-ratios:

R(π, Cmiss, Cfa) = π Cmiss Pmiss(η) + (1 − π) Cfa Pfa(η)

where decisions are made by the evaluator by comparing the likelihood-ratios against the Bayes decision threshold:

η = (1 − π) Cfa / (π Cmiss)

Bayes risk can evaluate over a range of operating points, by varying the parameters π, Cmiss and Cfa.
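
To make the recipe concrete, here is a minimal sketch of the evaluator's side (synthetic log-likelihood-ratios, offered as an illustration rather than the toolkit's evaluation code): threshold the submitted LRs at the Bayes threshold η and accumulate the risk.

    % Bayes risk of submitted likelihood-ratios; decisions made by the evaluator.
    p_tar = 0.01;  Cmiss = 10;  Cfa = 1;             % one choice of (prior, costs)
    tar_llr = randn(1, 2000) + 2;                    % synthetic target log LRs
    non_llr = randn(1, 100000) - 2;                  % synthetic non-target log LRs
    eta   = (1 - p_tar)*Cfa / (p_tar*Cmiss);         % Bayes decision threshold on the LR
    Pmiss = mean(tar_llr <  log(eta));               % miss rate at the Bayes threshold
    Pfa   = mean(non_llr >= log(eta));               % false-alarm rate at the Bayes threshold
    risk  = p_tar*Cmiss*Pmiss + (1 - p_tar)*Cfa*Pfa;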


Bayes error-rate From 3 to 1 dimensions

How can we cover all values of π, Cmiss and Cfa in a single evaluation procedure? We show that these 3 parameters can be combined into a one-dimensional operating point, without loss of generality.


Bayes error-rate From 3 to 1 dimensions

Define the effective prior:

π̃ = π Cmiss / (π Cmiss + (1 − π) Cfa)

and the Bayes error-rate:

E(π̃) = R(π̃, 1, 1) = R(π, Cmiss, Cfa) / (π Cmiss + (1 − π) Cfa)
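
A minimal sketch of this reduction, again on assumed synthetic LLR scores: compute the effective prior from (π, Cmiss, Cfa) and evaluate E directly as the error-rate at the corresponding Bayes threshold.

    % Effective prior and Bayes error-rate.
    p_tar = 0.01;  Cmiss = 10;  Cfa = 1;
    pi_eff = p_tar*Cmiss / (p_tar*Cmiss + (1 - p_tar)*Cfa);   % effective prior
    tar_llr = randn(1, 2000) + 2;                             % synthetic target log LRs
    non_llr = randn(1, 100000) - 2;                           % synthetic non-target log LRs
    log_eta = log(1 - pi_eff) - log(pi_eff);                  % Bayes threshold (unit costs)
    E = pi_eff*mean(tar_llr < log_eta) + (1 - pi_eff)*mean(non_llr >= log_eta);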


Bayes error-rate Error-rate and risk are equivalent

So: Bayes error-rate and Bayes risk are proportional,

E(π̃) = k R(π, Cmiss, Cfa)

where k > 0 does not depend on the system under evaluation. For evaluation purposes, these two criteria are equivalent.


Bayes error-rate Operating point

We have constructed the Bayes error-rate criterion:

E(π̃) = π̃ Pmiss( (1 − π̃) / π̃ ) + (1 − π̃) Pfa( (1 − π̃) / π̃ )

which is parametrized by a single, scalar 'operating point' parameter, 0 < π̃ < 1. Note that π̃ plays two roles:
- it weights the error-rates,
- it determines the decision threshold.


Bayes error-rate Operating point

NIST's operating points were:
- 'Old DCF': π̃ ≈ 0.092
- 'New DCF': π̃ = 0.001
But now we generalize: we construct an evaluation recipe that sweeps the operating point, just like the DET curve does.
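
A quick check of these operating points as effective priors, assuming the usual NIST parameters (old DCF: Ptar = 0.01, Cmiss = 10, Cfa = 1; new DCF: Ptar = 0.001, Cmiss = Cfa = 1):

    eff_prior  = @(p, cm, cf) p*cm / (p*cm + (1 - p)*cf);
    old_dcf_pi = eff_prior(0.01, 10, 1);     % approximately 0.092
    new_dcf_pi = eff_prior(0.001, 1, 1);     % exactly 0.001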


Bayes error-rate Reference values

Default error-rate: decisions based on π̃ alone (i.e. LR = 1) give

E0(π̃) = min(π̃, 1 − π̃)

minDCF: the evaluator optimizes the decision threshold γ:

Emin(π̃) = minDCF(π̃, 1, 1) = min over γ of [ π̃ Pmiss(γ) + (1 − π̃) Pfa(γ) ]

At operating point π̃, a system is:
- well-calibrated, if E(π̃) ≈ Emin(π̃),
- badly calibrated, if E(π̃) > E0(π̃).
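
A minimal sketch (synthetic LLRs; variable names are illustrative) comparing the actual Bayes error-rate at one operating point against the two reference values:

    % E (actual), E0 (default decisions) and Emin (oracle threshold) at one operating point.
    pi_t = 0.092;                                            % example operating point
    tar = randn(1, 2000) + 2;  non = randn(1, 100000) - 2;   % synthetic log LRs
    log_eta = log(1 - pi_t) - log(pi_t);
    E  = pi_t*mean(tar < log_eta) + (1 - pi_t)*mean(non >= log_eta);
    E0 = min(pi_t, 1 - pi_t);
    labels = [true(size(tar)), false(size(non))];
    [~, order] = sort([tar, non]);  lab = labels(order);
    Pmiss = [0, cumsum(lab)  / numel(tar)];                  % error-rates for every threshold
    Pfa   = [1, 1 - cumsum(~lab) / numel(non)];
    Emin  = min(pi_t*Pmiss + (1 - pi_t)*Pfa);                % evaluator-optimized threshold
    % Well-calibrated at this operating point if E is close to Emin.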


Normalized Bayes error-rate plot Definition

To turn the Bayes error-rate into a graphical analysis tool, we plot it as a function of the operating point.

The vertical axis is the normalized Bayes error-rate:

y = E(π̃) / E0(π̃)

The horizontal axis is the (effective) prior log-odds:

x = logit(π̃) = log( π̃ / (1 − π̃) ) = − log η

where η is just the Bayes decision threshold.
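
A minimal sketch of such a plot on synthetic, assumed-calibrated LLRs (base MATLAB only, not the toolkit's plotting functions): sweep x = logit(π̃) and compute y = E(π̃)/E0(π̃).

    % Normalized Bayes error-rate plot.
    tar = randn(1, 2000) + 2;  non = randn(1, 100000) - 2;   % synthetic calibrated log LRs
    x = linspace(-10, 10, 201);                % prior log-odds axis
    p = 1 ./ (1 + exp(-x));                    % effective priors
    y = zeros(size(x));
    for i = 1:numel(x)
        log_eta = -x(i);                       % Bayes threshold: log(eta) = -logit(pi)
        E    = p(i)*mean(tar < log_eta) + (1 - p(i))*mean(non >= log_eta);
        y(i) = E / min(p(i), 1 - p(i));        % normalize by the default error-rate
    end
    plot(x, y);  xlabel('logit \pi');  ylabel('normalized Bayes error-rate');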


Normalized Bayes error-rate plot Axis amplification

Reasons for the axis amplification:
- Un-normalized E(π̃) becomes small to the point of invisibility near π̃ = 0 and π̃ = 1.
- x = logit(π̃) spreads out operating points that would otherwise be compressed against π̃ = 0 and π̃ = 1.


Normalized Bayes error-rate plot Examples

Below we show some examples of normalized Bayes error-rate plots, for:
- Synthetic data, to show the effects of deliberate miscalibration.
- LRs from three real automatic speaker recognition systems, from NIST SRE 2010.


Synthetic example

We generate Gaussian scores, with likelihoods:

P(s | target) = N(s | 3, 2),    P(s | non-target) = N(s | 0, 1)

The true log LR(s) is a parabola, with turning point at log LR ≈ −2. See the plot below.
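
A minimal sketch of the true log LR for this model (assuming the second parameter of N(· | µ, σ) is the standard deviation, which reproduces a parabola with its minimum near −2):

    % True log likelihood-ratio of the synthetic Gaussian score model.
    s = linspace(-3, 6, 500);
    gauss_logpdf = @(s, mu, sigma) -0.5*((s - mu)./sigma).^2 - log(sigma*sqrt(2*pi));
    log_lr = gauss_logpdf(s, 3, 2) - gauss_logpdf(s, 0, 1);
    plot(s, log_lr);  xlabel('score');  ylabel('log LR');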

[Figure: "Log LR for synthetic Gaussian scores" — the log LR (about −5 to 20) as a function of the score (−3 to 6); the curve is a parabola with its minimum near log LR = −2.]

Normalized Bayes error-rate plot Synthetic LR

[Figure: normalized Bayes error-rate versus logit π for the synthetic scores, comparing the true log LR against deliberate miscalibrations (2 × log LR, 0.5 × log LR, log LR + 2, log LR − 2), together with the log LR = 0 default, Emin, and markers where the counts drop below 30 false alarms or 30 misses.]

NIST SRE10: Example of Excellent Calibration

[Figure: BUT PLDA i-vector, condition 3. Normalized DCF versus logit Ptar (−10 to 0), showing the new DCF point, dev misses, dev false-alarms, dev act DCF, eval misses, eval false-alarms, eval min DCF, eval act DCF, and the eval DR30 marker.]

NIST SRE10: Example of Bad Calibration

[Figure: BUT i-vector full-cov, condition 2. Same axes and legend as the previous plot.]

NIST SRE10: Example of Worse Calibration

[Figure: BUT JFA10, condition 2. Same axes and legend as the previous plots.]

Outline
1. The BOSARIS Toolkit
2. How many trials do we need?
3. Normalized Bayes error-rate plot
4. Relationships between DET/ROC, EER and minDCF (Introduction, ROC and minDCF, Examples, Conclusion)


DET/ROC, EER and minDCF

We’ve discussed DCF and its generalization, Bayes error-rate, which are calibration-sensitive evaluation criteria. Here we are interested in criteria that evaluate the optimal decision-making potential of uncalibrated scores, when calibration is not of immediate interest. DET/ROC, EER and minDCF are well-known, but we discuss some interesting relationships between them.


DET vs ROC

For the purpose of this discussion, DET and ROC are equivalent:
- ROC has x = Pfa, y = Pmiss
- DET has x = probit(Pfa), y = probit(Pmiss)
where the probit function is the inverse of the normal CDF.
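
A minimal sketch (base MATLAB, synthetic scores) of the two coordinate systems; probit is written via erfinv to avoid a toolbox dependency, and the trivial end points map to ±∞ and are simply not drawn:

    % ROC coordinates and their DET (probit-warped) counterparts.
    probit = @(p) sqrt(2) * erfinv(2*p - 1);             % inverse of the normal CDF
    tar = randn(1, 2000) + 2;  non = randn(1, 100000);   % synthetic scores
    labels = [true(size(tar)), false(size(non))];
    [~, order] = sort([tar, non]);  lab = labels(order);
    Pmiss = cumsum(lab)  / numel(tar);                   % ROC: x = Pfa, y = Pmiss
    Pfa   = 1 - cumsum(~lab) / numel(non);
    plot(probit(Pfa), probit(Pmiss));                    % DET curve
    xlabel('probit(P_{fa})');  ylabel('probit(P_{miss})');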


ROC and minDCF

For decision threshold γ, denote:

DCF(γ | π, Cmiss, Cfa) = π Cmiss Pmiss(γ) + (1 − π) Cfa Pfa(γ)

and

minDCF(π, Cmiss, Cfa) = min over γ of DCF(γ | π, Cmiss, Cfa)
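
A minimal sketch of minDCF as a standalone helper function (min_dcf is a hypothetical name, not the toolkit API), computed by sweeping the threshold over all observed scores plus the trivial end points; tar and non are assumed to be row vectors of scores:

    function out = min_dcf(tar, non, p_tar, Cmiss, Cfa)
        % Best achievable DCF over all decision thresholds.
        labels = [true(size(tar)), false(size(non))];
        [~, order] = sort([tar, non]);  lab = labels(order);
        Pmiss = [0, cumsum(lab)  / numel(tar)];      % include threshold below all scores
        Pfa   = [1, 1 - cumsum(~lab) / numel(non)];
        out = min(p_tar*Cmiss*Pmiss + (1 - p_tar)*Cfa*Pfa);
    end

Saved as min_dcf.m and called as, for example, min_dcf(tar, non, 0.001, 1, 1), it approximates the 'new DCF' minimum for the given score vectors.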


ROCCH and minDCF

A point (x, y) is on the:

Steppy ROC curve, iff
y = (Cmiss / Cfa) x = (1 / Cfa) min over γ of max over π of DCF(γ | π, Cmiss, Cfa)

ROCCH curve, iff
y = (Cmiss / Cfa) x = (1 / Cfa) max over π of minDCF(π, Cmiss, Cfa)

Both curves are generated by sweeping Cmiss / Cfa from 0 to ∞.


Examples: Steppy vs ROCCH DET

[Figure: two example systems. Left panels: classical (steppy) DET curves versus ROCCH-DET curves, with the EER read off the ROCCH-DET (10.7% and 17.1%). Right panels: EER via minDCF on the classical DET — minDCF(Cmiss = Cfa = 1) as a function of Ptar, with the EER shown for reference.]

ROCCH and minDCF

Note:
- minDCF lives on the smooth ROCCH (ROC convex hull) curve, not exactly on the steppy ROC.
- The ROCCH curve is a tight lower bound for the steppy ROC (examples below).
- For large databases, the steps are tiny and the curves are close.
- The ROCCH curve has many attractive theoretical and practical properties (see the full paper); a sketch of computing the ROCCH from an empirical ROC follows below.
Here we conclude with one final comment.
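
A minimal sketch (base MATLAB) of computing the ROCCH as the lower convex hull of the empirical (Pfa, Pmiss) points; this is a generic monotone-chain construction for illustration, not the toolkit's PAV-based routine:

    % ROCCH as the lower convex hull of the empirical ROC points.
    tar = randn(1, 2000) + 2;  non = randn(1, 100000);   % synthetic scores
    labels = [true(size(tar)), false(size(non))];
    [~, order] = sort([tar, non]);  lab = labels(order);
    Pmiss = [0, cumsum(lab)  / numel(tar)];
    Pfa   = [1, 1 - cumsum(~lab) / numel(non)];
    P = unique([Pfa(:), Pmiss(:)], 'rows');              % sorted by Pfa, then Pmiss
    cross2 = @(a, b) a(1)*b(2) - a(2)*b(1);              % 2-D cross product
    hull = zeros(0, 2);
    for i = 1:size(P, 1)                                 % Andrew's monotone chain (lower hull)
        while size(hull, 1) >= 2 && ...
              cross2(hull(end,:) - hull(end-1,:), P(i,:) - hull(end-1,:)) <= 0
            hull(end, :) = [];
        end
        hull(end+1, :) = P(i, :);                        %#ok<AGROW>
    end
    plot(Pfa, Pmiss, '.', hull(:,1), hull(:,2), '-');    % steppy ROC versus its convex hull
    xlabel('P_{fa}');  ylabel('P_{miss}');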


ROCCH and EER: Final comment

Let the error-rate-ratio be R = Pmiss / Pfa. A point (x, y) is on the ROCCH curve, iff

y = R x = max over π of minDCF(π, Cmiss = R, Cfa = 1)

The location on the ROCCH curve is parametrized by R. The special case R = 1 gives the EER (equal-error-rate). In general, any point on the ROCCH curve can be interpreted as a tight upper bound for minDCF, over a range of operating points. This makes any such point an attractive objective criterion for system optimization. See the examples below.
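
For the R = 1 special case, a minimal sketch (base MATLAB, synthetic scores) that reproduces the right-hand panels of the example figures: sweep the prior, compute minDCF(π, 1, 1), and read the EER off its maximum.

    % ROCCH-EER as the maximum over the prior of minDCF(pi, 1, 1).
    tar = randn(1, 2000) + 2;  non = randn(1, 100000);       % synthetic scores
    labels = [true(size(tar)), false(size(non))];
    [~, order] = sort([tar, non]);  lab = labels(order);
    Pmiss = [0, cumsum(lab)  / numel(tar)];
    Pfa   = [1, 1 - cumsum(~lab) / numel(non)];
    p_grid = linspace(0.001, 0.999, 999);                    % sweep of the target prior
    mindcf_curve = arrayfun(@(p) min(p*Pmiss + (1 - p)*Pfa), p_grid);
    eer_estimate = max(mindcf_curve);                        % approximates the ROCCH-EER
    plot(p_grid, mindcf_curve);  xlabel('P_{tar}');  ylabel('minDCF(C_{miss} = C_{fa} = 1)');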


Examples: EER = max minDCF

[Figure: the same two systems as above. Left panels: EER read off the ROCCH-DET (10.7% and 17.1%). Right panels: minDCF(Cmiss = Cfa = 1) as a function of Ptar; its maximum equals the EER.]

Conclusion

If you are a MATLAB fan, go and try the BOSARIS Toolkit. If not, I hope the full paper has enough detail for others to implement some of these ideas in other languages.
