The Bayesian Draughtsman: A Model for Visuomotor Coordination in Drawing

Ruben Coen Cagli(1), Paolo Coraggio(1), Paolo Napoletano(2), and Giuseppe Boccignone(2)

(1) DSF, Robot Nursery Laboratory - Università di Napoli Federico II, via Cintia, Napoli, Italy
{coen,pcoraggio}@na.infn.it
(2) Natural Computation Lab, DIIIE - Università di Salerno, via Ponte Don Melillo, 1 Fisciano (SA), Italy
{pnapoletano,boccig}@unisa.it

Abstract. In this article we present a model of realistic drawing that accounts for visuomotor coordination, namely the strategies adopted to coordinate the processes of eye and hand movement generation during the drawing task. Starting from background assumptions suggested by eye-tracking experiments with human subjects, we formulate a Bayesian model of drawing activity. The resulting graphical model is shaped in the form of a Dynamic Bayesian Network that combines features of both the Input–Output Hidden Markov Model and the Coupled Hidden Markov Model, and provides interesting insight into mechanisms for the dynamic integration of visual and proprioceptive information.

1 Introduction

It has been argued that the function of art and the function of the visual brain are one and the same, and that the aims of art constitute an extension of the functions of the brain [1]. In this article we address a broader picture: that of art making as an extension of visuomotor coordination. We consider realistic drawing, that is, the activity of representing an original scene by means of visible traces on a surface (the canvas), trying to render the contours defining objects/regions within the scene as faithfully as possible on the canvas. Subjects engaged in this task clearly adopt a visuomotor strategy; further, even though strategies can vary significantly among individuals, interesting regularities can be observed. In a more general view, the issue we address here lies at the intersection of much current research in neuroscience, Active Vision, and Artificial Intelligence: the understanding and modeling of the strategies adopted by any agent situated in the world to coordinate vision and action in order to succeed in performing a given task. Sensorimotor coordination has been treated in the framework of either motor control (with or without feedback) or active perception. Sensorimotor models


usually reflect the functional architecture of the primate cortico-cerebellar system [4]. The most successful ones cast the issue of movement planning and execution as an optimization problem [5]. In such a framework the sensory apparatus is always considered passive. On the other hand, in the case of active vision, the object of study is the overt attentional process, namely how sensory resources are allocated, e.g. via eye movements (saccades). Models have been proposed that reflect the functional organization of the primate visual system and generate saccades on the basis of image properties alone [7] or combined with top-down cognitive influences [8]. In contrast with motor control research, eye-tracking research [9,10] has shown that most fixations are targeted to extract information that is relevant to the motor execution of the task. Further, recent results suggest that spatial attention is the consequence of motor preparation (the premotor theory of attention [12]). Yet, we lack a well-defined framework for integrating active vision models with feedback motor control strategies. In this article we present a computational model of realistic drawing that accounts for visuomotor coordination, namely the strategies adopted to coordinate the processes of eye and hand movement generation during the drawing task. The model extends a previous one [3], whose aim was to simulate the scanpath of the draughtsman, and is formulated in terms of a Bayesian generative model and its corresponding graphical model, a novel kind of Dynamic Bayesian Network (DBN). The rationale behind the adoption of a probabilistic framework is grounded in the fact that signals in sensory and motor systems are corrupted by variability and noise, and the nervous system needs to estimate the underlying states [6]. The background assumptions of the model rely upon eye-tracking experiments with human subjects, some of which are presented in the following Section.

2 Basic Assumptions and Behavioral Analysis

Eye tracking experiments on draughtsmen at work [2] provide evidence of two nested execution cycles: the longer, external cycle is an oscillation between periods when the hand is not drawing and globally distributed eye movements can be observed, and periods when the hand is tracing; within the tracing period a shorter nested cycle can be noticed, with eye movements localized alternately in small parts of the scene and the canvas. Further analysis [3] indicates that four main subtasks should be distinguished: 1) Segmentation of the original scene; 2) Evaluation of the emerging result; 3) Feature extraction for motion planning; 4) Visual feedback for motion control. The oscillation between local and global scanpaths may be understood by recalling that gaze–shifts can be considered as the motor realization of overt shifts of attention. Visual attention arises from the activation of those same circuits that process sensory and motor data [12]. In particular, selective attention for spatial locations is related to the dorsal visual stream that has been named


action pathway after Goodale and Humphrey [11], and is mainly devoted to triggering prompt actions in response to varying environmental conditions (Vision for Action). By contrast, selective attention for objects derives from the activation of ventral cortical areas involved in the perception pathway, which is responsible for object recognition and tightly integrated with high-level cognitive tasks of frontal areas (Vision for Perception [11]). Clearly, the two pathways are not segregated but cooperate/compete to provide a coherent picture of the world, and gaze control is the ultimate product of such integration. In this framework, behaviors 1 and 2, which require globally distributed eye movements, can be associated with the Vision for Perception stream, while 3 and 4 produce localized eye movements related to the Vision for Action stream. Thus, the oscillation can be seen as part of a high-level strategy, which takes advantage of the functional architecture of the human visual system to keep separate two classes of visual behaviors: the first is global in nature and perceptual in purpose, while the second is local and pragmatic, sub-serving a precise hand movement. In this article we focus exclusively on subtasks 3 and 4, since they tightly couple vision (eye movements) and action (hand drawing). Thus, in the following we take for granted that the viewed scene has already been segmented into a finite set of objects (cf. [3]).

Three assumptions can be introduced to capture the essential features that distinguish drawing from other tasks [3]:

1. All fixations on an object are executed within a time interval in which no fixations occur on other objects.
2. Fixations are distributed among the original objects according to the number of salient points on each object, and on each single object following the distribution of the most salient points.
3. The sequence of fixations on the original scene is constrained to maximize the continuity of tracing hand movements.

The first assumption states that a peculiar feature of the drawing behavior is that the gaze does not move back and forth among different objects, but proceeds sequentially: gaze is directed to an object only during the time that it is being copied, i.e. only when it becomes relevant to the task. Salient points can be defined as those with local orientation contrast [7] above a given threshold (see the sketch below), and the second assumption requires the draughtsman to move the gaze towards all salient points. This implies a segmentation which is finer than the initial object-based segmentation and is directly related to pragmatic sensorimotor control. The third assumption implies that feedback information on hand motion plays an important role in determining the actual scanpath. One possible implication is that the scanpath on the original scene should resemble a coarse-grained edge following along the contours of the objects, which, to the best of our knowledge, has never been reported in the eye-tracking literature.
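Since the model takes the salient points as given, the following is a minimal sketch of how they could be extracted. The gradient-based orientation map, the surround size, and the threshold value are illustrative assumptions rather than choices made in this paper, which only specifies local orientation contrast in the sense of [7].

```python
# A minimal sketch of salient-point extraction by local orientation
# contrast. Window size and threshold are illustrative assumptions.
import numpy as np
from scipy import ndimage

def orientation_map(img):
    """Per-pixel gradient orientation in [0, pi)."""
    gy, gx = np.gradient(img.astype(float))
    return np.mod(np.arctan2(gy, gx), np.pi)

def salient_points(img, surround=15, threshold=0.6):
    """Points whose local orientation differs strongly from their surround."""
    theta = orientation_map(img)
    # Average the surround orientation with the doubled-angle trick:
    # orientations are pi-periodic, so plain averaging would be wrong.
    c = ndimage.uniform_filter(np.cos(2 * theta), surround)
    s = ndimage.uniform_filter(np.sin(2 * theta), surround)
    surround_theta = 0.5 * np.arctan2(s, c)
    # |sin| of the orientation difference is 0 for aligned orientations
    # and 1 for orthogonal ones, and is itself pi-periodic.
    contrast = np.abs(np.sin(theta - surround_theta))
    ys, xs = np.nonzero(contrast > threshold)
    return list(zip(xs, ys))
```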

2.1 Experiments with Eye-Tracked Subjects

We performed eye-tracking experiments on three subjects who were instructed to make realistic drawings of simple bidimensional shapes. Fig. 1 illustrates the experimental set-up (cf. Appendix A for details).

Fig. 1. Experimental setup for eye-tracking recordings during the drawing task. The Subject sits in front of a vertical Tablet. In the left half of the Tablet hand-drawn images are displayed, while the Subject is instructed to copy the images in the right half. The eye tracker integrates data from the Eye Camera and the Magnetic Sensor and Transmitter; eye position is then superimposed on the Scene Camera video stream, which takes the approximate subjective point of view.

Due to space constraints, we do not present the complete data analysis here, but focus on those aspects directly related to the hypotheses. In one of the trials the displayed image was composed of two closed contours that are spatially separated (Fig. 2(a)). Qualitative analysis showed that all the subjects started drawing the second object only after completing the first one. Thus we defined, for each subject, two time intervals, T1 and T2, corresponding to the two drawing phases, and two Regions Of Interest (ROI), R1 and R2, each containing one object. Fig. 3 shows, for each subject, the distribution of the number of fixations on the original image (F) over the three regions OFF, R1, R2. In accordance with assumption 1, the maximum of the distribution always falls in the region corresponding to the time interval considered, and the percentage of F in the wrong region is always below 13%. Analysis of the same trial also shows agreement with assumption 2, as appears from the comparison, for each subject, of the saliency map of the original image (Fig. 2(e)) with the x-y plot of the fixations for the complete trial (Fig. 2(b), 2(c), 2(d)) and the fixation map (Fig. 2(f), 2(g), 2(h)). Finally, the temporal sequence of fixations is addressed in Fig. 4, which shows, for each subject, the cumulative x-y plot of fixations at increasing times after the beginning of a trial with a curved shape; this provides evidence that the scanpath on the original image can be well described as a coarse-grained edge following.
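For concreteness, the ROI analysis described above can be summarized by a short sketch. The (t, x, y) fixation tuples and the bounding-box ROI format are hypothetical conveniences for illustration, not the actual data format delivered by the eye tracker.

```python
# Sketch of the per-phase ROI fixation-count analysis (Fig. 3).
from collections import Counter

def roi_of(x, y, rois):
    """Return the name of the ROI containing (x, y), or 'OFF'."""
    for name, (x0, y0, x1, y1) in rois.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return name
    return "OFF"

def fixation_distribution(fixations, rois, t_start, t_end):
    """Fraction of fixations per region within one drawing phase."""
    counts = Counter(
        roi_of(x, y, rois)
        for (t, x, y) in fixations
        if t_start <= t < t_end
    )
    total = sum(counts.values()) or 1
    return {r: counts.get(r, 0) / total for r in ["OFF", *rois]}

# Hypothetical usage with made-up boxes and fixations:
rois = {"R1": (0, 0, 200, 300), "R2": (250, 0, 450, 300)}
fixations = [(0.1, 50, 120), (0.5, 300, 80), (0.9, 60, 100)]
print(fixation_distribution(fixations, rois, t_start=0.0, t_end=1.0))
```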


Fig. 2. Figures 2(a) and 2(e) show respectively the Regions Of Interest and the saliency map in the two-object trial. For each subject (columns 2–4) we show the x-y plot (2(b), 2(c), 2(d)) of the fixations (circles) and the fixation map (2(f), 2(g), 2(h)).


Fig. 3. Distribution of the number of fixations over the three regions (OFF, R1 or R2). Each pair of plots refers to time intervals T1 and T2 for one subject.

3 The Model

The model accounts for the sensorimotor coupling between the Vision for Action stream and the motor system, and is based on four core modules and their interactions. Top-Down FOA Scheduling produces appropriate plans for generating gaze-shifts, while Motor Planning drives hand movement planning. The Action and Motor State modules play the role of generating suitable sensory inputs to the planning modules, respectively providing information extracted from the visual input along the visual dorsal stream, and information about the state of the hand on the basis of proprioception. Here we are not concerned with how such inputs are generated, but only with how they contribute to the joint planning of eye and hand movements. The tight interplay between saccades and hand movements is provided by the following cross-connections: a) Action → FOA Scheduling; b) Action → Motor



Fig. 4. Cumulative x-y plot of fixations (circles) at increasing times (left to right). Each row shows the results for one subject.

Planning; c) Motor State → Motor Planning; d) Motor State → FOA Scheduling; e) Motor Planning ↔ FOA Scheduling. Here a) to d) are input connections: in particular, b) and d) provide an indirect coupling between the visual and motor systems, since they express respectively the influence of visual information on the generation of hand movements (Vision for Action), and the influence of proprioceptive information about the state of the hand on the generation of a saccade (Proprioceptive Feedback). The bidirectional connection e) represents the direct reciprocal influence of eye and hand motor plans, which must unfold in time appropriately to preserve a task-specific causal relation between eye and hand movements. We call the two directions of this connection Hand To Eye (H2E) – i.e. the process of generating a saccade on the basis of the previous hand plan – and Eye To Hand (E2H) – the generation of a hand movement on the basis of previous saccades. Figure 5 outlines the functional model at a glance; in the same figure, the information flow between modules is represented via dotted lines. Inputs and outputs are formally identified in terms of the following variables:

– u: the input for the eye and hand movement planning processes; it comprises information regarding the perceived current position of the hand (fusing visual and proprioceptive data) and features extracted from the portion of the original image corresponding to the previous fixation;
– x^e: the state of the eye movement process, encoding the planned eye movement as a displacement vector relative to the current fixation point;
– y^e: the eye-movement output, encoding the performed displacement;
– x^h, y^h: the state and output of the hand-movement process, analogous to the eye state and output variables.


[Fig. 5 block diagram: modules Sensorimotor Prior Knowledge, Top-Down FOA Scheduling (x^e), Motor Planning (x^h), Inverse Kinematics, Perception, Action, Motor State, and Early Vision; inputs u1 (visual input) and u2 (proprioceptive input); outputs y^e (eye movement) and y^h (hand movement).]

Fig. 5. The functional architecture: each module (box) can be seen as an implementation of a specific process. Overlaid (dotted lines) is the underlying graphical model, which will be explicitly represented in Fig. 6.

Indeed, the computational problem we want to solve is the joint evaluation of the eye and hand movement states at a given time. To this end, we resort to a probabilistic Bayesian framework and consider the values of such variables as realizations of corresponding random variables. This way we can map the functional model outlined in Fig. 5 into the graphical model shown in Fig. 6, where nodes denote the random variables and arrows denote conditional dependencies. Note that, since we are dealing with a process unfolding in time, the network is in the form of a Dynamic Bayesian Network (DBN [13]), and the graph depicted in Fig. 6 pictures two temporal slices. Notice that, within each time slice, we assume a causal relation (directed edge) from eye movement to hand movement; this reflects the behavior we observed in the experiments on the drawing task, where most fixations could be classified as look-ahead [9], i.e. with the gaze moving to a location where the hand will move shortly after. In this framework, the input streams a) to d) can be treated as conditioning both planning processes through a single variable (the arrows out of the upper circles in Fig. 6). This way, the H2E process, which accounts for the probability of the current fixation conditional on the previous fixation and hand movement, can be formally modeled as the probability distribution $p(x^e_{t+1} \mid u_{t+1}, x^e_t, x^h_t)$. Similarly, we can write E2H, which considers the probability of the current hand movement given the current fixation and the previous hand movement, as $p(x^h_{t+1} \mid u_{t+1}, x^e_{t+1}, x^h_t)$. Both terms denote state-transition probabilities, and represent the core modules H2E and E2H respectively, enriched with the input.
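To make the two transition kernels concrete, the sketch below draws one time slice of the coupled process. The linear-Gaussian forms and their coefficients are purely illustrative assumptions; the model as stated does not commit to any parametric family.

```python
# One generative time slice of the coupled eye-hand model of Fig. 6.
# ASSUMPTION: linear-Gaussian kernels with made-up coefficients, used
# only to illustrate the conditioning structure of H2E and E2H.
import numpy as np

rng = np.random.default_rng(seed=1)

def step(u_next, xe_t, xh_t):
    # H2E: p(x^e_{t+1} | u_{t+1}, x^e_t, x^h_t) -- the next fixation
    # depends on the new input and the previous eye and hand states.
    xe_next = rng.normal(0.5 * xe_t + 0.3 * xh_t + 0.2 * u_next, 1.0)
    # E2H: p(x^h_{t+1} | u_{t+1}, x^e_{t+1}, x^h_t) -- the hand plan
    # depends on the new input, the fixation just planned (the
    # within-slice eye-to-hand edge), and the previous hand state.
    xh_next = rng.normal(0.6 * xh_t + 0.3 * xe_next + 0.1 * u_next, 1.0)
    return xe_next, xh_next

# Example: unroll a short trajectory from a given input sequence.
xe, xh = 0.0, 0.0
for u in [0.2, 0.4, 0.1]:
    xe, xh = step(u, xe, xh)
```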



Fig. 6. The IOCHMM for combined eye and hand movements. The gray circles denote the input (u) and output (y) variables. Dotted connections in the hidden layer highlight the subgraph that represents the E2H core module, while continuous connections denote H2E.

By considering again the dependencies in the graphical model, we can write the statistical dependence of the eye output signal on the corresponding state variable as the distribution $p(y^e_{t+1} \mid x^e_{t+1})$; similarly, for the output hand movement we can write the density $p(y^h_{t+1} \mid u_{t+1}, x^h_{t+1})$, which also depends on the input value. Both represent the emission probability distributions. Eventually, by generalizing the time-slice snapshot of Fig. 6 to a time interval $[1, T]$, we can write the joint distribution of the state and output variables, conditioned on the input variables, as:

$$
\begin{aligned}
p(\bar{x}_{1:T}, \bar{y}_{1:T} \mid \bar{u}_{1:T}) = {} & p(x^e_1 \mid u_1)\, p(y^e_1 \mid x^e_1)\, p(x^h_1 \mid u_1, x^e_1)\, p(y^h_1 \mid u_1, x^h_1) \\
& \cdot \prod_{t=1}^{T-1} \Big[ p(x^e_{t+1} \mid u_{t+1}, x^e_t, x^h_t)\, p(y^e_{t+1} \mid x^e_{t+1}) \\
& \qquad\quad \cdot\, p(x^h_{t+1} \mid u_{t+1}, x^e_{t+1}, x^h_t)\, p(y^h_{t+1} \mid u_{t+1}, x^h_{t+1}) \Big],
\end{aligned} \tag{1}
$$

where $\bar{u}_{1:T}$ denotes the input sequence from $t = 1$ to $T$, $\bar{x}_{1:T}$ denotes the pair of state sequences $(x^e_{1:T}, x^h_{1:T})$, and similarly for $\bar{y}_{1:T}$.
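As a sanity check on the factorization, the following sketch accumulates the log of Eq. (1) for a given trajectory. The conditional densities are passed in as callables, since their functional form is left open here.

```python
# Log of Eq. (1) for one trajectory, given the factor densities.
import numpy as np

def joint_log_prob(u, xe, xh, ye, yh,
                   p_xe0, p_xh0, p_xe, p_xh, p_ye, p_yh):
    """log p(x_{1:T}, y_{1:T} | u_{1:T}), factorized as in Eq. (1).

    u, xe, xh, ye, yh: length-T sequences of inputs, states and outputs.
    p_*: callables returning the density of the corresponding factor.
    """
    # Initial-slice factors.
    lp = (np.log(p_xe0(xe[0], u[0]))            # p(x^e_1 | u_1)
          + np.log(p_ye(ye[0], xe[0]))          # p(y^e_1 | x^e_1)
          + np.log(p_xh0(xh[0], u[0], xe[0]))   # p(x^h_1 | u_1, x^e_1)
          + np.log(p_yh(yh[0], u[0], xh[0])))   # p(y^h_1 | u_1, x^h_1)
    # Transition and emission factors for t = 1 .. T-1.
    for t in range(len(u) - 1):
        lp += np.log(p_xe(xe[t + 1], u[t + 1], xe[t], xh[t]))      # H2E
        lp += np.log(p_ye(ye[t + 1], xe[t + 1]))                   # eye emission
        lp += np.log(p_xh(xh[t + 1], u[t + 1], xe[t + 1], xh[t]))  # E2H
        lp += np.log(p_yh(yh[t + 1], u[t + 1], xh[t + 1]))         # hand emission
    return lp
```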

4 Discussion and Final Remarks

The formalization provided in the previous Section suggests that visuomotor coordination requires a regular switching in time between the two modalities E2H and H2E, which depends on the inputs and outputs; this results in a DBN graphical model that unifies two kinds of DBNs known in the literature, the Input–Output Hidden Markov Model and the Coupled Hidden Markov Model [13]. We call the DBN represented in Fig. 6 an Input–Output Coupled Hidden Markov Model (IOCHMM). It is worth noting, though beyond the scope of this article, that the joint probability distribution in Eq. (1) can be further simplified in terms of a mean field


approximation [13], by defining suitable potential functions that express local dependencies among the hidden and input variables; standard algorithms for network learning and inference can then be easily exploited [13]. Here, due to space limitations, we prefer to focus on the modeling part of our current research work, and the main result is that a Bayesian approach can be suitably adopted for sensorimotor integration in the drawing task; to the best of our knowledge this is the first attempt in this direction. On the one hand, the adoption of a Bayesian framework allows us to formalize a computational model in a principled way, by incorporating constraints and prior knowledge as derived from experimental observations of human subjects and theoretical findings in the current literature on visual spatial attention and sensorimotor coordination. On the other hand, the model reconciles the active vision and feedback motor control approaches, and we believe that understanding how such a formal model may be linked to the underlying activity in the visual and motor areas of the human brain could shed new light on the problem of visuomotor coordination in general. Interestingly, an anatomical correlate of the input stream that we related to Vision for Action is the existence of several frontoparietal circuits, by means of which the outputs of the visual dorsal stream are projected from IP to oculomotor and premotor areas [12]. Conversely, we suggest that the pathway related to Proprioceptive Feedback could correspond to the portion of the cortico-cerebellar loop in which the cerebellum returns projections to cortical areas of the frontal lobe via the thalamus [4]. Further, the core connections we called E2H and H2E could find a biological justification in the existence of cortico-cortical connections among premotor and oculomotor areas. Finally, current research work concentrates on performing more experiments with human draughtsmen in order to compare them with preliminary simulation results obtained via the IOCHMM prototype and its integration with the segmentation module developed in previous work.

Acknowledgments

The authors wish to express their gratitude to Prof. A. Marcelli for providing eye-tracking resources, and to Prof. G. Trautteur for enlightening discussions.

References

1. Zeki, S.: Inner Vision: An Exploration of Art and the Brain. Oxford University Press, Oxford, UK (1999)
2. Tchalenko, J., Dempere-Marco, R., Hu, X.P., Yang, G.Z.: Eye Movement and Voluntary Control in Portrait Drawing. In: The Mind's Eye: Cognitive and Applied Aspects of Eye Movement Research, ch. 33. Elsevier, Amsterdam (2003)
3. Coen Cagli, R., Coraggio, P., Napoletano, P.: DrawBot – A Bio-Inspired Robotic Portraitist. Digital Creativity Journal (in press, 2007)


4. Ramnani, N.: The primate cortico-cerebellar system: anatomy and function. Nature Reviews Neuroscience 7 (2006)
5. Todorov, E., Jordan, M.: Optimal feedback control as a theory of motor coordination. Nat. Neurosci. 5, 1226–1235 (2002)
6. Kording, K.P., Wolpert, D.M.: Bayesian decision theory in sensorimotor control. Trends in Cognitive Sciences 10(7) (2006)
7. Itti, L., Koch, C.: Computational modelling of visual attention. Nature Reviews Neuroscience 2(3), 194–203 (2001)
8. Pylyshyn, Z.W.: Situating vision in the world. Trends in Cognitive Sciences 4(5) (2000)
9. Land, M., Mennie, N., Rusted, J.: Eye movements and the roles of vision in activities of daily living: making a cup of tea. Perception 28, 1311–1328 (1999)
10. Hayhoe, M.M., Ballard, D.H.: Eye Movements in Natural Behavior. Trends in Cognitive Sciences 9(188) (2005)
11. Goodale, M.A., Humphrey, G.K.: The objects of action and perception. Cognition 67, 181–207 (1998)
12. Rizzolatti, G., Riggio, L., Sheliga, B.M.: Space and selective attention. In: Umiltà, C., Moscovitch, M. (eds.) Attention and Performance XV. MIT Press, Cambridge (1994)
13. Murphy, K.: Dynamic Bayesian Networks: Representation, Inference and Learning. PhD dissertation, University of California, Berkeley, Computer Science Division (2002)

A Experimental Settings

Eye scan records were obtained from three right-handed individuals (one female), aged 27–33. All had normal or corrected-to-normal vision. The experimental setup is shown in Fig. 1. Subjects were presented with a horizontal tablet, 40 cm × 30 cm, viewed binocularly from such a distance that they could comfortably draw. Slight head movements were allowed. In the left half of the tablet hand-drawn images were displayed, while a white sheet covered the right half. The original images represented simple contours drawn by hand with a black pencil on white paper. One image per trial was shown, and the subjects were instructed to copy its contours faithfully on the right half. These instructions made no specific mention of eye movements and imposed no constraints on execution time. The subject's left eye movements were recorded with a remote eye tracker (ASL Model 504) with the aid of a magnetic head tracker, with eye position sampled at a rate of 60 Hz. The instrument can integrate eye and head data in real time and can deliver a record with an accuracy of less than 1 deg.

Accordingly, the ERT data that are collected by using two-dimensional acquisition ... not only in enhancing the subsurface information but also as a survey design tool to .... logical structures and may not be appropriate for complex geologies.