A generative traversability model for monocular robot self-guidance

Michael Sapienza and Kenneth P. Camilleri
Dept. of Systems and Control Engineering, University of Malta

July 31, 2012


Why?

- Explore dangerous/unknown environments.
- Assist in household/office chores.
- Interact with us: Human-Robot Interaction (HRI).

[Figures: exploration robot; HRI]


Outline

1. Introduction: Computer Vision; Monocular Camera Motivation; Problem Definition
2. Previous Work: Traversability Detection; Objectives and Contributions
3. Methods: Feature Extraction; Classification; Vision Algorithm; Locomotion
4. Experimental Results
5. Conclusion

Introduction

Computer Vision

"Vision is the process of discovering from images what is present in the world, and where it is."

[Figure: a grid of raw pixel intensity values, illustrating an image as the computer receives it.]

"If vision is really an information processing task, then I should be able to make my computer do it..." [Marr, 1982]

Monocular Camera Motivation

Camera Advantages
- Passive and low-cost.
- Actions driven by environment semantics.
- Permits communication with the robot through images.

[Figure: the VISAR01 robot]

Single-Image Advantages
- Lower processing cost compared to stereo vision.
- Single images contain enough information for navigation.

Problem Definition

Traversability Detection
Finding the set of pixels that defines the boundary between traversable ground regions and obstacle regions.

What does it involve? Binary image segmentation: the segmentation of an image into two classes, {0, 1}.

Previous Work

Traversability Detection

- MIT, Pebbles robot [Lorigo et al., 1997]: unstructured environments, safe window, patch-based, threshold on histogram matching.
- CMU, robotic wheelchair [Ulrich & Nourbakhsh, 2000]: indoor environments, safe window, pixel-based, histogram thresholding.
- Cranfield University [Katramados et al., 2009]: outdoor environments, safe window, pixel-based, temporal histogram model.
- Georgia Institute of Technology [Kim et al., 2007]: outdoor environments, self-supervised, superpixel-based, probabilistic model.

Objectives and Contributions

Project Goals
Design a real-time vision framework that allows a mobile robot to guide itself in an unknown environment using a low-resolution monocular camera.

Contributions
- A complementary set of colour and texture features.
- A smaller safe window, allowing movement in close proximity to obstacles.
- A novel generative approach that models the feature dissimilarity distribution.

Underlying Assumptions
- Initially, a safe region in front of the robot is traversable.
- The ground and obstacles can be differentiated by their appearance.

Methods

Feature Extraction

Descriptive Features

Colour
Illumination-invariant colour features:
- Hue and Saturation from the HSV colour space.
- Combinations of channels from the YCbCr and LAB colour spaces.

[Figure: HSV colour cone]

Texture
Colour is not always reliable (e.g. white walls, or different objects with similar colours). Texture features describe a pixel's intensity in relation to its neighbours:
- Edge magnitudes and orientation patterns.
- Local Binary Patterns (LBP), sketched below.

[Figure: calculating the LBP(8,1) code]
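To make the texture feature concrete, here is a minimal NumPy sketch of the standard LBP(8,1) operator; the neighbour ordering and the >= comparison convention are common choices assumed here, since the slide shows the computation only pictorially.

```python
import numpy as np

def lbp_8_1(gray):
    """LBP(8,1) code for each interior pixel of a grayscale image:
    each of the 8 immediate neighbours contributes one bit, set when
    the neighbour's intensity is >= the centre pixel's."""
    g = np.asarray(gray, dtype=np.int32)
    c = g[1:-1, 1:-1]  # centre pixels (interior of the image)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]  # clockwise from top-left
    codes = np.zeros_like(c)
    for bit, (dy, dx) in enumerate(offsets):
        # neighbour plane shifted by (dy, dx), aligned with the centres
        n = g[1 + dy : g.shape[0] - 1 + dy, 1 + dx : g.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.int32) << bit
    return codes  # one 8-bit code (0..255) per interior pixel
```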

Classification Primitives

Pixels
May result in noisy/spotty classification.

Patches
Allow local feature distributions to be extracted, but may contain multiple object boundaries.

Superpixels
Groups of homogeneous pixels:
- Computationally efficient.
- Preserve image structure.
- Allow rich statistics to be extracted from perceptually meaningful regions.

[Figures: pixel-, patch-, and superpixel-based segmentations]
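As a sketch of how such per-region statistics can be gathered, the hypothetical helper below accumulates one normalised feature histogram per superpixel, given a label map from any off-the-shelf oversegmentation; the function name, bin count, and value range are illustrative, not taken from the slides.

```python
import numpy as np

def superpixel_histograms(feature, labels, bins=16, rng=(0.0, 1.0)):
    """One normalised histogram per superpixel: `feature` is a per-pixel
    feature map (e.g. hue), `labels` an integer superpixel label map of
    the same shape."""
    hists = {}
    for sp in np.unique(labels):
        vals = feature[labels == sp]           # pixels of this superpixel
        h, _ = np.histogram(vals, bins=bins, range=rng)
        hists[sp] = h / max(h.sum(), 1)        # normalise to sum to 1
    return hists
```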

Classification

Binary Classification Problem Definition

The Classification Objective
Classify each superpixel $S$ with its correct class label $\Theta \in \{\theta_1, \theta_2\}$, given a vector of traversability cues $X = \langle X_1, X_2, \ldots, X_j, \ldots, X_n \rangle$, which is a function of the dissimilarity between the superpixel $S$ and the model region $M$.

Dissimilarity Metric
The G-statistic between the superpixel histogram $h_j^S$ and the model-region histogram $h_j^M$ of feature $j$, with $B$ bins:

$$g(h_j^S \,\|\, h_j^M) = 2 \sum_{b=1}^{B} h_j^S[b] \log \frac{h_j^S[b]}{h_j^M[b]} \tag{1}$$
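A direct NumPy transcription of Eq. (1) might look as follows; the epsilon smoothing against empty bins is an added assumption, since the slide does not say how zero-count bins are handled.

```python
import numpy as np

def g_statistic(h_s, h_m, eps=1e-10):
    """G-statistic (Eq. 1) between a superpixel histogram h_s and a
    model-region histogram h_m; both are normalised to sum to 1, and
    eps guards against empty bins."""
    h_s = np.asarray(h_s, dtype=float)
    h_m = np.asarray(h_m, dtype=float)
    h_s = h_s / (h_s.sum() + eps)
    h_m = h_m / (h_m.sum() + eps)
    return 2.0 * np.sum(h_s * np.log((h_s + eps) / (h_m + eps)))
```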

Simple Prior

Probability of a Traversable Surface?
A prior that favours superpixels closer to the robot as being traversable:

$$P(C \mid \Theta = 1) = \frac{1}{Y_1} \exp(-\lambda C) \tag{2}$$

$$P(C \mid \Theta = 0) = \frac{1}{Y_0} \bigl(1 - \exp(-\lambda C)\bigr) \tag{3}$$

$$P(\Theta = 1 \mid C) = \frac{1}{1 + \frac{Y_1}{Y_0}\left(e^{\lambda C} - 1\right)} \tag{4}$$

[Plot: $P(\Theta = 1 \mid C)$ decaying from 1 towards 0 as $C$ increases from 0 to 120.]

In Practice
The height $C$ of the superpixel centre point in the image defines the prior likelihood of finding traversable ground.
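Eq. (4) is straightforward to evaluate; the sketch below uses illustrative values for $\lambda$, $Y_1$, and $Y_0$, which the slides do not specify.

```python
import numpy as np

def prior_traversable(c, lam=0.05, y1=1.0, y0=1.0):
    """Prior probability that a superpixel is traversable (Eq. 4), given
    the image height c of its centre (c = 0 at the bottom of the image,
    nearest the robot); equals 1 at c = 0 and decays as c grows."""
    return 1.0 / (1.0 + (y1 / y0) * (np.exp(lam * np.asarray(c)) - 1.0))
```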

Likelihood Functions

A Truncated Exponential Mixture Model
What is the probability of a dissimilarity metric value, given that it was generated from the traversable or the non-traversable class?

$$P(X_j \mid \Theta = \theta_1) = \frac{1}{Z_1} \exp(-\alpha_{j1} X_j) \tag{5}$$

$$P(X_j \mid \Theta = \theta_2) = \frac{1}{Z_0} \exp(\alpha_{j2} X_j) \tag{6}$$

The Evidence
Normalized traversability cue values accumulated in a histogram; the values are obtained from the Static Traversability dataset.
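Here is a sketch of Eqs. (5)-(6), assuming cues normalised to $[0, x_{max}]$ so that the normalisers $Z_1$ and $Z_0$ can be computed in closed form over the truncated support; the truncation range itself is left implicit on the slide.

```python
import numpy as np

def likelihoods(x, a1, a2, x_max=1.0):
    """Class-conditional likelihoods of a cue value x under the
    truncated exponential model of Eqs. (5)-(6); Z1 and Z0 normalise
    each density over the assumed support [0, x_max]."""
    z1 = (1.0 - np.exp(-a1 * x_max)) / a1  # integral of exp(-a1*t) over [0, x_max]
    z0 = (np.exp(a2 * x_max) - 1.0) / a2   # integral of exp(+a2*t) over [0, x_max]
    p_trav = np.exp(-a1 * x) / z1          # theta_1: traversable
    p_obst = np.exp(a2 * x) / z0           # theta_2: non-traversable
    return p_trav, p_obst
```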

Naive Bayes Classification

Posterior Probability
To which class $\Theta$ does the vector of traversability cues $X$ belong? From Bayes' rule:

$$P(\Theta = \theta_l \mid X_1 \ldots X_n) = \frac{\hat{P}(\Theta = \theta_l)\, P(X_1 \ldots X_n \mid \Theta = \theta_l)}{\sum_m \hat{P}(\Theta = \theta_m)\, P(X_1 \ldots X_n \mid \Theta = \theta_m)} \tag{7}$$

Assuming the traversability cues are conditionally independent:

$$P(X_1 \ldots X_n \mid \Theta) = \prod_{j=1}^{n} P(X_j \mid \Theta) \tag{8}$$

Using the maximum a posteriori (MAP) decision rule:

$$\Theta \leftarrow \arg\max_{\theta_l} P(\Theta = \theta_l) \prod_{j=1}^{n} P(X_j \mid \Theta = \theta_l) \tag{9}$$
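Putting Eqs. (4)-(9) together, a MAP decision for one superpixel could be sketched as below, reusing the likelihoods() function above; working in log-space is a standard numerical-stability choice, not something stated on the slide, and all argument names are illustrative.

```python
import numpy as np

def map_classify(cues, alpha1, alpha2, prior_trav):
    """MAP label for one superpixel (Eq. 9), computed in log-space.
    `cues` holds the cue values X_1..X_n, alpha1/alpha2 the per-cue
    rate parameters, and prior_trav comes from Eq. (4)."""
    log_p1 = np.log(prior_trav)          # class theta_1: traversable
    log_p2 = np.log(1.0 - prior_trav)    # class theta_2: non-traversable
    for x, a1, a2 in zip(cues, alpha1, alpha2):
        p_trav, p_obst = likelihoods(x, a1, a2)   # Eqs. (5)-(6), above
        log_p1 += np.log(p_trav)
        log_p2 += np.log(p_obst)
    return 1 if log_p1 >= log_p2 else 0   # 1 = traversable
```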

Expectation Maximization

Learning Task
Estimate a hypothesis $h_j = \langle \alpha_{j1}, \alpha_{j2} \rangle$ that describes the rate parameters of the truncated exponential mixture model.

EM Process
- E-step: calculate the expected value $E[X \mid \Theta = \theta_l]$, assuming the current hypothesis $h$ holds.
- M-step: calculate the new maximum-likelihood hypothesis $h' = \langle \alpha'_{j1}, \alpha'_{j2} \rangle$, assuming that the values of $E[X \mid \Theta = \theta_l]$ are those calculated in the E-step.
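The slide gives only the shape of the E and M steps, so the following is a generic EM sketch for a two-component exponential mixture over cue values, ignoring the truncation for brevity; the updates are the standard weighted maximum-likelihood ones, not the authors' exact equations, and all starting values are illustrative.

```python
import numpy as np

def em_exponential_mixture(x, a1=5.0, a2=5.0, w=0.5, iters=50):
    """Generic EM for a mixture of one exponential decaying in x
    (traversable, rate a1) and one mirrored to grow with x
    (non-traversable, rate a2), with mixing weight w."""
    x = np.asarray(x, dtype=float)
    x_max = x.max()
    for _ in range(iters):
        # E-step: responsibility of the traversable component per sample
        p1 = w * a1 * np.exp(-a1 * x)
        p2 = (1.0 - w) * a2 * np.exp(-a2 * (x_max - x))
        r = p1 / (p1 + p2)
        # M-step: weighted maximum-likelihood updates of rates and weight
        a1 = r.sum() / (r * x).sum()
        a2 = (1.0 - r).sum() / ((1.0 - r) * (x_max - x)).sum()
        w = r.mean()
    return a1, a2, w
```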

Vision Algorithm

[Figure: flowchart of the complete vision algorithm.]

Locomotion

Depth Estimation
The orientation and distance of obstacle regions can be calculated using trigonometric identities.

Boundary Interpretation
The polar range plot is analysed, and the largest obstacle-free areas beyond a predefined distance are identified.
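As an illustration of the trigonometry involved, the sketch below recovers the ground-plane distance of a pixel row under a flat-ground assumption; the camera height, tilt, focal length, and principal-point values are all hypothetical, since the slides give no calibration.

```python
import numpy as np

def ground_depth(v, cam_height=0.3, tilt=0.0, f=250.0, v0=120.0):
    """Horizontal distance to a ground-plane point imaged at pixel row v,
    for a camera at height cam_height (m), tilted down by `tilt` (rad),
    with focal length f and principal-point row v0 (pixels). The ray
    through row v meets the ground at distance h / tan(angle below
    the horizontal); valid only for rays that look below the horizon."""
    angle = tilt + np.arctan2(v - v0, f)  # angle below the horizontal
    return cam_height / np.tan(angle)     # requires angle > 0
```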

Experimental Results

Results

[Embedded demonstration video.]

Conclusion

Summary and Conclusion
- Design of a vision system that enables a robot to guide itself safely in proximity to obstacles, using the smallest reported safe window.
- The results demonstrate its competence in various indoor and outdoor environments without the need for prior training, a temporal model, or adjustments to the system parametrization.
- We modelled the feature dissimilarity distribution with a truncated exponential mixture.
- Simple movement behaviour: move towards the largest open space.

Future Work
- Learn multiple ground models. How can the robot transition from one ground-surface type to another automatically?
- Build a temporal model of the traversable area: how does it change with robot movement?
- Drive movement from the probability of each superpixel being traversable, rather than from the binary classification result.

Questions?

References

Katramados, I., Crumpler, S., & Breckon, T. (2009). Real-time traversable surface detection by colour space fusion and temporal analysis. In Int. Conf. on Computer Vision Systems, vol. 5815 (pp. 265-274).

Kim, D., Oh, S., & Rehg, J. (2007). Traversability classification for UGV navigation: a comparison of patch and superpixel representations. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (pp. 3166-3173).

Lorigo, L., Brooks, R., & Grimson, W. (1997). Visually-guided obstacle avoidance in unstructured environments. In IEEE/RSJ Int. Conf. on Intelligent Robots and Systems (pp. 373-379).

Marr, D. (1982). Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. W.H. Freeman and Company, first edition.

Ulrich, I. & Nourbakhsh, I. (2000). Appearance-based obstacle detection with monocular color vision. In AAAI Conf. on Artificial Intelligence (pp. 866-871).
