Aspect Coherence for Graph-Based Image Labelling

Giuseppe Passino, Ioannis Patras, Ebroul Izquierdo∗
Queen Mary, University of London
Mile End Road, London, E1 4NS, United Kingdom
{giuseppe.passino,ioannis.patras,ebroul.izquierdo}@elec.qmul.ac.uk

Abstract

Semantic image labelling is the task of assigning each pixel of an image to a semantic category. To this end, in low-level image labelling, a labelled training set is available. In such a situation, structural information about the correlation between different image parts is particularly important. When a part-based inference algorithm is used to associate semantic classes with pixels, however, a good choice of how to use structural information is crucial for learning an efficient and generalisable probabilistic model for the labelling task. In this paper we introduce an efficient way to take into account the correlation between different image parts, embedding the part relationships in a graph built according to the aspect coherence of neighbouring image patches.

1 Introduction

Low-level image analysis and inference is an important tool for semantic image classification, object detection and segmentation. One important challenge related to these tasks is to associate high-level concepts with images or patches within them. These research areas are experiencing a period of particular excitement due to the number of practical applications involved. In particular, some examples of systems that would benefit from advances in semantic image analysis are human-computer interaction systems, image database browsing and management, and Internet image search. The roles of a low-level image analysis system are indeed broad, as it can be used by itself or integrated in a more complex system for multimedia search, indexing and retrieval. In the field of content-based multimedia analysis research, a low-level image analysis system by definition does not take advantage of pre-existing high-level concept information and structures (e.g., ontologies). Instead, it relies entirely on features extracted from the images being analysed. The association between concepts and image features is given in a set of annotated examples that can be used for training. Particularly appealing is the problem of part-based image analysis, in which features are extracted from specific areas rather than from the whole image. In this scenario the inference system can deal with the appearance associated with pictured object instances instead of whole scenarios. Due to the ever-growing computational capabilities of modern computers, taking into account local features in a probabilistic framework is becoming gradually more feasible.

In this paper a system for automatic semantic image segmentation, based on patches obtained through oversegmentation and a discriminative probabilistic graphical model, is presented. Ultimately, we split the problem into grouping image areas having similar aspect models, and learning the association between aspects and high-level concepts. Oversegmentation can then be tackled by considering aspect dissimilarity between neighbouring patches. The advantages of using such an approach are:

• considering patches as basic elements of the probabilistic model eases the inference process, shifting the problem to a coarser domain;

• an accurate oversegmentation is likely to place patch boundaries on the actual object boundaries, giving a good basis for obtaining accurate pixel-level labelling and coherent feature extraction;

• the use of a discriminative graphical model makes it possible to directly model and learn the a posteriori probability distribution of the semantic categories given the observation (extracted features); this simplifies the learning process and thus makes it possible to consider richer information in the inference framework [5].

The paper is organised as follows: Section 2 presents a brief review of work related to the low-level semantic image labelling problem. In Section 3 the segmentation algorithm used to obtain the image patches is discussed, and the features associated with the so-obtained patches are discussed in Section 4. The learning process is treated in Section 5, and experimental results are presented in Section 6. Finally, Section 7 closes the paper with final comments on the proposed approach.

∗ The research leading to this paper was supported by the European Commission under contract FP6-027026, Knowledge Space of semantic inference for automatic annotation and retrieval of multimedia content - K-Space, and under the COST Action 292.

2 Related Work

Although image segmentation is an old problem in computer vision, semantic segmentation, that is, the association of a semantic category with each image pixel, is relatively recent. In this case the problem lies at two different levels: at the first level, there is the need to accurately identify object boundaries (the classical segmentation problem); additionally, different areas of the image have to be associated coherently with high-level concepts that are often broad and not related to a single aspect.

One of the most popular and successful methods for associating concepts with features in images is probabilistic Latent Semantic Analysis (pLSA) [16], an application to the image domain of the bag-of-words framework [4]. Features are grouped into visual words and associated with a set of latent topics (the flexibility of the framework means that no ground truth on the topics is required, even though partial segmentation of the training set helps the training). The main shortcoming of pLSA is that spatial information is discarded in favour of a simpler model. Different works have addressed this aspect recently [19, 9, 6]. Spatial weighting of the features according to neighbour similarities, imposition of structural potentials in the learning process, and considering features at different scales are common approaches to take spatial relationships into greater account. On the other hand, using a probabilistic model to accurately describe the assignment of patches to semantic categories is difficult due to the high dimensionality of the problem, and the complexity of learning a complete model for this task makes it intractable. In practice, an effective way to impose realistic constraints on the probability distribution function describing the likelihood of a particular segmentation is the use of graphical models. Representing image elements as nodes in a graph, an appropriate structure can be imposed on it to limit the interaction between random variables. Discriminative models [17] have been particularly successful: by directly estimating the a posteriori probability of a particular segmentation given the extracted features, they eliminate the need to explicitly learn the aspect model of the categories and allow strong independence assumptions on the observation to be relaxed. Conditional Random Fields (CRFs), introduced for text-labelling problems [5], have proven to be a valuable tool for image labelling [15, 2, 20].
A CRF is the discriminative version of a Markov Random Field, a probabilistic model in which the likelihood of a configuration is described by a function factorisable over the graph cliques. In [15] pixels are regarded as atomic elements in the probabilistic framework, and each pixel is a random variable with a certain distribution over the semantic topics. Such a system is characterised by high dimensionality and very local relationships between the entities considered in the inference. Longer-range interactions are integrated by using a boosting approach on texture descriptors. The main role of the random field is to impose a smooth labelling of the pixels. In [2] patches composed of different pixels are taken into account. CRFs, however, are not used to link adjacent patches but to relate patches to sets of spatial patterns at multiple scales, thus embedding long-range positional constraints. Basing the system on a set of patterns, on the other hand, reduces the generalisation capabilities of such an approach. Also in [20], where long-range relationships are incorporated via global descriptors, patches are arbitrarily chosen rectangular groups of pixels. The choice of patches in the images is a key step in order to learn a coherent aspect model for them [12]. Locating patches on salient points is a common choice for object detection systems, yielding stable and meaningful edge and texture features, but this strategy is not effective for segmentation problems because some concepts are not well represented and the distribution of the salient points is in general uneven. Another possibility is to base the analysis on oversegmented images (e.g., [13]). In [3] a Normalised Cuts (NCuts) [14] approach is used to obtain the oversegmentation. This approach has the advantage of splitting the image into patches at object boundaries, favouring homogeneous, consistent patches. The patches are analysed with a mixture of standard CRFs in which neighbouring patch labels are assigned through consideration of a concept compatibility table.

3 From Pixels to Patches

The segmentation approach used in [3] can be exploited to provide the CRF framework with a simple but stable reduced representation of the image. This eases the learning process, stabilises the extracted features, and allows proximity dependencies that extend to longer ranges than relations between adjacent pixels. This approach aims at breaking the complexity of the semantic segmentation problem. An accurate oversegmentation of the images is obtained, considering texture and edge information, but discarding the context (and the semantics). Patches are obtained using the approach presented in [1]. The target is an oversegmented image in which patches are homogeneous and can be related to a single semantic concept. For this purpose, the NCuts algorithm [14] is used. NCuts is a spectral clustering method whose aim is to cluster connected pixels, grouping them according to a similarity measure. Introducing the similarity matrix W = {w_ij}, in which w_ij measures the similarity between the pixels i and j, and considering pixels as nodes in a graph G = {V, E}, the cut between two disjoint subsets A, B ⊂ V is cut(A, B) = Σ_{a∈A, b∈B} w_ab, and vol(A) = Σ_{a∈A, v∈V} w_av. The NCuts algorithm for K clusters minimises the function

    KNcuts(V_1, ..., V_K) = Σ_{k=1}^{K} cut(V_k, V \ V_k) / vol(V_k) ,    (1)

where V_i ∩ V_j = ∅ for i ≠ j and ∪_k V_k = V. This problem can be solved efficiently (although not exactly) by computing the eigenvalues and eigenvectors of the generalised eigenproblem

    (D − W) y = λ D y ,    (2)

where D is the diagonal matrix of the vertex degrees, d_i = Σ_j w_ij. Minimising Equation (1) in practice leads to balanced regions, which will therefore have comparable areas (due to the terms vol(V_k)). The similarity measure used to calculate W is decisive for obtaining a good quality segmentation. However, since an oversegmentation is needed, this factor is less critical than in the classical image segmentation scenario. Splitting two regions at a real object boundary becomes more likely as the number of segments increases. A failure to do so can nevertheless result in spurious patches that will add noise in the learning phase.

[Figure 1: Some examples of oversegmentation (300 patches) using NCuts (original image size 321 × 214). In the lower row, ground truths for the same images are displayed.]

We use the similarity measure described in [10]. Region boundaries can occur either due to the presence of strong edges, or due to a change in the texture pattern. The nature of these two kinds of boundaries is very different. Boundaries due to edges give a strong response to gradient-based features, which fail in the presence of textures. On the other hand, textures are well represented by responses to Gaussian filter banks. The problem addressed in [10] is the combination of features of different natures to cope with these two types of boundaries. An example of the segmentation results obtained with such an approach is given in Fig. 1, together with the related ground truths for a visual evaluation of the accuracy of the category boundaries.
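To make Equations (1)-(2) concrete, the following is a minimal NumPy sketch of a two-way normalised cut: the generalised eigenproblem (D − W)y = λDy is reduced to a standard symmetric one through D^{-1/2}, and the second smallest eigenvector is thresholded at its median. This is only an illustration of the spectral relaxation, not the K-way NCuts implementation of [14] used in the paper.

```python
import numpy as np

def ncuts_bipartition(W):
    """Approximate two-way Normalised Cut: threshold the second smallest
    generalised eigenvector of (D - W) y = lambda D y.  The generalised
    problem is turned into a standard symmetric one via D^{-1/2}."""
    d = W.sum(axis=1)
    d_isqrt = 1.0 / np.sqrt(d)
    # Normalised Laplacian: D^{-1/2} (D - W) D^{-1/2}, elementwise scaling.
    L = (np.diag(d) - W) * np.outer(d_isqrt, d_isqrt)
    _, vecs = np.linalg.eigh(L)          # eigenvalues in ascending order
    y = d_isqrt * vecs[:, 1]             # map back: y = D^{-1/2} z
    return y > np.median(y)              # median split keeps sizes balanced

# Toy similarity matrix: two well-separated clusters of three nodes each.
W = np.full((6, 6), 0.01)
W[:3, :3] = 1.0
W[3:, 3:] = 1.0
labels = ncuts_bipartition(W)
```

On this toy graph the median split recovers the two blocks exactly; on real pixel graphs the relaxation is only approximate, as noted above.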

4 Feature Extraction

The classification is based on features extracted from the patches. A CRF is used to jointly learn the aspect of the patches and information on the relations among them. For the classification step, two types of features have been used:

Texture/edge features: SIFT descriptors [8] have been extracted from areas centred at the centre of gravity of the patches; the support for the textures is a disc proportional to the patch size; SIFT descriptors have proven to be a valid, compact way to summarise a patch aspect [11].

Colour features: we use the robust hue descriptor of [18], which provides invariance to a range of colour distortions. The support for these features is the entire patch area, with the contribution of different pixels weighted based on the distance from the patch centre.

Both descriptors are invariant to patch luminance: this information has been ignored, since its validity is very local and limited. Scale invariance for texture descriptors is not taken into account, even if scale is partially accounted for by considering a variable-size support for SIFT.

Regarding the support of the SIFT descriptors, one of the problems to face is how to handle the very stretched patches obtained through segmentation, since the support of the original SIFT descriptor is circular. Considering a support proportional to the circle inscribed in the patch leads to inaccurate descriptions for elongated patches, which are often associated with large homogeneous regions. An improvement of the results can be obtained by considering the circumscribed circle, relying on the observation that elongated regions are often delimited by regions similar in aspect (as in the examples shown in Fig. 1). However, this approach tends to introduce errors at object boundaries. A better compromise is obtained by averaging the radii of the inscribed and circumscribed circles when evaluating the patch supports (as the results in Section 6 show). As noticed by other authors (e.g. [15]), colour is not a strong clue for image classes, and few object categories are associable with a single colour. On the other hand, locally, colour can provide strong information on object boundaries: a sharp colour change between two patches gives a strong clue that those patches may belong to different objects. Additionally, up to a certain extent, object instances tend to be homogeneous in colour. For this reason, colour information is used as a support when taking into account patch relationships, but not when learning aspect models (even though for certain concepts, such as grass or sky, it can improve the classification results).
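A sketch of the averaged-support computation, under the assumption (not spelled out in the text) that both circles are centred at the patch centre of gravity: the inscribed radius is then the distance from the centroid to the nearest non-patch pixel, and the circumscribed radius the distance to the farthest patch pixel. The helper name is hypothetical.

```python
import numpy as np

def sift_support_radius(mask):
    """Radius of the circular SIFT support for a patch, centred at the
    patch centre of gravity: the average of the radii of the inscribed
    and circumscribed circles (both assumed centred at the centroid)."""
    ys, xs = np.nonzero(mask)
    cy, cx = ys.mean(), xs.mean()
    r_out = np.hypot(ys - cy, xs - cx).max()           # circumscribed
    bg_ys, bg_xs = np.nonzero(~mask)
    r_in = np.hypot(bg_ys - cy, bg_xs - cx).min()      # inscribed
    return 0.5 * (r_in + r_out)

# Hypothetical elongated patch: a 4 x 20 rectangle inside a 30 x 40 image.
mask = np.zeros((30, 40), dtype=bool)
mask[13:17, 10:30] = True
r = sift_support_radius(mask)
```

For an elongated patch like this one, the averaged radius lies between a thin inscribed disc and an over-large circumscribed one, matching the compromise described above.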

5 From Patch Aspects to Aspect Coherence

To demonstrate the role of aspect coherence in the patching process we started by considering a simple softmax model that treats different patches in an image as independent (no structural information). In a second phase, this model has been enriched with boundary compatibility tables: this is a CRF model, in which connections have to be accurately chosen. Finally, a CRF model taking into account aspect differences between neighbours has been introduced and tested.

5.1 Learning Aspect Models

A discriminative model is used to learn the aspect of each patch independently. A softmax model expresses the probability that a patch j takes the label l, given the observation vector x_j ∈ R^n, as

    p(l | x_j; θ) = e^{θ_l · x_j} / Σ_{l′∈L} e^{θ_{l′} · x_j} .    (3)

The vectors θ_l encode, in the scalar product, the compatibility between the aspect (feature) vector x_j and the label l. The model is called softmax because it is the smoothed version of the function p(l|x) = δ(l, arg max_{l′}(θ_{l′} · x)). The probability of a labelling of the entire image is thus p(l|X) = Π_j p(l_j | x_j; θ). Training and inference in such a model are straightforward, since there are no dependencies between patches. Training can be performed by maximisation of the log-likelihood of the training set, L(θ) = Σ_i log p(l_i | X_i; θ) = Σ_i Σ_j log p(l_ij | x_ij; θ) (the sum is over all the images i in the training set).

5.2 Patches Adjacency

The former model is simple and efficient, but it has an important drawback: the patches are considered as independent while, due to the oversegmentation, this is rarely the case. In particular it often happens, especially for concepts such as grass and sky, that neighbouring patches belong to the same semantic category. This can be taken into account, and the neighbouring relations can be learned, by including in the total labelling probability factors that depend on the labels of pairs of patches. The simplest way to do so is to consider only a dependency of these factors on the specific label assignment, that is, a compatibility look-up table between neighbour labels. The extension of the model introduced with Equation (3) is immediate, introducing a change in the notation. We identify each of the n · |L| elements of θ, one for each pair of label and feature vector element, with an index k. We then introduce selector functions φ^1_k(l, x) that return the feature vector element associated with the index k if the label of the patch is also the one associated with the same index k, or zero otherwise. The total labelling probability then becomes p(l|X) = exp(Σ_j Σ_k θ_k φ^1_k(l_j, x_j)) / Z(X), where Z(X) = Σ_l exp(Σ_j Σ_k θ_k φ^1_k(l_j, x_j)) is a global normalisation factor.
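The independent model of Equation (3) and its per-image log-likelihood can be sketched in a few lines of NumPy (a simplified illustration; the actual training maximises L(θ) over the whole training set with a gradient-based optimiser):

```python
import numpy as np

def softmax_probs(theta, X):
    """p(l | x_j; theta) for every patch: theta is |L| x n, X is m x n."""
    scores = X @ theta.T                            # theta_l . x_j for each l, j
    scores -= scores.max(axis=1, keepdims=True)     # for numerical stability
    e = np.exp(scores)
    return e / e.sum(axis=1, keepdims=True)

def log_likelihood(theta, X, labels):
    """Sum over the patches of one image of log p(l_j | x_j; theta)."""
    p = softmax_probs(theta, X)
    return float(np.log(p[np.arange(len(labels)), labels]).sum())

# Two labels, two-dimensional features: each row of X matches one theta_l.
theta = np.array([[1.0, 0.0], [0.0, 1.0]])
X = np.array([[2.0, 0.0], [0.0, 2.0]])
p = softmax_probs(theta, X)
ll = log_likelihood(theta, X, np.array([0, 1]))
```

Subtracting the row-wise maximum before exponentiating leaves the probabilities unchanged but avoids overflow for large scores.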
In a CRF the probability of a labelling configuration also depends on factors φ^2_k(l_i, l_j, x_i, x_j) related to pairs of patches (pairwise factors), so that

    p(l|X) = e^{Ψ(l, X, θ)} / Z(X; θ) ,    (4)

with

    Ψ(l, X, θ) = Σ_j Σ_k θ_k φ^1_k(l_j, x_j) + Σ_{(i,j)∈E} Σ_k θ_k φ^2_k(l_i, l_j, x_i, x_j) ,    (5)

where E is the set of graph edges. The function Ψ is commonly referred to as the "local function". The simplest configuration, as anticipated, is to use functions φ^2_k that are independent of the aspect of the pair of patches. The coherence of the aspect can then be taken into account by considering either the difference between the vectors x_i and x_j, or their distance according to a particular metric. Including such information in the structure of the functions φ^2_k further enriches the information available to the CRF for the inference. Learning the parameters of such a model is still feasible, depending on the structure of the connections. For structured models such as CRFs, learning in the presence of loopy graphs is problematic due to the fact that there is no exact algorithm to efficiently estimate the correct marginal probabilities of each single node. Algorithms for approximate inference exist, such as Loopy Belief Propagation, but when used together with gradient-ascent iterative maximisation algorithms (the L-BFGS quasi-Newton ascent method [7] is employed in this work) they can introduce inconsistencies in the gradient calculations, preventing the optimisation from finding the correct solution of the problem. To overcome this problem, we considered a loopless graph (a tree). The tree has been obtained by trimming the neighbourhood graph connections via a Minimum Spanning Tree (MST) algorithm run on the graph including all the edges corresponding to neighbouring patches. This algorithm allows us to introduce the aspect coherence mentioned in Section 5 in a simple but effective way. The edges of the graph, for the sake of the MST algorithm, have been weighted according to the distance between the corresponding colour features, thus favouring links between patches with similar colour, which are likely to belong to the same object.

[Figure 2: Some examples of Minimum Spanning Trees based on aspect coherence.]
The metric used to calculate the distance between colour feature vectors is the symmetric Kullback-Leibler divergence, defined as

    D_KLs(P || Q) = D_KL(P || Q) + D_KL(Q || P) ,    (6)

where D_KL is the (asymmetric) Kullback-Leibler divergence

    D_KL(P || Q) = Σ_i P(i) log( P(i) / Q(i) ) .    (7)

Results of the MST algorithm are presented in Fig. 2 for two images from the experimental database. It is possible to notice that different objects are only weakly connected, with most of the edges lying within the same category.
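A compact sketch of the tree construction: the symmetric divergence of Equations (6)-(7) weights each adjacency edge, and Kruskal's algorithm (one of several valid MST algorithms; the paper does not state which one it uses) keeps the cheapest loop-free subset of edges. The hue histograms below are illustrative values, not real descriptors.

```python
import numpy as np

def sym_kl(p, q, eps=1e-12):
    """Symmetric Kullback-Leibler divergence (Equation 6) between histograms."""
    p = np.clip(p / p.sum(), eps, None)
    q = np.clip(q / q.sum(), eps, None)
    return float(np.sum(p * np.log(p / q)) + np.sum(q * np.log(q / p)))

def mst_edges(n_patches, edges, hues):
    """Kruskal's MST on the patch adjacency graph, edges weighted by sym_kl."""
    parent = list(range(n_patches))
    def find(a):                      # union-find with path compression
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    tree = []
    for i, j in sorted(edges, key=lambda e: sym_kl(hues[e[0]], hues[e[1]])):
        ri, rj = find(i), find(j)
        if ri != rj:                  # keep the edge only if it closes no loop
            parent[ri] = rj
            tree.append((i, j))
    return tree

# Four patches in a cycle; patches 0-1 and 2-3 have similar hue histograms.
hues = [np.array([0.80, 0.10, 0.10]), np.array([0.70, 0.20, 0.10]),
        np.array([0.10, 0.20, 0.70]), np.array([0.05, 0.15, 0.80])]
tree = mst_edges(4, [(0, 1), (1, 2), (2, 3), (0, 3)], hues)
```

On this toy cycle the MST drops the most divergent cross-object edge, mirroring the behaviour visible in Fig. 2 where edges rarely cross category boundaries.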

       buil.  grass  tree  sky  cow  plane
SMi    45%    82%    32%   21%  45%  24%
SMc    45%    82%    30%   31%  55%  19%
SMa    45%    80%    38%   28%  58%  21%
SMw    47%    69%    54%   28%  63%  32%

Table 1: Classification results for the softmax model.

6 Experiments

For the experiments we used the Microsoft Research Cambridge Object Recognition Image Database¹. We considered only 6 categories, namely building, grass, tree, sky, cow and airplane. The reason is that other categories are either less represented in the database, or their neighbourhood ground-truth information is missing. The segmentation examples in Fig. 1 are taken from this database, as is the ground truth data. The first results relate to the softmax model described in Section 5.1, and are reported in Table 1. The first three rows show the classification results with different choices for the support of the SIFT descriptor: proportional to the inscribed circle, to the circumscribed circle, and to their average, for SMi, SMc and SMa respectively, as discussed in Section 4. Classification results generally improve when considering circumscribed circles as supports, especially for concepts that tend to have a smooth appearance, like sky or grass. Further improvements in the results for the SMa model are due to the reduction of noise associated with edges outside of the considered patches. Results are particularly good for the grass concept, primarily because this concept is well characterised by a texture descriptor such as SIFT, and secondarily due to the high frequency of grass patches in the training database. This is an unwanted behaviour, especially because the grass concept is of somewhat little interest to the potential user. A way to partially unbias the learning of concepts is to weight single examples (images) in the likelihood according to the contained concepts. We did so by introducing a weighting vector w_c whose elements are the reciprocals of the frequencies of the categories in the entire database, i.e. w_c,j = 1/p(l_j). If the category distribution in image i is represented by the vector p_li, the weight for the likelihood of the i-th image is w_c · p_li. Results obtained with this model are shown in the last row (SMw) of Table 1.
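The per-image weighting can be sketched as follows, with w_c computed from the category counts over the whole training set (assuming every category appears at least once) and each image weight given by the dot product w_c · p_li:

```python
import numpy as np

def image_weights(label_lists, n_classes):
    """Weight of each training image: w_c . p_li, with w_c holding the
    reciprocal category frequencies over the whole training set."""
    counts = np.zeros(n_classes)
    for labels in label_lists:
        counts += np.bincount(labels, minlength=n_classes)
    w_c = counts.sum() / counts              # w_c[l] = 1 / p(l); assumes counts > 0
    weights = []
    for labels in label_lists:
        p_li = np.bincount(labels, minlength=n_classes) / len(labels)
        weights.append(float(w_c @ p_li))
    return np.array(weights)

# Toy set: the second image contains the rarer category and is weighted up.
w = image_weights([np.array([0, 0, 0, 0]), np.array([0, 0, 1, 1])], 2)
```

Images containing rare categories thus contribute more to the weighted log-likelihood, counteracting the bias towards frequent concepts such as grass.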
It is possible to notice an improvement in the fairness of the classification. Experiments on structured models have also been run, considering the MST structure introduced in Section 5.2 for the CRFs. In Table 2, results are shown both for the CRF model with pairwise functions φ^2_k depending only on patch labels (CRFLUT), and for the model depending on both the labels and the colour features already used for the MST (CRFHUE). It is possible to notice a dramatic improvement of the results for the first model, compared to the results in Table 1. However, this is not the case for the second model. Although in this case more information is available, the results

¹ Available at: http://research.microsoft.com/vision/cambridge/recognition/.

         buil.  grass  tree  sky  cow  plane
CRFLUT   48%    95%    73%   56%  71%  26%
CRFHUE   39%    69%    63%   36%  69%  24%

Table 2: Classification results for the CRF model, with pairwise functions depending only on labels (CRFLUT) and on both labels and colour features (CRFHUE).

are less convincing. This can be explained by the fact that, considering only the pairwise connections present in the MST, the difference between the two vectors is often very small and the (noisy) uninformative content is not negligible. On the other hand, the pairwise functions depending only on the labels simply enforce a smooth change in neighbour labelling, and they are therefore effective. The only exception to the improvements is the airplane concept, for which most of the pixels are misclassified, the performance being poorer than for the softmax model in Table 1. This is due to the fact that airplane patches are often different in colour (different parts of the airplane having different colours) and not strongly connected (the airplane does not occupy a large area in the scene). Additionally, the characterisation of the airplane through SIFT descriptors is not very effective. In Fig. 3, two examples of patching on test set images for the CRFLUT model are shown.

7 Conclusions

In this paper we introduced a system to patch regions according to their semantic content. The system uses a CRF, a discriminative probabilistic model, to account for patch proximity. A spectral clustering algorithm is used as a preprocessing and segmentation step to find accurate region boundaries and provide simplified, aggregated data for the inference algorithm to work on. One of the novelties of the model is to sample connections between patches in order to perform fast and accurate inference, optimising the choice of the considered dependencies. This sampling step is based on aspect coherence between neighbouring patches, defined as the difference between colour feature vectors. The model proves to be effective, showing good improvements in the labelling process. The model presents different interesting directions for improvement, one of the most promising being the utilisation of aspect coherence in association with more complex graphical structures. This can be done by directly considering the additional information in the CRF framework, although a modified learning algorithm has to be devised in this case, because exact inference on the model is no longer possible.

References [1] C. Fowlkes and J. Malik. How much does globalization help segmentation? Technical report, Division of Computer Science, University of California, Berkeley, July 2004.

[Figure 3: Examples of labelling on test set images, for the CRF model depending only on neighbouring patch labels.]

[2] X. He, R. S. Zemel, and M. A. Carreira-Perpinan. Multiscale conditional random fields for image labeling. In CVPR '04, volume 2, pages 695–702, 2004.

[12] G. Passino, I. Patras, and E. Izquierdo. On the role of structure in part-based object detection. In Proceedings of ICIP 2008 (to appear), 2008.

[3] X. M. He, R. S. Zemel, and D. Ray. Learning and incorporating top-down cues in image segmentation. In European Conference in Computer Vision (ECCV), May 2006.

[13] I. Patras, E. A. Hendriks, and R. L. Lagendijk. Video segmentation by map labeling of watershed segments. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 23(3):326–332, 2001.

[4] T. Hofmann. Unsupervised learning by probabilistic latent semantic analysis. Mach. Learn., 42(1-2):177–196, 2001.

[5] J. Lafferty, A. McCallum, and F. Pereira. Conditional random fields: Probabilistic models for segmenting and labeling sequence data. In Proc. 18th International Conf. on Machine Learning, pages 282–289. Morgan Kaufmann, San Francisco, CA, 2001.

[6] S. Lazebnik, C. Schmid, and J. Ponce. Beyond bags of features: Spatial pyramid matching for recognizing natural scene categories. In CVPR '06, pages 2169–2178, 2006.

[7] D. Liu and J. Nocedal. On the limited memory BFGS method for large scale optimization. Mathematical Programming B, 45(3):503–528, 1989.

[8] D. G. Lowe. Distinctive image features from scale-invariant keypoints. Int. J. Comput. Vision, 60(2):91–110, 2004.

[9] M. Marszałek and C. Schmid. Spatial weighting for bag-of-features. In IEEE Conference on Computer Vision & Pattern Recognition, 2006.

[10] D. R. Martin, C. C. Fowlkes, and J. Malik. Learning to detect natural image boundaries using local brightness, color, and texture cues. Pattern Analysis and Machine Intelligence, IEEE Transactions on, 26(5):530–549, 2004.

[11] K. Mikolajczyk and C. Schmid. A performance evaluation of local descriptors. IEEE Trans. Pattern Anal. Mach. Intell., 27(10):1615–1630, October 2005.

[14] J. Shi and J. Malik. Normalized cuts and image segmentation. IEEE Transactions on Pattern Analysis and Machine Intelligence, 22(8):888–905, 2000. [15] J. Shotton, J. Winn, C. Rother, and A. Criminisi. Textonboost: Joint appearance, shape and context modeling for multi-class object recognition and segmentation. In European Conference on Computer Vision, pages 1–15, 2006. [16] J. Sivic, B. Russell, A. A. Efros, A. Zisserman, and B. Freeman. Discovering objects and their location in images. In International Conference on Computer Vision (ICCV 2005), October 2005. [17] I. Ulusoy and C. M. Bishop. Generative versus discriminative methods for object recognition. In Proceedings of the 2005 IEEE Computer Society Conference on Computer Vision and Pattern Recognition, pages 258–265, Washington, DC, USA, 2005. IEEE Computer Society. [18] J. van de Weijer and C. Schmid. Coloring local feature extraction. In European Conference on Computer Vision, volume Part II, pages 334–348. Springer, 2006. [19] J. Verbeek and B. Triggs. Region classification with markov field aspect models. In Computer Vision and Pattern Recognition, 2007. CVPR ’07. IEEE Conference on, pages 1–8, 2007. [20] J. Verbeek and B. Triggs. Scene segmentation with crfs learned from partially labeled images. In Advances in Neural Information Processing Systems, 2007.
