Reducing Label Cost by Combining Feature Labels and Crowdsourcing

Jay Pujara  [email protected]
Ben London  [email protected]
Lise Getoor  [email protected]
Dept. of Computer Science, University of Maryland, College Park, MD, USA 20742

Abstract

Decreasing technology costs, increasing computational power and ubiquitous network connectivity are contributing to an unprecedented increase in the amount of available data. Yet this surge of data has not been accompanied by a complementary increase in data annotation. This lack of labeled data complicates data mining tasks in which supervised learning is preferred or required. In response, researchers have proposed many approaches to cheaply construct training sets. One approach, referred to as feature labels (McCallum & Nigam, 1999), chooses features that strongly correlate with the label space to create high-precision examples that bootstrap the learning process. Another technique, crowdsourcing, exploits our ever-increasing connectivity to request annotation from a broader community (who may or may not be domain experts), thereby refining and expanding the labeled data. Together, these techniques provide a means to obtain supervision from large, unlabeled data sources. In this paper, we combine feature labels from domain experts and instance labels from crowdsourcing in a unified framework which we call active bootstrapping. We show that this technique produces more reliable labels than either approach individually, resulting in a better classifier at minimal cost. We demonstrate the efficacy of our approach through a sentiment analysis task on data collected from the Twitter microblog service.

Presented at the ICML 2011 Workshop on Combining Learning Strategies to Reduce Label Cost, Bellevue, WA, USA, 2011. Copyright 2011 by the author(s)/owner(s).

1. Introduction

A longstanding problem in supervised learning is finding labeled data. Data is produced in high volumes from sources as varied as sensor networks and mobile phone users. Each dataset can be used for many possible applications, from intrusion detection to sentiment analysis. Even if labor is expended to meticulously label data, the training set may not adequately represent the distribution of test instances. For all these reasons, methods to cost-effectively produce training data are an important component of machine learning research.

Many researchers have considered the problem of scarce training data. Approaches can be broadly divided between those that find cheaper ways of acquiring training labels and those that design algorithms that benefit from unlabeled data, with a large body of research that combines both approaches (Seeger, 2001).

1.1. Acquiring Labels Cheaply

Data annotation can be expensive and time-consuming, requiring hours of manual inspection by a domain expert. To reduce this overhead, researchers have developed clever strategies for acquiring labeled data at minimal cost. One such strategy is to employ a domain-specific heuristic to produce labels. A common heuristic, feature labels (McCallum & Nigam, 1999), selects a set of features that are strongly associated with the output space and annotates instances containing these features accordingly.


For example, in the context of sentiment analysis, the keyword "overjoyed" is strongly correlated with the class Happy; as such, all instances containing the keyword "overjoyed" could be labeled as Happy with high confidence. Heuristic rules work for any amount of unlabeled data and can often be reused on different datasets.

Another recent innovation is crowdsourcing, which brings the labeling task to a broader community of willing, motivated participants. This is often accomplished by rewarding volunteers for their service, either monetarily or by designing a game around the task at hand. Not only is this incredibly cost-effective, it is also highly parallelizable; given the multitudes of potential participants all over the world (connected via the internet), annotation can be orders of magnitude faster than would be possible with just a handful of domain experts.

An interesting distinction between these approaches is the precision and recall of the acquired labels. Generally, the heuristics used to create feature labels are chosen to have high precision, i.e. to strongly correlate with a given label. However, these features often have low recall, applying to only a small subset of the instance space. On the other hand, crowdsourced labels can be acquired uniformly across the instance space, thus providing high recall. Yet with no guarantee on the reliability of the labels, the precision of crowdsourcing can be low.

1.2. Leveraging Unlabeled Data

There are essentially two approaches for leveraging unlabeled data in supervised learning: incorporating the unlabeled examples directly into the model (i.e. semi-supervised learning) or acquiring more labels. For the latter, we explore two popular strategies: bootstrapping and active learning.

Bootstrapping is an iterative process of training and evaluation. First, a highly selective, high-precision training set is used to train a model. This model is then used to predict the labels of the unlabeled set. The intermediate predictions with the highest confidence are then used to supplement the existing training set in the next iteration. While the theoretical underpinnings of bootstrapping are not well explored (Daumé III, 2007), a possible advantage is that the training set is augmented with the most polarized instances, thus reducing ambiguity in the training set.

In active learning, the learner is able to influence the distribution of training examples. Starting with a completely (or partially) unlabeled instance space, the learner iteratively requests the labels of a chosen sequence of examples. Intuitively, this allows the learner to focus on the examples it finds most ambiguous. Typically, the most uncertain instances are those that lie close to the decision boundary. Querying for these labels helps to better define the optimal decision boundary, resulting in a better model. Furthermore, by focusing less on the obvious examples (those further from the boundary), active learning reduces the sample complexity.

1.3. Combining Approaches

In practice, it is not uncommon to explore cheap, effective annotation strategies while also leveraging unlabeled data. In linguistics tasks, such as semantic analysis, feature labels are often combined with bootstrapping. A set of keywords strongly associated with a type of document is defined, and these labels are used as the seed training set for a classifier trained through bootstrapping (McCallum & Nigam, 1999). Active learning and crowdsourcing have a similar synergy, where uncertain predictions on unlabeled data are converted to crowdsourcing queries, which are then labeled by participants on the Internet (Ambati et al., 2010; Quinn et al., 2010).

1.4. Drawbacks

One should note that the aforementioned techniques are not without risk. For bootstrapping and feature labels, the choice of high-precision examples often results in high inductive bias and poor generalization. As the number of bootstrapping iterations increases, the potential for overfitting to a small portion of the instance space also rises. Both methods are based on human intuition about the distinguishing features, and while human reasoning has very high precision, it has significantly lower recall. Other active approaches in this setting have used unsupervised methods to suggest features to use as labels (Liu et al., 2004). Our hypothesis is that active learning via crowdsourcing will improve recall by introducing labeled data that bootstrapping and feature labels failed to produce.

In crowdsourcing, the primary concern is the quality of the acquired data. Participants may not be familiar with the data or rigorously trained in the labeling procedure, resulting in noisy labels. Tasks with a concrete objective (e.g. is this person happy?) are often easier to pose on crowdsourcing platforms than open-ended questions (e.g. what words show this person is happy?).


In a realistic setting, the noise increases as the examples approach the decision boundary, which is precisely the region of the instance space active learning explores. As such, we require that an active learning algorithm be robust to moderate levels of random classification noise (RCN). Recent theoretical results (Castro & Nowak, 2006; 2007) prove that it is indeed possible to learn an ε-optimal classifier in the presence of unbounded RCN, with an exponential dependence on the noise margin. Further results (Balcan et al., 2006) have shown that an exponential decrease in sample complexity is realizable when the noise rate is sufficiently low or a high constant, though no improvement can be made under certain high-noise conditions (Kääriäinen, 2006).

Finally, while both feature labels and crowdsourcing are cheaper than traditional approaches to labeling, they can still incur significant costs. As previously mentioned, selecting feature-label heuristics requires attention from a domain expert to identify trends and correlations, often involving hours of manual data mining. Similarly, in preparing a crowdsourcing task, one must translate data between formats, write training instructions, and validate results. Thus, though these approaches offload much of the work, they still require a considerable investment in skilled labor that should not be overlooked.

2. Active Bootstrapping

Let D denote a distribution over an instance space X ⊆ R^d. We assume the existence of a deterministic mapping c : X → Y, where Y is a finite set of labels. In the context of sentiment analysis, Y = {1, −1}. The goal is to train a classifier h : X → Y that minimizes the expected error Pr_{x∼D}[c(x) ≠ h(x)].

Algorithm 1 illustrates our proposed technique, which we refer to as active bootstrapping. We are given a set U ∼ D^n, consisting of n examples sampled independently and identically according to D. We are also given a heuristic F : X → Y mapping certain features to labels. More precisely, if x ∈ X contains a certain feature that is strongly correlated with a y ∈ Y, then F(x) = y; otherwise, F(x) outputs a null value. We thus begin by invoking F(x) for all x ∈ U. Let S denote the set of instances for which F(x) returned a value. We then update U by removing all instances also found in S, leaving only unlabeled examples in U.

The algorithm then iterates over the following steps. A classifier h is trained on S. Next, h is used to predict labels for the remaining unlabeled examples in U. From this result, the top-k most confident predictions from each class are added to S. Similarly, the top-(αk) most uncertain predictions are crowdsourced to obtain (possibly noisy) labels. U is then updated by removing all instances from S. The algorithm terminates when the maximum number of iterations, Tmax, is reached.

Algorithm 1 Active Bootstrapping: augments training data with active learning and bootstrapping
Require: Unlabeled data U ⊆ R^d
Require: Heuristic mapping F : X → Y
Require: Constants k, α and Tmax
  S ← instances of U with features from F, and their labels
  U ← U − S
  for t = 1 to Tmax do
    Train a classifier h on S
    Predict labels on U using h
    S ← S ∪ {top-k positive instances}
    S ← S ∪ {top-k negative instances}
    S ← S ∪ {crowdsourced responses for the top-(αk) uncertain predictions}
    U ← U − S
  end for
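To make the procedure concrete, the following is a minimal sketch of Algorithm 1 in Python, assuming a scikit-learn-style probabilistic classifier. The helpers feature_label (the heuristic F), crowdsource (a query to the crowd that may return no label) and featurize (a fixed feature extractor) are hypothetical placeholders rather than part of the paper.

# Minimal sketch of Algorithm 1 (active bootstrapping); feature_label,
# crowdsource and featurize are hypothetical user-supplied callables.
import numpy as np
from sklearn.linear_model import LogisticRegression

def active_bootstrap(texts, feature_label, crowdsource, featurize,
                     k=50, alpha=1.0, t_max=8):
    # Seed S with the instances matched by the feature-label heuristic F.
    labeled = [(t, feature_label(t)) for t in texts
               if feature_label(t) is not None]
    unlabeled = [t for t in texts if feature_label(t) is None]

    for _ in range(t_max):
        X = featurize([t for t, _ in labeled])
        y = np.array([l for _, l in labeled])
        clf = LogisticRegression().fit(X, y)

        # Score the remaining unlabeled pool; column 1 is P(y = +1).
        probs = clf.predict_proba(featurize(unlabeled))[:, 1]

        # Exploit: top-k most confident predictions from each class.
        order = np.argsort(probs)
        confident = ([(unlabeled[i], 1) for i in order[-k:]] +
                     [(unlabeled[i], -1) for i in order[:k]])

        # Explore: crowdsource the top-(alpha * k) most uncertain predictions.
        uncertain = np.argsort(np.abs(probs - 0.5))[:int(alpha * k)]
        crowd = [(unlabeled[i], crowdsource(unlabeled[i])) for i in uncertain]
        crowd = [(t, y_c) for t, y_c in crowd if y_c is not None]

        # S <- S with the new labels; U <- U - S.
        new = confident + crowd
        labeled += new
        used = {t for t, _ in new}
        unlabeled = [t for t in unlabeled if t not in used]

    return clf, labeled

In this sketch, confidence is the predicted class probability and uncertainty is its distance from 0.5, so the same scores drive both the exploitation and the exploration steps.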

2.1. Our Contribution

As mentioned earlier, label acquisition strategies have differing strengths, particularly in terms of the precision and recall of the labels. Feature labels are assumed to be very high-precision indicators of class membership, but suffer from poor recall; crowdsourcing can potentially expand the recall, but suffers from RCN, which hampers its precision. Our hypothesis is that by combining both strategies, we can balance precision and recall. Selecting the top-(αk) uncertain predictions (across the entire sample) explores the instance space, thereby improving recall, whereas selecting the top-k most confident predictions exploits the obvious examples to improve precision. Active bootstrapping performs both exploration and exploitation simultaneously, thus generating higher quality training data without the costs associated with domain-expert annotation.

3. Evaluation

The following section outlines the experimental procedure used to validate our hypothesis, namely that a classifier trained with both heuristic and crowdsourced labels will outperform a classifier using either strategy alone. To do so, we created a classification task involving textual data ("tweets") from Twitter's microblogging network.


Our objective was to distinguish between happy and sad emotional content, using emoticons to generate feature labels and Amazon's Mechanical Turk system to crowdsource annotations.

3.1. Dataset

We acquired Twitter data from a corpus included in the Stanford Large Network Dataset Collection, as reported in (Yang & Leskovec, 2011). This dataset is believed to contain approximately 20% of all publicly visible tweets (476M tweets) from a period spanning June to December 2009. Since our goal was to predict emotional content and many tweets are objective statements, we filtered the data using a set of emotional indicators, specifically emoticons. Filtering for emoticons resulted in 41M tweets, from which we sampled 1% to obtain a more tractable dataset. Each instance in this set consisted of a user ID, a timestamp and a tweet. We balanced the number of positive and negative instances and applied the normalization steps described in subsection 3.2 to yield a total of 77,920 instances (39,033 negative and 38,887 positive), used as unlabeled data in our experiments. For evaluation, we manually labeled a set of 500 tweets processed in the same way.

3.2. Normalization

Since Twitter is used for a variety of purposes, it was first necessary to separate messages that communicated a personal state of being from those that were likely focused on just sharing information. To this end, we removed tweets that contained URLs, under the assumption that they were less likely to be emotional. We also removed tweets that were shorter than 40 characters in total. To reduce the lexicon even further, we removed all punctuation, emoticons, username mentions, hashtags, HTML escape sequences, and non-ASCII characters. We then lowercased all text to provide a uniform view. This data was used for classification as well as for crowdsourcing.

Learning a lexicon from Twitter data can be difficult. The tweet length limitation has given rise to a whole new language of acronyms, abbreviations and phonetic spellings. The informal context makes users less attentive to proper spelling, and "morphological emphasis" is commonly used, e.g. "yay" becomes "yaaaay", or "yes" becomes "yessss". To remove infrequent terms and misspellings, we removed all terms with frequency below the mean (the two lower quartiles). Since it is rare for the same character to appear more than twice consecutively in English words, we replaced three or more consecutive occurrences of a character with a single character.
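A minimal sketch of the per-tweet normalization described above is given below; the corpus-level rare-term filter is omitted, and the regular expressions are illustrative choices rather than the exact patterns used in our pipeline.

# Minimal sketch of the per-tweet normalization in Section 3.2.
import re

URL_RE     = re.compile(r"https?://\S+|www\.\S+")
MENTION_RE = re.compile(r"@\w+")
HASHTAG_RE = re.compile(r"#\w+")
HTML_RE    = re.compile(r"&\w+;")            # HTML escape sequences
PUNCT_RE   = re.compile(r"[^\w\s]")          # punctuation and emoticons
REPEAT_RE  = re.compile(r"(.)\1{2,}")        # runs of 3+ identical characters

def normalize(tweet, min_len=40):
    # Drop likely informational tweets (URLs) and very short tweets.
    if URL_RE.search(tweet) or len(tweet) < min_len:
        return None
    text = tweet.encode("ascii", "ignore").decode()       # strip non-ASCII
    for pattern in (MENTION_RE, HASHTAG_RE, HTML_RE, PUNCT_RE):
        text = pattern.sub(" ", text)
    text = REPEAT_RE.sub(r"\1", text)                     # "yaaaay" -> "yay"
    return " ".join(text.lower().split())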

3.3. Feature Labels

We assume that certain lexical features (in this case, emoticons) serve as a proxy for emotional content in tweets, and can thus be used in lieu of manually assigned labels. To achieve as much recall as possible from these features, we used the list of emoticons found on Wikipedia (Wikipedia, 2010), which is fairly comprehensive. From this we obtained 97 emoticons, which we manually partitioned into 66 Happy and 31 Sad. We then produced a regular expression mapping to this list of emotional indicators. Because we have not manually confirmed the labels, we refer to them as feature labels.

3.4. Crowdsourcing

To acquire crowdsourced labels, we created a "Human Intelligence Task" on Amazon's Mechanical Turk. Users of the service received compensation of between five and ten cents for labeling a series of ten tweets with the labels Happy, Sad or Neither. One of the ten tweets shown to each user had already been labeled by the authors; responses with an incorrect label for this tweet were discarded and the user did not receive payment for them. This acted as a simple quality control for filtering out bad data from disinterested or exploitative users. Furthermore, we required that each crowdsourced tweet used in training be labeled by a minimum of two users; as such, each label would either be supported or invalidated (producing RCN) by another label.

3.5. Experiments

We compared an active bootstrapping approach with baseline results from bootstrapping, using seed sets generated from feature labels or crowdsourced labels. When bootstrapping with feature labels, we used initial seed sets of 1,000, 2,000, and 10,000 feature-labeled instances. When using crowdsourced labels, we requested a total of 2,000 labels on 1,000 instances randomly chosen from the training set. Approximately 1,600 labels were acquired through crowdsourcing, and after employing the validation steps described in subsection 3.4, a total of 670 labels remained. At each iteration, the training set was augmented with a number of instances equal to 10% of the seed set size. These instances were drawn from the most confident predicted labels on unlabeled data, sampled equally from the positive and negative classes. Bootstrapping was run for 8 iterations.
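For concreteness, the following is a minimal sketch of the emoticon-based heuristic used to generate these feature-labeled seed sets. It is written to match the feature_label placeholder in the earlier algorithm sketch, and the emoticon lists are abbreviated, illustrative subsets rather than the full 97-entry list described in Section 3.3.

# Minimal sketch of the emoticon heuristic F from Section 3.3; the
# emoticon lists below are small illustrative subsets.
import re

HAPPY_EMOTICONS = [":)", ":-)", ":D", "=)", ";)"]
SAD_EMOTICONS   = [":(", ":-(", ":'(", "=("]

HAPPY_RE = re.compile("|".join(re.escape(e) for e in HAPPY_EMOTICONS))
SAD_RE   = re.compile("|".join(re.escape(e) for e in SAD_EMOTICONS))

def feature_label(tweet):
    # Return +1 (Happy), -1 (Sad), or None when no unambiguous match exists.
    happy = HAPPY_RE.search(tweet) is not None
    sad   = SAD_RE.search(tweet) is not None
    if happy and not sad:
        return 1
    if sad and not happy:
        return -1
    return None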


To evaluate active bootstrapping, we used the same 1,000 feature-labeled instances from the baseline classifier as a seed training set. Following Algorithm 1, we augmented our training set with 100 instances from predicted labels on unlabeled data. We also augmented the training set with approximately 100 instances that were queried through crowdsourcing. These queries were generated from unlabeled instances with uncertain predictions. Each instance was labeled by two users, yielding 200 total labels. Since we applied a data quality filter, the number of crowdsourced instances actually added could vary at each iteration.

3.6. Results

Through the experiments described above, we validate intuitions about the feature label and crowdsourcing approaches and demonstrate that active bootstrapping provides an advantage over bootstrapping, using the same potentially noisy or biased seed sets. Table 1 shows the test error on our manually labeled evaluation set for the different approaches. Bootstrapping with an initial set of crowdsourced labels performs poorly compared to seed sets generated from feature labels. This suggests that noise inherent in the crowdsourcing approach, amplified by bootstrapping, can subvert learning. However, the test error using feature-labeled data also increases over the bootstrapping process, which may indicate the hazards of overfitting and inductive bias arising from the choice of high-precision heuristics. Increasing the seed set using more feature labels can improve performance, but error still increases over bootstrapping iterations. In contrast, active bootstrapping achieves the lowest error in this experiment, improving over bootstrapping iterations despite starting out with a higher initial error than some of the baselines.

The progress of active bootstrapping and conventional bootstrapping over iterations of the bootstrapping algorithm is shown in Figure 1. While conventional bootstrapping increases test error over iterations, active bootstrapping decreases test error.

Figure 1. Test error over bootstrapping iterations. Without active labeling, bootstrapping error increases; by using crowdsourced labels at each iteration, the error remains in a tighter range and eventually decreases.

Using a diverse seed set of crowdsourced labels produces poor results, with an error of .478 after 8 iterations. This is perhaps due to the influence of high RCN in crowdsourced labels. Meanwhile, the crowdsourced label noise does not appear to adversely impact active bootstrapping, which attains an error rate of .292. This is also superior to bootstrapping with feature labels (when using the same seed set), which attains an error of .367. Thus, the addition of crowdsourced labels, rather than bootstrapped labels alone, noticeably improves the learned model.

Method                        Err0    Err8    Cost
Feature Labels, s=1K          .332    .367    $50
Feature Labels, s=2K          .302    .353    $100
Feature Labels, s=10K         .295    .348    $500
Crowdsourcing, s=670          .374    .478    $40
Active Bootstrapping, s=1K    .332    .292    $91

Table 1. Error on a Twitter sentiment analysis task with a seed set of size s, before (Err0) and after (Err8) 8 iterations of various bootstrapping algorithms. The error is reported on a manually labeled test set. Cost is estimated based on actual costs or realistic assumptions for our application (see Section 3.6). In all cases, traditional bootstrapping causes the test error to increase, while active bootstrapping decreases it.

When using a slightly larger seed set of 2,000 feature-labeled instances and adding 200 bootstrapping instances at each iteration (an upper bound on the amount of data added by active bootstrapping), the resulting error of .353 still exceeds the error of active bootstrapping. This suggests that the difference in performance is not due to the number of instances added each iteration, but rather to the quality or informativeness of those instances. Even with a very large seed set of 10,000 instances, with 1,000 labels added in each iteration, bootstrapping with feature labels results in increased error relative to active bootstrapping.

While a rigorous analysis of the real-world cost of each of these strategies is difficult, concrete numbers for our application are shown in Table 1. The cost of feature labels, or more generally heuristic rules, can be considered a function of the desired coverage of the heuristically labeled set. We assume modestly paid ($25/hr) graduate students generate high-precision heuristic rules, and that each hour of writing heuristics yields 500 heuristic-labeled instances. Creating a crowdsourcing task, writing instructions, and using scripts to generate and validate data on the Mechanical Turk framework required an hour of effort ($25). The cost of the crowdsourced seed set was $15, while the crowdsourcing component of active bootstrapping cost $16. Our results suggest that a small set of heuristic rules coupled with crowdsourcing provides a cost-effective method of generating training data while reducing classification error.
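For concreteness, the Cost column in Table 1 can be decomposed as follows (a reconstruction from the rates stated above): heuristic labeling costs $25/hr at 500 instances/hr, i.e. $0.05 per instance, giving $50, $100 and $500 for seed sets of 1K, 2K and 10K; the crowdsourced baseline costs $25 of task setup plus $15 of labels, for $40 in total; and active bootstrapping costs $50 for its feature-labeled seed set plus $25 of task setup plus $16 of crowdsourced queries, for $91 in total.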

4. Conclusion

We have presented a framework for acquiring labeled data from inexpensive sources, as well as leveraging unlabeled data. Feature labels and crowdsourcing provide two inexpensive methods of acquiring labeled data with complementary strengths. Bootstrapping and active learning both leverage unlabeled data to improve classifier performance.


Our method, active bootstrapping, balances the precision and recall advantages of both labeling techniques by exploiting unlabeled data, thereby improving the overall quality of the training data at minimal expense. We have tested our hypothesis on a real-world dataset, showing a marked improvement over baseline methods. In particular, our findings support the assertions that heuristics succumb to overfitting and bias and that crowdsourced judgments are susceptible to considerable noise. By combining these two approaches, active bootstrapping is able to achieve superior performance.

In future work, we hope to provide a more rigorous theoretical justification for the benefits of active bootstrapping. One approach to this end is to construct a simple abstraction that illustrates analytically how bootstrapping with feature labels complements active learning with a potentially noisy oracle. Additionally, further experiments comparing different methods across different domains, with realistic cost estimates, could provide a useful perspective on the best methods for producing labels. Comparing against methods such as traditional active learning, or adapting active approaches for acquiring feature labels (Liu et al., 2004) to a crowdsourced model, could provide additional improvements. Increasing the interaction between the two labeling strategies is also an interesting direction, and we hope to explore iteratively adaptive feature labels, where feature labels are updated based on the results of each crowdsourced query.

References

Ambati, V., Vogel, S., and Carbonell, J. Active learning and crowd-sourcing for machine translation. In LREC, 2010.

Balcan, Maria-Florina, Beygelzimer, Alina, and Langford, John. Agnostic active learning. In ICML, pp. 65–72, 2006.

Castro, Rui M. and Nowak, Robert. Upper and lower bounds for active learning. In Allerton Conference on Communication, Control and Computing, Allerton House, University of Illinois, 2006.

Castro, Rui M. and Nowak, Robert D. Minimax bounds for active learning. In COLT, pp. 151–156, 2007.

Daumé III, Hal. Bootstrapping, 2007. URL http://nlpers.blogspot.com/2007/09/bootstrapping.html.

Kääriäinen, Matti. Active learning in the non-realizable case. In NIPS Workshop on Foundations of Active Learning, 2006.

Liu, Bing, Li, Xiaoli, Lee, Wee Sun, and Yu, Philip S. Text classification by labeling words. In AAAI, pp. 425–430, 2004.

McCallum, Andrew and Nigam, Kamal. Text classification by bootstrapping with keywords, EM and shrinkage. In ACL99 Workshop for Unsupervised Learning in Natural Language Processing, pp. 52–58, 1999.

Quinn, A.J., Bederson, B.B., Yeh, T., and Lin, J. CrowdFlow: Integrating machine learning with Mechanical Turk for speed-cost-quality flexibility. Technical Report HCIL-2010-09, University of Maryland, College Park, 2010.

Seeger, Matthias. Learning with labeled and unlabeled data. Technical report, University of Edinburgh, 2001.

Wikipedia. List of emoticons, 2010. URL http://en.wikipedia.org/wiki/List_of_emoticons.

Yang, Jaewon and Leskovec, Jure. Patterns of temporal variation in online media. In WSDM, 2011. URL http://ilpubs.stanford.edu:8090/984/.
