
Automated Textual Descriptions for a Wide Range of Video Events with 48 Human Actions

P. Hanckmann, K. Schutte, G.J. Burghouts

TNO, The Hague, The Netherlands

Abstract. Presented is a hybrid method to generate textual descriptions of video based on actions. The method includes an action classifier and a description generator. The aim of the action classifier is to detect and classify the actions in the video, such that they can be used as verbs by the description generator. The aim of the description generator is (1) to find the actors (objects or persons) in the video and connect them correctly to the verbs, such that they represent the subject and the direct and indirect objects, and (2) to generate a sentence based on the verb, subject, and direct and indirect objects. The novelty of our method is that we combine the discriminative power of a bag-of-features action detector with the generative power of a rule-based action descriptor. We show that this approach outperforms a homogeneous setup with the rule-based action detector and action descriptor.

1 Introduction

This paper proposes a method to generate textual action descriptions from general videos. The action descriptions are centered around 48 verbs such as walk, bury, approach, give, etc. [1]. The amount of video data is increasing daily, both on the internet (e.g. YouTube) and in surveillance applications, which poses a challenge for extracting information from this huge bulk of data. In this paper, we consider the automated search for relevant events in videos. One determinant of an event's relevancy is the action that is performed by humans in the scene. We argue that most events are characterized by multiple actions, not a single one. A typical event is that one person approaches the other, walks up to the other person, and gives something. These actions, 'walk', 'approach', 'give' and 'receive', occur in a particular order and are partially overlapping. Moreover, there are two persons in this event. In general, events may consist of multiple actions and be performed by one or more persons. Such events are the topic of this paper. Therefore, we progress beyond single-actor datasets such as KTH [2] and Weizmann [3]. The UCF Sports [4], Hollywood2 [5] and YouTube [6] datasets are much more challenging as they involve interactions with other people and items and the recording conditions are harder. Yet they lack the realistic property of having video events which comprise multiple actions. We consider the DARPA dataset

This work has been sponsored by DARPA, Mind’s Eye program.

[1], in which video clips are annotated in terms of 48 human actions and each event consists of 7 actions on average. In this paper, we consider the automated tagging of realistic video events. We propose a method that produces textual descriptions. The reason for this is that text is intuitive: the popularity of the Google search engine stems from the fact that it enables a user to perform a text-based search. Our method produces descriptions that cover a wide range of events; they are not limited to a particular domain, and they are based on 48 generic human actions. Figure 1 illustrates the textual descriptions.

Ground truth examples: Man flees while woman chases him. A man and woman stand side by side, the man begins running and the woman follows him. One person is running and leaving. The other person starts chasing. Our system response: Person 4 goes. Person 4 leaves. Person 4 walks. Person 7 flees from person 4.

Fig. 1. The image shows two people who chase each other. Next to the image the ground truth provided by 3 different people is printed. Our system response provides the detected actions as verbs in a short sentence with their connected subject and object. It shows that our system response captures the essence of the action as described in the ground truth.

Prior research on creating textual descriptions from video has been focused on:

– using speech recognition to generate video subscriptions [7],
– detecting and extracting text which is present in the video [7],
– detecting patterns in a restricted environment and using the detected patterns to generate a description [8, 9].

The first two options generate a description based on what can be read or heard in a video. In this paper we rather aim to deduce these descriptions from the video data itself. The third option has only been applied in action-restricted environments (e.g. video data from sports, in which strict rules apply), where detecting the state of the game directly translates into a description. Behavior seen in general videos is not as structured. The method proposed in this paper includes a detector for 48 human actions and a generic descriptor that generates sentences for a wide range of events based on these actions. The approach in [10] is also generic, but there are three limitations compared to our method: (1) it has a strong focus on describing the environment and the subjects' emotional states, whereas in this paper we do not exploit emotional states as they do not occur in

the considered 48 human actions, (2) it assumes that the subject is always a person, whereas our method generalizes subjects to both people and vehicles, and (3) we extend the set from 5 actions to 48 actions: approach, arrive, attach, bounce, bury, carry, catch, chase, close, collide, dig, drop, enter, exchange, exit, fall, flee, fly, follow, get, give, go, hand, haul, have, hit, hold, jump, kick, leave, lift, move, open, pass, pick up, push, put down, raise, receive, replace, run, snatch, stop, take, throw, touch, turn, and walk. The contributions of our work are the action classifier and the description generator. The novelty of our work is that we take advantage of the discriminative power of 48 bag-of-features action detectors [11] to identify the subset of likely actions, and subsequently describe them with a rule-based method that relates the actions to entities in the scene. Important aspects of our method are the classification of actions, the detection of actors, and connecting the actions to the relevant actors in the video. An actor can be a person or an object. The proposed method is a combination of an action classifier and a description generator. Section 2 introduces the system generating the video descriptions, including the action classifier and the description generator. In section 3 the experimental setup is discussed, followed in section 4 by the results. Finally, our conclusions are presented in section 5.

2 Method
The action classifier and the description generator are part of our system, a video processing pipeline that takes video data as input and provides the action descriptions as output. An overview of the system components is depicted in figure 2. In subsection 2.1 an overview of the system is presented. The action classifier and the description generator are described in more depth in subsections 2.2 and 2.3, respectively.

Fig. 2. The processing pipeline.

2.1 System Overview

Our method is part of a larger system in which objects in the scene are detected, tracked, and their features are captured. This overall system is described in [12, 13] and is summarized here. It consists of five building blocks (see figure 2): visual processing, fusion engine, event description, action classifier, and description generator.

The visual processing [14] incorporates three steps. First, meaningful objects and their properties are extracted from the video by (1) detection of moving objects [15], (2) a trained object detector for specific classes like persons and cars [16, 17], and (3) computation of other features (e.g. description of pose and body-part movements) [18]. After detection, it combines items into tracks.

The fusion engine [19] filters and fuses tracked objects in order to form entities. Only entities - a subset of the detected and tracked objects - are selected for further processing.

The event description generates a more abstract description: from the low-level object features, information at situation level [20] is created. There are three types of event properties:

1. Single-entity event properties, which describe properties of one entity (e.g. "the entity is moving fast").
2. Relations, which describe properties of the relation between two entities (e.g. "the distance between two entities is decreasing").
3. Global properties of the scene, which present information about the scene that is not exclusively related to one or more entities (e.g. "there is only one entity present").

A belief value is assigned to each property (one possible encoding is sketched below).

The action classifier assigns a probability to each of the 48 human actions (see also [1]). We consider two types of action classifiers, which we compare in our experiments. The first type is a discriminative bag-of-features classifier; the second type is a generative rule-based classifier. The two classifiers are described in more detail in section 2.2, and sections 3 and 4 experimentally establish which classifier yields the best textual descriptions for video.

The description generator uses the events from the event description to build hypotheses about what happened in the video. The most likely hypotheses are selected based on the classified actions combined with the information from the hypotheses. The selected hypotheses connect the actions to entities and objects. If there are entities or objects that can be connected with the action, then a textual description is generated.
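As an illustration of the event-description interface just described, the following minimal sketch shows one way the three property types and their belief values could be represented; the field names and example properties are our own and do not reflect the system's actual schema.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class EventProperty:
    name: str                                # e.g. "moving_fast", "distance_decreasing"
    belief: float                            # belief value assigned to the property
    entities: Tuple[str, ...] = ()           # () = global, one id = single-entity, two ids = relation
    time_span: Optional[Tuple[float, float]] = None   # start/end time within the clip (seconds)

# Hypothetical event description for a clip with two entities E1 and E2:
event_description = [
    EventProperty("moving_fast", 0.8, ("E1",), (2.0, 5.5)),               # single-entity property
    EventProperty("distance_decreasing", 0.7, ("E1", "E2"), (2.5, 6.0)),  # relation between two entities
    EventProperty("single_entity_present", 0.1),                          # global scene property
]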

2.2 Action Classifier

The aim of the action classifier is to recognize the verbs that are used by the description generator. The Random-Forest Tag-Propagation (RF TP) and multi-hypotheses Rule Based System (RBS) classifiers are considered. The choice is based on performance: the RF TP performed best [21] and the RBS performed second best as action classifiers in previous research [13] on the DARPA dataset under investigation (see section 3 for details). The RF TP classifier [11] is a rule-based classifier which learns its rules from an abundant set of decision trees (i.e. a random forest) [22]. In order to deliver a list of actions with their probabilities, the similarity distributions over the actions are calculated [23]. The core of the RF TP method is that it models the probability

of a verb (in a video) as a consequence of the similarities with all of the previously seen videos and the actions that are active in those. The training of the RF TP is described in [11]. The RF TP outputs a vector containing a belief value for each action present in the video. The RBS classifier is a Rule Based System. World knowledge, coded in the rules, describes the actions. There are currently 73 rules describing 48 actions. The rules are essentially a set of conditions. The conditions are based on the beliefs and relations as generated by the event description (see example 1). In the example, E1, T1, etc. are placeholders for actual entity identifiers and timestamps. As more than one entity can be present at any time, and as actions might happen multiple times, performed by one or by different entities, the RBS builds multiple hypotheses. The belief value of a hypothesis is calculated by taking the sum of the beliefs of the triggered conditions (if a condition is not triggered, its belief is zero), divided by the maximum possible performance, i.e. the number of conditions:

B(hypothesis) = ( Σ B(condition_i) ) / (number of conditions).

In this way an inexact match between the rules and noisy input data is allowed. For every action, the top hypothesis is selected. For each hypothesis, a belief value is calculated. There are 73 hypotheses in total, so we have 73 beliefs. These belief values are mapped to the 48 human actions using a simple linear mapping obtained from a least-squares (L2 norm) fit.
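As a minimal sketch of the scoring just described (our own reimplementation with made-up numbers, not the actual RBS code), the hypothesis belief is the sum of the triggered conditions' beliefs divided by the number of conditions, and the 73 hypothesis beliefs are mapped to the 48 action beliefs with a least-squares fit:

import numpy as np

def hypothesis_belief(condition_beliefs):
    # B(hypothesis) = sum of beliefs of triggered conditions / number of conditions;
    # an untriggered condition (None) contributes a belief of zero.
    return sum(b for b in condition_beliefs if b is not None) / len(condition_beliefs)

print(hypothesis_belief([0.9, 0.8, 0.7, None, 0.6]))   # 5 conditions, one not triggered -> 0.6

# Linear mapping from 73 hypothesis beliefs to 48 action beliefs, fitted with
# least squares on training clips (array shapes and values are hypothetical).
H_train = np.random.rand(200, 73)    # hypothesis beliefs for 200 training clips
A_train = np.random.rand(200, 48)    # action annotations for the same clips
W, *_ = np.linalg.lstsq(H_train, A_train, rcond=None)

h_test = np.random.rand(73)          # hypothesis beliefs for a new clip
action_beliefs = h_test @ W          # one belief per each of the 48 actions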

Example 1. Rule representing the chase action.
action = chase
condition(1) = object(E1) moving at time(T1)
condition(2) = object(E2) moving at time(T2)
condition(3) = object(E1) is a person
condition(4) = object(E2) is a person
condition(5) = time(T1) and time(T2) overlap
Resulting sentence = "Person E1 chases Person E2"
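A rule such as Example 1 could be encoded as a set of condition predicates over the event-description properties, evaluated for a candidate binding of entities and time intervals. The sketch below is our own illustration of that structure, not the actual RBS rule syntax; the property names in the binding are hypothetical.

def overlaps(t1, t2):
    # True if the two (start, end) intervals overlap
    return t1[0] < t2[1] and t2[0] < t1[1]

chase_rule = {
    "action": "chase",
    "conditions": [
        lambda b: b["E1_moving_belief"],                        # object(E1) moving at time(T1)
        lambda b: b["E2_moving_belief"],                        # object(E2) moving at time(T2)
        lambda b: b["E1_is_person_belief"],                     # object(E1) is a person
        lambda b: b["E2_is_person_belief"],                     # object(E2) is a person
        lambda b: 1.0 if overlaps(b["T1"], b["T2"]) else None,  # time(T1) and time(T2) overlap
    ],
    "sentence": "Person {E1} chases person {E2}",
}

# One candidate binding of the placeholders (made-up beliefs from the event description):
binding = {"E1_moving_belief": 0.9, "E2_moving_belief": 0.8,
           "E1_is_person_belief": 0.95, "E2_is_person_belief": 0.9,
           "T1": (2.0, 6.0), "T2": (2.5, 7.0)}

beliefs = [condition(binding) for condition in chase_rule["conditions"]]
score = sum(b for b in beliefs if b is not None) / len(beliefs)   # same formula as above

The RBS evaluates many such candidate bindings and keeps, per rule, the hypothesis with the highest score.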

2.3 Description Generator

The RBS is also used to generate descriptions. It can be applied as a description generator due to its generative properties and the structure of the rules, which connect entities and objects to actions. When applied as a description generator, the RBS builds the hypotheses and selects for every rule the hypothesis with the highest belief value. Based on the actions classified by the action classifier, a selection is made among the rules (and their top hypotheses). For every action a rule is chosen that represents that action. Then, from the list of actions, the description generator selects a number of actions based on: (1) the action probability, (2) the hypothesis score generated by the RBS, and (3) whether an entity or object is present in the best hypothesis (which implies that the action is connected to an entity or object). For the selected actions, the hypotheses are used to extract the subject and objects (see example 1). The actions are used as

the verbs. A sentence is considered to contain at least a subject and a verb (e.g. person E1 catches). However, the rule can also provide the direct and indirect object (e.g. person E1 catches object E2). Additionally, the hypothesis provides temporal information for the action, which can be used to order the actions in time. Finally, a compact sentence is generated for each action using a template-filling approach. The template provides information about which prepositions are used in combination with specific verbs, the order of the words in the sentence, and the conjugation of the verbs.
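The sketch below illustrates such a template-filling step (word order, verb-specific prepositions, and conjugation); the preposition table and the conjugation rule are our own simplifications, not the system's actual templates.

# Verb-specific prepositions (illustrative; the actual template set may differ).
PREPOSITIONS = {"flee": "from", "exchange": "with"}

def conjugate(verb):
    # Crude third-person singular present tense.
    if verb.endswith(("s", "sh", "ch", "x", "o")):
        return verb + "es"
    return verb + "s"

def generate_sentence(verb, subject, direct_object=None):
    parts = [subject, conjugate(verb)]
    if direct_object is not None:
        preposition = PREPOSITIONS.get(verb)
        if preposition:
            parts.append(preposition)
        parts.append(direct_object)
    return " ".join(parts) + "."

print(generate_sentence("flee", "Person 7", "person 4"))    # Person 7 flees from person 4.
print(generate_sentence("catch", "Entity 11", "person 6"))  # Entity 11 catches person 6.
print(generate_sentence("go", "Person 4"))                  # Person 4 goes.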

Ground truth examples: A man catches a box that is flying through the air. The person caught the box. Man catches box flying through the air. Our system response: Entity 11 goes. Entity 11 catches person 6. Person 6 has.

Fig. 3. The image shows one person catching a box. Next to the image the ground truth provided by 3 different people is printed. Our system response provides the detected actions as verbs in a short sentence with their connected subject and object. It shows that our system response captures the essence of the action as described in the ground truth. However, it confuses the subject and the direct object.

3 Experimental Setup

The description generator is evaluated on 241 short videos (available at [1]). For all videos ground truth is available. The ground truth consists of 10 sentences per video, written by 10 different people. The ground truth can contain complex sentences (see the examples in figure 3, and note the confusion of the subject and object in the video) and can therefore describe multiple actions. Per video, a minimum of 1, a maximum of 10, and an average of 4.9 different actions are present in the ground truth. From each ground truth sentence we extract the verb, subject, and object(s) using the Stanford Natural Language Parser [24, 25]. The subject and objects are labeled with one of four categories: person, car, bike, or other. The description generator constructs a sentence containing a verb and a subject, and (if detected) a direct and indirect object. Its subject and objects are also labeled with one of the four categories. The experiment compares the following action classifier + description generator combinations: RBS + RBS and RF TP + RBS. Both setups of the

description generator use the same event description data. The RBS + RBS uses the RBS both for action classification and description generation. The RF TP + RBS uses the RF TP to classify actions and the RBS to generate descriptions. For the RF TP + RBS, the rule set was optimized to gather information about the subject and objects in order to generate a sentence. For the RBS + RBS setup, the rule set was optimized for the classification of actions. We calculate two performance measures: a union score and a percentage score. For each clip we compare the ground truth sentences to the generated sentences. The clip's union score is the best match over all sentence pairs (i.e. the percentage of clips where there is at least one agreement between the ground truth and generated sentences); its percentage score is the mean match, normalized by the minimum of the number of ground truth sentences and the number of generated sentences (i.e. the agreement between the sets of ground truth and generated sentences). We report the average over all clips, for verbs, subjects and objects, as well as an overall score (the overall score is the mean of the verb, subject and object scores).
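The clip-level scores, as we read the description above, could be computed as in the sketch below; match() is a placeholder for the per-sentence-pair agreement criterion (e.g. the same verb, subject category, or object category), not the paper's exact matching rule.

def union_score(gt_sentences, gen_sentences, match):
    # 1.0 if at least one ground-truth / generated sentence pair agrees, else 0.0.
    return 1.0 if any(match(g, s) for g in gt_sentences for s in gen_sentences) else 0.0

def percentage_score(gt_sentences, gen_sentences, match):
    # Number of generated sentences with at least one agreeing ground-truth sentence,
    # normalized by the smaller of the two sentence-set sizes (capped at 1.0).
    matched = sum(any(match(g, s) for g in gt_sentences) for s in gen_sentences)
    denom = min(len(gt_sentences), len(gen_sentences))
    return min(1.0, matched / denom)

# The reported numbers are these clip scores averaged over all 241 clips,
# computed separately for verbs, subjects and objects.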

4 Experimental Results

Table 1. Performance of the description generator.

RBS + RBS
Score        Overall  Verb   Subject  Objects
union        61.6%    86.1%  52.3%    51.7%
percentage   25.4%    38.6%  18.6%    18.9%

RF TP + RBS
Score        Overall  Verb   Subject  Objects
union        68.3%    92.3%  62.0%    67.8%
percentage   40.4%    59.3%  30.5%    31.5%

The performance of both the RBS + RBS and the RF TP + RBS is given in table 1. On both the union and the percentage score, the RF TP + RBS outperforms the RBS + RBS, supported by an increase for the descriptions' verb, subject and object components.

The performance gain for the verb classification on the union score is 6.2%, so more correct verbs have been reported by the RF TP + RBS. For the percentage score the improvement is 20.7%, so the accuracy of the classified verbs has improved as well. The performance on the subjects also increased for both the union and the percentage score, by 9.7% and 11.9% respectively. Every generated sentence contains at least a verb and a subject. The performance gain on the subject score is smaller than the performance gain on the verbs, while it would be expected to be similar or higher. Both the object and the subject score suffer from a too restrictive threshold on the person, car and bike detectors, leading to many entities being labeled 'other'.

The performance on the objects increased on the union and the percentage score by 16.1% and 12.6%, which shows that the RF TP + RBS is better at detecting and connecting the direct and indirect objects in a video. The results show that the discriminative bag-of-features based RF TP is better suited as verb classifier than the RBS when creating descriptions. Although [13] already showed that the RF TP is a good stand-alone verb classifier, here we see that it also performs well when applied to a description task. Even though the RF TP classifier is not optimized for the description generator (e.g. the set of frequently occurring actions may be different), we conclude that a dedicated action classifier improves the performance significantly.

5 Conclusions and Future Work

This paper shows that a dedicated action classifier in addition to a description generator improves the description performance significantly. Although not perfect, a percentage score of almost 60% on correctly reported verbs is quite good. The performance on the subject and object classification is currently low. The issue is misclassification of the actors in the video, which results in reporting "other" as classification too often. Still, we showed a significant increase in the subject and object recognition scores. This increase can be attributed to a better understanding of the scene by the description generator. The percentage score of the current action classifier is expected to improve further if we can train the action classifier on the description ground truth. The classification of the subject and objects in the description generator should be improved by adjusting the classifiers in the visual processing and by relying more on the world knowledge coded in the RBS rules. Furthermore, the number of track breaks in the visual processing should be reduced, possibly by using multi-hypothesis tracking, as the current rule set is quite sensitive to track-break errors. We expect that these latter two improvements will significantly boost the recognition performance for the subject and objects.

Acknowledgement

This work is supported by DARPA (Mind’s Eye program). The content of the information does not necessarily reflect the position or the policy of the US Government, and no official endorsement should be inferred. The authors acknowledge the Cortex team for their contributions.

References

1. DARPA: Hosting corpora suitable for research in visual activity recognition, in particular, the video corpora collected as part of DARPA's Mind's Eye program: http://www.visint.org (2011)

2. Schüldt, C., Laptev, I., Caputo, B.: Recognizing human actions: A local SVM approach. In: Proc. ICPR (2004) 32–36
3. Gorelick, L., Blank, M., Shechtman, E., Irani, M., Basri, R.: Actions as space-time shapes. IEEE Trans. Pattern Anal. Mach. Intell. 29 (2007) 2247–2253
4. Ali, S., Shah, M.: Floor fields for tracking in high density crowd scenes. In: Proc. European Conf. on Computer Vision (2008)
5. Marszalek, M., Laptev, I., Schmid, C.: Actions in context. In: CVPR (2009)
6. Liu, J., Luo, J., Shah, M.: Recognizing realistic actions from videos "in the wild". In: CVPR (2009)
7. Gagnon, L.: Automatic detection of visual elements in films and description with a synthetic voice - application to video description. Proceedings of the 9th International Conference on Low Vision (2008)
8. Gupta, A., Srinivasan, P., Shi, J., Davis, L.: Understanding videos, constructing plots: learning a visually grounded storyline model from annotated videos. IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2009 (2009) 2012–2019
9. Kojima, A., Tamura, T., Fukunaga, K.: Natural language description of human activities from video images based on concept hierarchy of actions. International Journal of Computer Vision 50 (2002) 171–184
10. Khan, M.U.G., Zhang, L., Gotoh, Y.: Towards coherent natural language description of video streams. In: ICCV Workshops, IEEE (2011) 664–671
11. Burghouts, G., Bouma, H., Hollander, R.d., Broek, S.v.d., Schutte, K.: Recognition of 48 human behaviors from video. Int. Symp. Optronics in Defense and Security, OPTRO (2012)
12. Ditzel, M., Kester, L., van den Broek, S.: System design for distributed adaptive observation systems. IEEE Int. Conf. Information Fusion (2011)
13. Bouma, H., Hanckmann, P., Marck, J.W., Penning, L., den Hollander, R., ten Hove, J.M., van den Broek, S., Schutte, K., Burghouts, G.: Automatic human action recognition in a scene from visual inputs. Proc. SPIE 8388 (2012)
14. Burghouts, G., Hollander, R.d., Schutte, K., Marck, J., Landsmeer, S., Breejen, E.d.: Increasing the security at vital infrastructures: automated detection of deviant behaviors. Proc. SPIE 8019 (2011)
15. Withagen, P., Schutte, K., Groen, F.: Probabilistic classification between foreground objects and background. Proc. IEEE Int. Conf. Pattern Recognition (2004) 31–34
16. Laptev, I.: Improving object detection with boosted histograms. Image and Vision Computing (2009) 535–544
17. Felzenszwalb, P., Girshick, R., McAllester, D., Ramanan, D.: Object detection with discriminatively trained part based models. IEEE Trans. Pattern Analysis and Machine Intelligence 32(9) (2010) 1627–1645
18. Ferrari, V., Marin-Jimenez, M., Zisserman, A.: Progressive search space reduction for human pose estimation. IEEE Computer Vision and Pattern Recognition (2008)
19. van den Broek, S., Hanckmann, P., Ditzel, M.: Situation and threat assessment for urban scenarios in a distributed system. Proc. Int. Conf. Information Fusion (2011)
20. Steinberg, A.N., Bowman, C.L.: Rethinking the JDL data fusion levels. NSSDF Conference Proceedings (2004)
21. Burghouts, G., Schutte, K.: Correlations between 48 human actions improve their detection. ICPR 2012 (2012)
22. Breiman, L.: Random forests. Machine Learning 45(1) (2001)

23. Guillaumin, M., Mensink, T., Verbeek, J., Schmid, C.: TagProp: Discriminative metric learning in nearest neighbor models for image auto-annotation. ICCV (2009)
24. Klein, D., Manning, C.D.: Accurate unlexicalized parsing. In: Proceedings of the 41st Annual Meeting of the Association for Computational Linguistics (2003) 423–430
25. The Stanford Natural Language Processing Group: The Stanford parser: A statistical parser (2003) http://nlp.stanford.edu/software/lex-parser.shtml
