Metrics for Sensemaking in Enterprise Tag Management*

Michael J. Muller, Casey Dugan, & David R. Millen
IBM Research, One Rogers Street, Cambridge, MA, USA 02142
{michael_muller, cadugan, david_r_millen}@us.ibm.com

ABSTRACT

We analyze existing metrics for describing social-tagging phenomena in the tag collections associated with a URL or with a tagger, and we propose several new metrics based on normalized entropy and measurements of social agreement. We show how these new metrics can help to explain diverse phenomena in social tagging.

Author Keywords

Sensemaking, social tagging, social software, audience, group, metric.

INTRODUCTION

This position paper briefly introduces several research projects in the area of social tagging, and then focuses on the topic of metrics to analyze and compare tag collections on resources or by taggers. Social tagging occurs when users save references (e.g., bookmarks) to resources (e.g., URLs) in a shared or collaborative setting, in which the users add one or more descriptive words or phrases to each reference. Social tagging is a strong example of collaborative sensemaking: Individuals pool their knowledge through sharing tags, and use one another's tags to understand the referenced objects. In this context, social tagging may also be interpreted as supporting sensemaking about the taggers, by analyzing the tags written by each person as indicators of that person's interests or areas of expertise. On the internet, social tagging has been used to share descriptions of URLs (see del.icio.us, www.furl.com), and of other diverse types of objects (e.g., www.flickr.com, www.librarything.com, etc.). Social tagging has been studied in these contexts by Ames and Naaman [1], Chi et al. [2], Golder and Huberman [6], Hammond et al. [7], Kipp and Campbell [9], Marlow et al. [11], Sen et al. [20], Szekely et al. [22], and others.

Social tagging has also been studied within enterprises, by Damianos et al. [3], Dugan et al. [4], Farrell et al. [5], John et al. [8], Millen et al. [12, 13, 14], Muller [17], and Thom-Santelli and Muller [21]. What distinguishes enterprise social tagging is, in general, full authentication for each user, and the ability to share references to confidential information.

Measurements for Social Tagging

Social tagging has proved its value in anecdotal terms, in its rapid adoption in diverse internet applications, and in the growing number of commercial or enterprise offerings. Aside from these adoption measures, there are few data-based measurements that can help organizations, groups, and individuals to assess what value or knowledge they are receiving from social-tagging services or applications. Golder and Huberman proposed the concept of Convergence or stabilization [6]. They examined the accumulation of tags associated with a URL in a public social-tagging system, using as their basic datum the concept of a tag assignment: {tag, user, resource, (date)}. A tag assignment occurs each time a user creates or adds a tag to the description of a URL or other resource (e.g., in a bookmark). A bookmark that contains a single tag would constitute a single tag assignment; a bookmark that contains three tags would constitute three tag assignments. Other, non-bookmark reference structures may also be used. Golder and Huberman categorized each tag into seven categories, and then examined the changes in the percentage distribution of tags among those seven categories [6]. Their data showed an asymptotic decrease in changes in the percentage distribution of tags as the number of tag assignments increased. They suggested that Convergence (asymptotically small changes in the distribution) occurs at about the 100th tag assignment. Using a different source of data (tags on movies) and a different categorization scheme (three categories), Sen and colleagues observed Convergence occurring at about the 75th tag assignment [20].

Chi and colleagues also started with tag assignments as their basic data, but then used information theory to describe the growth of collections of tags associated with one or more resources [2]. They showed an apparently unbounded growth in the Entropy of a tag collection associated with specific resources.
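The tag-assignment datum defined above can be represented as a small record type. The following Python sketch is illustrative only; the field names and sample values are our own, not drawn from any of the systems discussed:

```python
from collections import Counter, namedtuple

# One record per tag applied by one user to one resource,
# following Golder and Huberman's basic datum.
TagAssignment = namedtuple("TagAssignment", ["tag", "user", "resource", "date"])

assignments = [
    TagAssignment("webdev", "alice", "w3.example.com/page", "2007-01-02"),
    TagAssignment("css", "alice", "w3.example.com/page", "2007-01-02"),
    TagAssignment("webdev", "bob", "w3.example.com/page", "2007-01-05"),
]

# A bookmark with three tags contributes three tag assignments,
# so the size of a resource's collection is the number of records.
per_resource = Counter(a.resource for a in assignments)
tag_counts = Counter(a.tag for a in assignments)

print(per_resource["w3.example.com/page"])  # 3 tag assignments
print(tag_counts.most_common(1))            # [('webdev', 2)]
```

Note that alice's single bookmark with two tags yields two tag assignments, which is exactly the counting convention described above.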

STUDIES OF ENTERPRISE SOCIAL TAGGING

Our goal is to extend and refine these internet measurements into production-oriented metrics for enterprise social-tagging data and systems. We have studied the adoption of several social-tagging systems within IBM: the Dogear system for social-tagging of URLs and documents [13], the BlogCentral services for internal

* A longer version was submitted to CHI 2008.

Figure 1. Convergence studies. A. Convergence for one representative intranet resource (w3.tap.ibm.com). B. Mean Convergence across 137 high-tagged intranet resources.

blogging [17]; the MediaLibrary for sharing of podcasts [21]; the Activities product for sharing individual and team projects [15]; and the Bluepages+1 system for tagging of one employee by another employee [5]. These systems are used by thousands of IBM employees to perform their day-to-day work, creating several hundred thousand bookmarks and similar references to internal and external resources. Previous studies have shown patterns of adoption of these services [5, 13], use of these systems for sensemaking [12] and intranet search [14], issues in the shared vocabularies across services [17, 19], the emergence of groups or communities of users through their tagging behaviors [16], and emergent social roles in tagging for various corporate audiences ([21]; see also [1, 11] for audience-oriented tagging in an internet context).

Convergence Studies

We began by creating a production-oriented version of the Convergence metrics. Unlike the cases of [6, 20], we needed a metric that could be applied without a conceptual and manual categorization of tags. For our Convergence metric, we simply calculated the percentage distribution across all tags at the nth tag assignment, and compared it with the percentage distribution of tags at the (n-1)th tag assignment. We observed that our enterprise tagging data

(collected on 137 intranet resources) tended to converge faster than the published estimates of 100 [6] or 75 [20] tag assignments. Our data showed Convergence as early as the 26th tag assignment in some cases (Figure 1A). An analysis of 137 high-tagged intranet resources revealed a steady asymptotic increase in Convergence (Figure 1B), prompting the question: how close to no-change must a resource come to be considered "converged?" We began to answer that question by adopting a number of candidate Convergence criteria, such as "xx% of resources achieve 1% or less change by the nth tag assignment." We plotted these different criteria onto a common set of axes, for what we think was the first engineering-styled analysis of convergence over a relatively large dataset (Figure 2A). We used this structure of analysis in the design of a social-tagging game to generate additional tags on resources [4]; our analysis helped to establish the criterial number of tag-assignments for each resource, and thus the number of tag-producing game-rounds that would be required to achieve a criterial number of "converged" resources. We also considered whether the best unit of analysis was the nth tag assignment, vs. the mth user performing a tag assignment (Figure 2B). We were surprised to see that, with the addition of a simple multiplicative constant, the

Figure 2. Percentage of resources meeting different convergence criteria (80%, 85%, 90%, 95%, 97.5%, 99%, and 100% criteria). A. Analysis based on tag-assignments: proportion of 212 resources achieving criterion Convergence during 150 tag assignments. B. Analysis based on users: proportion of 137 common resources achieving criterion Convergence during 62 user contributions.
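Our production-oriented Convergence measure can be sketched in a few lines of Python. The comparison of successive percentage distributions follows the description above; the specific distance function (one minus the total variation distance, so that 1.0 means no change between the (n-1)th and nth distributions) is our assumption for illustration, not the exact production formula:

```python
from collections import Counter

def distribution(tags):
    """Percentage distribution over all distinct tags seen so far."""
    counts = Counter(tags)
    total = sum(counts.values())
    return {t: c / total for t, c in counts.items()}

def convergence_series(tags):
    """Convergence at each tag assignment n >= 2, sketched as one minus
    the total variation distance between the tag distributions at the
    (n-1)th and nth tag assignments."""
    series = []
    for n in range(2, len(tags) + 1):
        prev, curr = distribution(tags[:n - 1]), distribution(tags[:n])
        all_tags = set(prev) | set(curr)
        change = sum(abs(curr.get(t, 0) - prev.get(t, 0)) for t in all_tags) / 2
        series.append(1 - change)
    return series

tags = ["web", "web", "css", "web", "html", "web"]
print([round(c, 3) for c in convergence_series(tags)])
# → [1.0, 0.667, 0.917, 0.8, 0.933]
```

A criterion such as "1% or less change" then amounts to checking where the series first stays at or above 0.99.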

curves in terms of tag assignments were nearly identical to the curves in terms of users. The multiplicative constant is derived from the simple observation that some users apply more than one tag in a bookmark; by that calculation, each user provided an average of 1.56 tag assignments. Thus, for the early growth of a tag collection associated with each of the 137 resources in our study, we could use either tag assignments or users as our unit of analysis for Convergence analyses.

Questioning the Adequacy of Convergence

In later work, we have taken a more critical stance with regard to the accepted Convergence concept. Using Entropy measures, Chi and colleagues found a rather different pattern from [6, 20] in the growth of tag collections over time [2].

We explored Entropy as a potential metric for the content of tag collections, but we realized that the upper bound for an estimate of Entropy depends upon the number of items in the collection. Thus, a tag collection consisting of 100 tags has a much larger upper bound than a tag collection consisting of 10 tags. Entropy appeared to be a size-biased measure. We recomputed Entropy by scaling it between the theoretical upper and lower bounds that could be achieved for a given number of tags, and called this scaled version NormInfo.

Figure 3 (A-C) illustrates these issues, which occurred in similar ways with other resources. Figure 3A shows our production-oriented version of the Convergence measure for one intranet resource with over 305 tag assignments; Convergence appears to occur somewhere between the 30th and 70th tag assignments. Figure 3B shows the Entropy measure of Chi et al. [2] over the same data. Entropy achieves an initial stabilization, then increases again, and does not appear to be approaching an asymptote even at the 305th tag assignment. However, when we calculated NormInfo for the same data, we found a rather different pattern (Figure 3C). After some initial high values within the first 30 or so tag assignments, the data exhibit a sharp downward spike beginning at the 33rd tag assignment, requiring another 53 tag assignments to achieve a recovery by the 86th tag assignment. Thereafter, growth in the NormInfo metric is relatively modest, with the suggestion of an asymptote around the 275th tag assignment.

We note the following weaknesses of the previously proposed measurements:

Figure 3. Metrics describing the growth of the tag collection for one intranet resource. A. Convergence (range 0-1). B. Entropy (in bits). C. Normalized Information (scaled entropy, range 0-1). D. Summation of three metrics based on social agreement (each range 0-1; sum to 1.0).


• Convergence appeared to be complete by the 30th-70th tag assignment. However, Entropy showed a different pattern, failing to achieve stability even by the 305th tag assignment. This may have occurred (on this resource and on others) because Convergence measures only incremental change in the tag frequency distribution, whereas Entropy measures the structure within the tag distribution.

Figure 4. Social metrics (UniqueRatio, PluralRatio, ModalRatio) for six users: a product manager, a CIO organization manager, a CIO organization staff member, a CIO organization senior manager, and two evangelists (consulting resources; web 2.0).

• NormInfo showed phenomena that were invisible to the conventional Entropy metric. These were not small phenomena: the perturbation from the 33rd to the 86th tag assignments accounted for 40% of the NormInfo (scaled Entropy) of the tag collection. (We have also analyzed resources for which the Entropy and NormInfo metrics show opposite trends [16].)
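The relation between Entropy and NormInfo can be illustrated with a short sketch. We take the lower bound to be 0 (every assignment uses the same tag) and the upper bound to be log2(N) for a collection of N tag assignments (all tags distinct). The paper scales between theoretical bounds; this particular choice of bounds is our reading, not a specification from the original analysis:

```python
import math
from collections import Counter

def entropy(tags):
    """Shannon entropy (in bits) of the tag frequency distribution."""
    counts = Counter(tags)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def norm_info(tags):
    """Entropy rescaled by its bounds for len(tags) assignments:
    0 if every assignment uses the same tag, log2(N) if all N are
    distinct, so the result is size-independent and lies in [0, 1]."""
    n = len(tags)
    if n < 2:
        return 0.0
    return entropy(tags) / math.log2(n)

uniform = ["a", "b", "c", "d"]   # all unique: entropy at its upper bound
skewed = ["a", "a", "a", "b"]    # one dominant tag: low entropy
print(entropy(uniform), norm_info(uniform))                    # → 2.0 1.0
print(round(entropy(skewed), 3), round(norm_info(skewed), 3))  # → 0.811 0.406
```

The size bias discussed above is visible here: raw entropy of the uniform collection (2.0 bits) would keep growing as more distinct tags arrive, while NormInfo stays on a fixed 0-1 scale.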

We wanted to know more about what was going on during the dramatic events shown by the NormInfo metric, but invisible to the other metrics. We turned to concepts from social agreement or inter-rater reliability to define three additional metrics [18]. Each tag was classified into one of the following three metric classes. The Modal tag was the tag that occurred most frequently in the collection of tags on a resource; Modal tags represented the greatest agreement among users. Plural tags were tags that occurred at least twice in the collection of tags; Plural tags represented some extent of agreement among users, but less agreement than the Modal tag. Finally, Unique tags were tags that occurred exactly once in the collection; Unique tags represented no agreement, or perhaps random "noise" in the tag collection. We divided the count of tags for each of these concepts by the total number of tags, resulting in three ratios which summed to 1.0 by definition: ModalRatio, PluralRatio, and UniqueRatio.

The social metrics tell the following story for our resource with 305 tags. There was initial strong agreement among taggers (shown by the relatively high ModalRatio beginning around the 6th tag-assignment, Figure 3D-d1), but this agreement decreased sharply toward the 52nd tag-assignment (d3). During the early part of this decline (until the 27th tag-assignment), partial agreements increased (shown by the increasing PluralRatio, d2). Thereafter, there was a small decline in PluralRatio; in combination with the sharp decrease in ModalRatio, this led to the declining agreement boundary between PluralRatio and UniqueRatio from d2 to d3. From a different perspective, the number of Unique Tags increased strongly from the 27th to the 51st tag-assignment (d3), overwhelming any growth in agreement among the Plural Tags or the Modal Tag.

(There was also a brief period, from the 50th to the 54th tag-assignments, when there was no Modal Tag. This is an artifact of the statistical definition of Modality: if two or more Plural Tags have the same highest frequency, then there is no single tag that is most frequent, and therefore no Modal Tag. We will explore better ways of handling this situation in future work.)

There were subtler changes in the later phases of the development of the tag collection. From the 51st to the 86th tag-assignment, there was a steady growth in agreement among taggers, reflected mildly in the ModalRatio and strongly in the PluralRatio (d4); the UniqueRatio showed a corresponding decline. After the 86th tag-assignment, the overall amount of agreement (ModalRatio + PluralRatio) remained fairly constant, reflected in the relatively flat agreement-boundary between PluralRatio and UniqueRatio (d5). However, during this period, there was a slow decline in holistic agreement (25% reduction in ModalRatio by the 310th tag-assignment, d6), and a corresponding gain in partial agreements (PluralRatio, d7). Thus, even with superficial stability of agreement after the 86th tag-assignment (d5), there was a subtle shift away from generalized agreement (ModalRatio, d6), and toward the formation and strengthening of more partial agreements among smaller groups of users (PluralRatio, d7).

Characterizing Users

The preceding studies focused on the characterization of resources, such as webpages. We have begun to apply the same metrics to understand the tagging practices of individual users. Because of the tripartite structure of tagging data [10], the tags written by each user may also be considered a tag collection. In this case, we interpret the tags as a description of the interests or areas of expertise of the individual user.

Figure 4 shows six "tagging signatures" for six employees, using the social agreement metrics that were introduced in relation to Figure 3D. Most users look like the Product Manager (Figure 4A) and the CIO Organization Manager (Figure 4B). These tagging patterns are characterized by (a) roughly equal proportions of Plural and Unique Tags, and (b) a smaller but sizeable proportion of Modal Tags. The Modal Tags presumably represent the primary interest or expertise of the worker, while the Plural Tags may represent additional interests or items to be shared with team members.
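The three social-agreement ratios can be computed directly from tag counts. The following is a minimal sketch of our own, not the production code; in particular, treating tied-for-top tags as Plural when there is no single Modal tag is our assumption:

```python
from collections import Counter

def social_ratios(tags):
    """Return (ModalRatio, PluralRatio, UniqueRatio) for a tag collection.
    Modal: occurrences of the single most frequent tag (none if the top
    frequency is tied, per the statistical definition of Modality);
    Plural: tags occurring at least twice, excluding the Modal tag;
    Unique: tags occurring exactly once. The ratios sum to 1.0."""
    counts = Counter(tags)
    total = sum(counts.values())
    ranked = counts.most_common()
    top = ranked[0][1]
    has_modal = sum(1 for _, c in ranked if c == top) == 1
    modal = top if has_modal and top >= 2 else 0
    plural = sum(c for _, c in ranked if c >= 2) - modal
    unique = sum(c for _, c in ranked if c == 1)
    return modal / total, plural / total, unique / total

tags = ["web", "web", "web", "css", "css", "html"]
m, p, u = social_ratios(tags)
print(round(m, 3), round(p, 3), round(u, 3))  # → 0.5 0.333 0.167
```

A "tagging signature" for a user is then simply this triple tracked over the user's successive tag assignments, as in Figure 4.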

The two managers in Figures 4C and 4D show a somewhat different pattern. The managers have a relatively smaller proportion of Modal Tags, reflecting perhaps their need to spread their attention across many projects and topics. The pattern for the CIO organization Senior Manager in Figure 4D is particularly striking, with such a preponderance of Unique Tags. This individual has a reputation for breadth of interests as well as extraordinary organizational effectiveness. The large use of Unique Tags may reflect those broad interests.

The last two tagging patterns, in Figures 4E and 4F, are also striking. These two individuals, in very different parts of the company, describe themselves as "evangelists," promoting employee and organizational interest in particular topics. The tagging patterns of these evangelists are distinctive in their extensive use of Plural Tags. We speculate that the Plural Tags may reflect a strategy of specializing their evangelism for different client groups.

Thus, the social-agreement metrics provide insights into the usage patterns of people in specific organizational roles.

Stability of the Metrics

We have conducted factor analyses of four metrics (NormInfo, ModalRatio, PluralRatio, and UniqueRatio) on the tag collections of 137 intranet resources, 51 internet resources, and 537 users (taggers). For all three groups, two-factor solutions accounted for a minimum of 96% of the variance. In brief, the same factor structure emerges for all three data sets. A first factor appears to reflect structure vs. lack-of-structure (high positive loadings on NormInfo, ModalRatio, and PluralRatio; high negative loading on UniqueRatio). The second factor appears to address what kind of structure or agreement is present (high positive loading on ModalRatio; high negative loading on PluralRatio).

Thus, the new metrics in this paper perform reliably, with an easily-explained factor structure, on different domains (intranet vs. internet) and on different entities (URLs vs. users).

CONCLUSION

Our work has introduced new metrics for studying and summarizing the information in tag collections associated with resources or with users. We showed how the concept of Convergence [6, 20] could be used with large samples to generate useful engineering analyses. In those analyses, we showed little difference between calculating Convergence in terms of tag-assignments or in terms of users. We took a more critical look at Convergence, showing that Entropy [2] provided a different perspective on the same data. We then analyzed weaknesses in the Entropy measure, replacing it with NormInfo, a scaled version. We showed that NormInfo was capable of showing phenomena that neither Convergence nor Entropy could detect. We then used metrics based on social agreement to make sense of those previously-undetected phenomena. We applied those metrics to both resources (URLs) and users, illustrating how the social-agreement metrics could show distinctive patterns or "tagging signatures" for users in different roles. Finally, we showed that the new metrics (NormInfo and the three social-agreement measures) provide a stable, reliable factor structure of high explanatory power, across different domains of resources, and across resources and users.

There are many unsolved problems in analyzing and summarizing social tagging data. We hope to extend our metrics to analyze time-related data more directly, and we hope to add our metrics to clustering techniques so as to identify groups of resources, users, and tags that should be analyzed together as emergent units. We also look forward to providing more detailed accounts of the studies summarized here, in future papers.

REFERENCES

1. Ames, M., & Naaman, M. Why we tag: Motivations for annotation in mobile and online media. Proc. CHI 2007.

2. Chi, E.H., Kittur, A., Mytkowicz, T., Pendleton, B., & Suh, B. Augmented social cognition: Understanding social foraging and social sensemaking. Plenary paper at HCIC 2007, Winter Park, CO, USA, February 2007.

3. Damianos, L., Griffith, J., & Cuomo, D. Onomi: Social bookmarking on a corporate intranet. Position paper in the WWW 2006 Tagging Workshop.

4. Dugan, C., Muller, M.J., Millen, D.R., Geyer, W., Brownholtz, B., & Moore, M. The Dogear Game: A social bookmark recommender system. Proc. GROUP 2007.

5. Farrell, S., Lau, T., Wilcox, E., & Muller, M. Socially augmenting employee profiles with people-tagging. Proc. UIST 2007.

6. Golder, S.A., & Huberman, B.A. Structure of collaborative tagging systems. J. Info. Sci. 32, 2 (Apr. 2006).

7. Hammond, T., Hannay, T., Lund, B., & Scott, J. Social bookmarking tools (I): A general review. D-Lib Magazine 11, 4 (April 2005). http://www.dlib.org/dlib/april05/hammond/04hammond.html (verified 29 January 2007).

8. John, A., & Seligmann, D. Collaborative tagging and expertise in the enterprise. Proc. WWW 2006.

9. Kipp, M.E.I., & Campbell, D.G. Patterns and inconsistencies in collaborative tagging systems: An examination of tagging practices. Proc. Am. Soc. for Info. Sci. & Tech., Austin, TX, USA, 2006.

10. Lambiotte, R., & Ausloos, M. Collaborative tagging as a tripartite network. http://arxiv.org/PS_cache/cs/pdf/0512/0512090v2.pdf, December 2006 (verified 18 September 2007).

11. Marlow, C., Naaman, M., Boyd, D., & Davis, M. HT06, tagging paper, taxonomy, Flickr, academic article, to read. Proc. HT 2006.

12. Millen, D.R., & Feinberg, J. Using social tagging to improve social navigation. Workshop on Social Navigation and Community-Based Adaptation, AH 2006, Dublin, Ireland, 20 June 2006.

13. Millen, D.R., Feinberg, J., & Kerr, B. Dogear: Social bookmarking in the enterprise. Proc. CHI 2006.

14. Millen, D.R., Yang, M., Whittaker, S., & Feinberg, J. Social bookmarking and exploratory search. Proc. ECSCW 2007.

15. Moore, M., Estrada, M., Finley, T., Muller, M.J., & Geyer, W. Next generation activity-centric computing. Demo at CSCW 2006.

16. Muller, M.J. Anomalous tagging patterns can show communities among users. Poster at ECSCW 2007, Limerick, Ireland, September 2007.

17. Muller, M.J. Comparing tagging vocabularies among four enterprise tag-based services. Proc. GROUP 2007.

18. Muller, M.J., & Dugan, C. Measuring the quality of tag collections in social tagging. Poster at GROUP 2007, Sanibel Island, FL, USA, November 2007.

19. Muller, M.J., Geyer, W., Brownholtz, B., Dugan, C., Millen, D.R., & Wilcox, E. Tag-based metonymic search in an activity-centric aggregation service. Proc. ECSCW 2007, Limerick, Ireland, September 2007.

20. Sen, S., Lam, S.K., Rashid, A.M., Cosley, D., Frankowski, D., Osterhouse, J., Harper, F.M., & Riedl, J. Tagging, communities, vocabulary, evolution. Proc. CSCW 2006.

21. Thom-Santelli, J., & Muller, M.J. The wisdom of my crowd: Motivation and audience in enterprise social tagging. Poster at GROUP 2007, Sanibel Island, FL, USA, November 2007.

22. Szekely, B., & Torres, E. Ranking bookmarks and bistros: Intelligent community and folksonomy development. May 2005. http://torrez.us/archives/2005/07/13/tagrank.pdf (verified 1 September 2007).
