Giving up Vindication in Favor of Application: Developing Cognitively-Inspired Widgets for Human Performance Modeling Tools

Walter Warwick ([email protected])
Micro Analysis and Design, 4949 Pearl East Circle, Suite 300, Boulder, CO 80301

Amy Santamaria ([email protected])
Micro Analysis and Design, 4949 Pearl East Circle, Suite 300, Boulder, CO 80301

Abstract

Computational models of cognition are most often developed to explore or vindicate particular theoretical views in psychology. The computer provides a ready environment in which to develop models and generate quantitative predictions about cognitive performance which, in turn, can be compared against actual human performance. Validating the model with a good fit vindicates the theoretical view the model implements. The resulting view of the cognitive modeling enterprise is decidedly hypothetico-deductive. For all its familiar appeal, however, this is not the only reason to develop a cognitive model. In this paper, we describe our efforts to develop computational models of decision making that can be applied within existing human performance modeling tools. Although our efforts are inspired theoretically, the goal is not to vindicate this or that theoretical view of cognition but, rather, to engender sufficiently human-like behavior in a modeling environment where it is otherwise lacking. We begin by briefly describing the computational nuts and bolts we’ve developed for representing ‘naturalistic’ decisions in a task network modeling environment. We then point to results from several applications to demonstrate the flexibility of our approach and the extent to which it really does produce human-like behaviors, both quantitatively and qualitatively. We conclude by discussing the alternative, more instrumentalist view of the cognitive modeling enterprise that results when efforts are focused on the development of applications rather than architectures.

Introduction

We begin by preaching to the choir. Computational modeling is a boon to cognitive science. Nothing cuts through the tangle of intuitions, assumptions, and biases like the formal specification of a putative process. While the term “model” clearly means very different things to different people, an executable piece of software capable of generating concrete performance predictions sets the stage for empirical research where theories of cognition can yield unambiguous, testable commitments.

Now, the apostasy. We are basically uninterested in cognitive modeling as a vehicle for theoretical exploration or vindication. It’s not that we regard that kind of work as unimportant but, rather, that we have found that a purely theoretical perspective can make it much harder to advance the state of the art in modeling and simulation generally. We came to this view when we began working on a project to develop a computational model of Klein’s recognition-primed decision (see, e.g., Klein, 1998, for a discussion of the RPD model; our attempts to develop a computational analogue are described in Warwick, McIlwaine, Hutton, and McDermott, 2001). As an exercise in cognitive science, that effort quickly descended into a theoretical snake pit: so-called naturalistic psychologists assumed that we were trying to establish the computational nature of a process they regarded as necessarily noncomputational, while more traditional cognitive modelers tended to see the naturalistic foundations as either derivative or hopelessly descriptive and therefore uninteresting (or worse). As an exercise in modeling and simulation, however, we believe our work has extended the set of tools available to human performance modelers in a useful and potentially interesting way.

In this paper we offer evidence for that claim. We begin by briefly describing the computational nuts and bolts we’ve developed for representing ‘naturalistic’ decisions in a task network modeling environment. We then point to results from several applications to demonstrate the flexibility of our approach and the extent to which it really does produce human-like behaviors, both quantitatively and qualitatively. We conclude by discussing the alternative, more instrumentalist view of the cognitive modeling enterprise that results when efforts are focused on the development of applications rather than architectures.

A “Naturalistic” Decision Widget for Task Network Modeling Tools

Task network modeling environments (e.g., Micro Saint) provide a framework for representing human behavior as a decomposition of operator goals or functions into their component tasks, which themselves might be further decomposed. Although there is no fixed level of abstraction imposed by the environment, models are often developed at a level of granularity more useful to the human factors engineer than the cognitive scientist (e.g., task times of seconds, minutes or even hours rather than milliseconds). Moreover, the cognitive underpinnings of a behavior are not usually given explicit representation except at those places where a “decision” must be made by the human as to which task to pursue next in a branching sequence. But even there, cognition is reduced either to a probability distribution or to a set of Boolean conditions that dictates the flow of control among the subsequent tasks. These relatively simplistic representations of decision making left us plenty of room for improvement while, at the same time, the overarching task network framework—choices among discrete alternatives, state information encoded in global variables, discrete event as opposed to real-time simulation—prevented us from getting mired in some of the more subtle aspects of decision making postulated by naturalistic psychologists.

Klein’s model of the recognition-primed decision, and naturalistic models in general, are descriptive and are most often presented as flow charts with associated intuition pumps (often given as narratives). Moreover, such models are often presented as alternatives to the more traditional rational choice models of decision making. We will neither explain nor defend the naturalistic approach to decision making here except to point out two salient features of the recognition-primed model of decision making; namely, it is a model of experienced decision making, and courses of action are thought of as essentially emergent by-products of recognition rather than as the result of deliberative analysis or the application of rule-based knowledge. (We refer the reader to Zsambok & Klein, 1997, for a survey of naturalistic approaches to decision making.)

Accordingly, we implemented a computational model in which a similarity-based recognition routine is used to determine a course of action given a store of decision-making episodes which is itself populated by experience. Our approach augments Hintzman’s multiple-trace memory model (1984; 1986a; 1986b). The basic idea is to represent a decision maker’s long-term memory as a set of episodes, each of which represents the situation that prompted a decision (encoded as a cue vector), the course of action taken and an outcome measure (either successful or not). As a store of episode tokens, rather than types, there may be multiple copies of identical experiences in long-term memory. Recognition occurs when a new situation is presented. A dot product is computed between the vector representing the new situation and each “remembered” situation in long-term memory. The resulting similarity value is then raised to a user-defined power and used to determine the proportionate contribution (either positive or negative, according to the outcome of that episode) that each remembered episode makes to a composite recollection. The result is something like a distribution of recognition strengths across the available courses of action, which can then be analyzed in any number of ways to produce a crisp output corresponding to a specific course of action, which is in turn implemented, evaluated and stored as a new episode in long-term memory for use in the next decision. (There are a number of flourishes we do not describe here. We have also developed an interface that lets a user define and exercise a decision within a task network environment. User documentation, including a specification of the underlying algorithms, an executable version of the software and the models discussed below are all available for download at: http://www.maad.com/index.pl/brims_models)

This instance-based approach is hardly novel, but it does satisfy some basic naturalistic intuitions and it does so in a straightforward way. Moreover, the implementation is fairly lightweight, both in terms of its computational requirements and the burden it puts on the modeler—ours is not an attempt to package an entire theory of cognition in a task network model but, rather, to sprinkle bits of disembodied cognition at select decision points in a task network.
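To make the mechanics concrete, the following is a minimal Python sketch of the recognition routine. It is our reconstruction from the description above, not the released widget itself; the names (Episode, precision) are illustrative, and flourishes such as the recognition threshold and cue normalization are omitted.

    import random

    class Episode:
        """One trace in long-term memory: the cue vector that prompted
        a decision, the course of action taken, and the outcome."""
        def __init__(self, cues, action, success):
            self.cues = cues
            self.action = action
            self.success = success

    def recognize(situation, memory, precision=2.0):
        """Compute a recognition strength for each course of action.

        Each remembered episode contributes the dot product of its cue
        vector with the current situation, raised to a user-defined
        power, positively if the episode succeeded and negatively if
        it failed. Assumes nonnegative cue encodings, so similarities
        are nonnegative before the power is applied."""
        strengths = {}
        for ep in memory:
            sim = sum(a * b for a, b in zip(situation, ep.cues)) ** precision
            strengths[ep.action] = (strengths.get(ep.action, 0.0)
                                    + (sim if ep.success else -sim))
        return strengths

    def decide(situation, memory, precision=2.0):
        """Turn the distribution of recognition strengths into a crisp
        choice; this is one plausible 'fuzzy' strategy among several:
        sample in proportion to positive strength, guess otherwise."""
        strengths = recognize(situation, memory, precision)
        positive = {a: s for a, s in strengths.items() if s > 0.0}
        if not positive:
            return random.choice(sorted(strengths)) if strengths else None
        actions = list(positive)
        return random.choices(actions,
                              weights=[positive[a] for a in actions])[0]

After the chosen action is executed and evaluated, the caller appends a new Episode recording the situation, the action and its outcome, so the store of tokens grows with experience.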

Applications

To demonstrate what these naturalistic widgets look like in practice, we now describe three applications. The first followed from a reality check we conducted to see whether we could emulate the results of a human performance experiment. The other two demonstrate more qualitative aspects of human performance.

Simulating the Results of a Resource Allocation Experiment

The Experiment

The human performance experiment we simulated was undertaken to determine how information presented on a graphical user interface might best support decision making in a resource allocation task. The task was inspired by a National Missile Defense scenario in which a subject must allocate a limited number of reserve Ground-Based Interceptor (GBI) missiles across notional cities under a ballistic missile attack. The interface displayed information—either graphically or numerically according to the experimental condition—about each city under attack, including the city population, the number of GBIs currently allocated to the city and the probability of intercepting the incoming missile. Forty-eight students from New Mexico State University volunteered as subjects. Each subject completed 24 attack scenarios, which varied according to the number of cities under attack (four or five), the number of GBIs available for allocation (either three or four, depending on the number of cities under attack), and, for each city, the population and number of GBIs already allocated to it before the scenario began. A detailed description of the experiment can be found in McDermott, Hutchins, Barnes, Koenecke, Gillan, and Rothrock (2003).

The Naturalistic Model

Among the 24 scenarios each subject completed, we were able to identify multiple scenarios where the experimental conditions were constant. We built a Micro Saint task network model to simulate four such scenarios. Each scenario consisted of either three or four decisions, namely, where to allocate each of the three or four available GBIs. (The number of GBIs available depended on the number of cities.) These decisions were made by a naturalistic model embedded in the task network model. The resulting integrated model worked as follows: the task network would set the initial conditions for one of the scenarios—i.e., the size of the cities and the initial GBI allocations—and then pass some of this information as cues to the naturalistic model; the naturalistic model would then “recognize” where to allocate a GBI; this decision would be returned to the task network model, while the naturalistic model would receive feedback as to the appropriateness of its decision; finally, the task network model would update its allocation information and the naturalistic model would record the decision—including the feedback it received—in its “long-term memory” so that the experience might influence subsequent decisions. The entire process would repeat until all the available GBIs had been allocated.

Developing the naturalistic model was largely a matter of deciding how the allocation decision could be represented in high-level terms. In particular, we had to decide what cues should be passed to the naturalistic model and, on the back end, how to reinforce the model’s choices among the alternative courses of action. As with any symbolic model, it would have been possible here to trivialize the decision by providing an overly expressive set of cues (e.g., an explicit representation of the expected number of survivors gained per GBI per city). For this reason, we tried to specify cues that related in an obvious way to the information available to the human subjects. During the human performance experiment, the subjects were exposed to various displays that combined city population and GBI allocation information to indicate graphically the expected value of survivors for each city. Accordingly, we provided city population and current GBI allocation for each city to the naturalistic model (we did not even begin to think about how the naturalistic model might base its decision on the “perception” of a graphical display). The numeric values for population were coerced at run time into discrete ranges that supported a “subjective” interpretation in the naturalistic model—e.g., city populations between 500,000 and 1,500,000 were considered “medium.” GBI allocation values, on the other hand, were treated simply as enumerations—i.e., we specified a cue in the naturalistic model that took on values of one, two and three depending on the actual number of GBIs allocated to each city.

Our intention was to engender in the naturalistic model a tendency to maximize the expected number of survivors by “training” it to recognize when to allocate a GBI to each city given various combinations of populations and current GBI allocations. To do this, we reinforced allocation decisions that were normatively correct—i.e., those that maximized expected value—but we also included a “sympathy factor” that would reinforce the decision to allocate a missile to an uncovered city. (N.b., the reinforcement schedule itself was transparent to the naturalistic model.) After we specified the high-level features of the decision (i.e., the cues that prompt recognition and the reinforcements that shape the model’s experience), it remained for us to define some of the low-level features of the decision. In particular, we hand-tuned two parameters—the first controls recall precision and the second sets a recognition threshold—and we selected a run-time setting that dictates which “fuzzy” selection strategy the model uses to choose among the courses of action it recognizes.

The Results

Although the number of reserve GBIs allocated to each city was not an immediate measure of interest for the human performance experiment, it was recorded nonetheless (and used to determine the deviations from normatively optimal behavior under each experimental condition). This measure was the basis for our comparison between the naturalistic model and actual human performance. The human performance data are given by scenario in terms of the average number of GBIs allocated to each city by 24 human subjects. Likewise, the naturalistic data represent the average number of GBIs allocated to each city by 25 “naturalistic subjects” (where the naturalistic subjects differed only by the random number seed used to drive the “fuzzy” course-of-action selection strategy). Unlike the human subjects, where the allocation results for each subject were drawn from a single trial conducted after a training scenario, the allocation results for each naturalistic subject represent an average across fifty trials. That is, we included the naturalistic model’s spin-up to proficiency in the analysis of its performance. For each scenario, we simply compared the mean number of GBIs allocated to each city by the human subjects and the naturalistic model. We quantified the comparison according to the methods of Schunn and Wallach (2001). Thus, each graph below indicates means, with associated standard errors, along with a coefficient of determination (r²) between model and human data and a measure of Root Mean Squared Scaled Deviation (RMSSD); both statistics are sketched below.
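For reference, this sketch reflects our reading of the two Schunn and Wallach (2001) statistics: r² is the squared Pearson correlation between model and human means (a measure of relative trend), and RMSSD scales each model-human deviation by the standard error of the corresponding human mean (a measure of exact fit). The function names are ours.

    import math

    def r_squared(model, human):
        """Coefficient of determination: the squared Pearson
        correlation between model and human means."""
        n = len(model)
        mm, mh = sum(model) / n, sum(human) / n
        cov = sum((m - mm) * (h - mh) for m, h in zip(model, human))
        vm = sum((m - mm) ** 2 for m in model)
        vh = sum((h - mh) ** 2 for h in human)
        return cov ** 2 / (vm * vh)

    def rmssd(model, human, human_se):
        """Root Mean Squared Scaled Deviation: each deviation is scaled
        by the standard error of the human mean, so a value near 1
        means the model deviates about as much as sampling noise."""
        k = len(model)
        return math.sqrt(sum(((m - h) / se) ** 2
                             for m, h, se in zip(model, human, human_se)) / k)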

[Two bar graphs appear here, plotting mean GBI allocation by city, with standard errors, for the model and the human subjects: Scenario 1 (cities 1–5; r² = .89, RMSSD = 3.4) and Scenario 4 (cities 1–4; r² = .88, RMSSD = 2.8).]

Figure 1. Model vs. Human performance on Scenario 1.

Figure 4. Model vs. Human performance on Scenario 4.

[Two more bar graphs appear here: Scenario 2 (cities 1–4; r² = .43, RMSSD = 2.7) and Scenario 3 (cities 1–5; r² = .81, RMSSD = 2.7).]

Figure 2. Model vs. Human performance on Scenario 2.

Figure 3. Model vs. Human performance on Scenario 3.

Probability Matching

The Behavior

We came to these results almost by accident. Working on an effort to model the control of an unmanned aerial vehicle (UAV) at the task level, we had to represent a task where targets are identified and classified as either friend or foe. Rather than use a probabilistic draw to represent the classification, we decided to see whether classification decisions could be learned by experience—that is, by repeatedly classifying targets in an uncertain environment. Although unintentional, this approach recapitulated the classical paradigm in experimental psychology used to study decision behaviors in binary choice tasks. Interestingly, where a normative model would predict that people would learn which option is more likely to be correct and choose it all of the time, optimizing their performance, robust empirical results (for reviews see Vulkan, 2000; Myers, 1976; Shanks, Tunney & McCarthy, 2002) suggest that people instead tend to choose each option with the same probability that it is correct. For example, if the foe classification is correct 70 percent of the time and the friendly classification is correct 30 percent of the time, people choose foe 70 percent of the time and friend 30 percent of the time, even though doing so results in fewer correct responses than choosing foe all of the time. This strategy is called probability matching.

The Naturalistic Model

We built a stand-alone Micro Saint model to generate a series of friend (blue) and foe (red) targets in a random order but in accord with some user-definable distribution. (The construction of such a model in Micro Saint is trivial.) We then integrated a naturalistic model that decided, target by target, how to classify each one. Decisions were prompted by a degenerate cue vector that indicated only that a decision was needed, and the outcome of the episode was determined according to whether the classification was in fact correct. This ensured that performance was dictated entirely by experience. In addition, we were able to specify differential reinforcements from the outcomes, such that a wrong decision might be represented by more than just a single episode in long-term memory. A sketch of this mechanism follows the results below.

The Results

We exercised the naturalistic model under a variety of probability distributions and reinforcement schedules. First, we examined environments with 35 percent red targets and 65 percent blue targets (35-65), then 65-35, and 50-50, with between 1 and 25 episodes added to long-term memory for each incorrect decision. Each of these sets of parameters was run with 10 different random seeds, with 500 decisions per run. The results for each set of parameters are based on the average of the 10 runs. We found that when the model had no reinforcement (i.e., a single episode added for each unsuccessful outcome), after the first 50 decisions it always picked the more likely option, which is maximizing behavior. For instance, if the target was red on 65 percent of the trials, then the model always picked red. When the number of episode traces added for an unsuccessful outcome was set to 5, 10, or 25, the model roughly did probability matching, for instance, choosing red about two-thirds of the time when the target was red on 65 percent of the trials. To see if probability matching held for other percentages, we tested 70-30, 80-20, 90-10, and 95-5, all in the 25-episode condition. Again, each of these sets of parameters was run with 10 different random seeds, with 500 decisions per run. The model did probability matching for all of these conditions, and the conditions were clearly differentiable from each other. To determine where and how the model’s behavior switched from maximizing to probability matching, we looked at runs in which the model added just 1 or 2 traces to memory under a variety of target distributions. The probability matching behavior was less reliable with 1 or 2 traces added. Sometimes the model simply maximized, and most of the time it overmatched, that is, picked the more likely option more often than probability matching would predict while still sampling the less likely option. The switch from maximizing to probability matching is not very smooth.
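The differential-reinforcement mechanism is compact enough to sketch. The version below is a self-contained toy, not the Micro Saint integration: the degenerate cue vector is omitted entirely, recognition reduces to counting successful and failed traces per action, traces_per_error stands in for the reinforcement schedule, and the proportional selection rule is only one of the possible “fuzzy” strategies. Whether this toy reproduces the exact maximizing-to-matching switch points we observed depends on those unstated settings.

    import random

    def run_binary_choice(p_red, traces_per_error, n_decisions=500, seed=0):
        """Simulate the friend/foe classification task and return the
        proportion of 'red' choices.

        Every decision deposits one episode trace; an incorrect
        decision additionally deposits (traces_per_error - 1) extra
        failure traces, which is the setting that was varied between
        1 and 25 in the runs described above."""
        rng = random.Random(seed)
        memory = []   # (action, success) pairs; cue vector omitted
        reds = 0
        for _ in range(n_decisions):
            # Recognition strength: +1 per success, -1 per failure.
            strength = {"red": 0, "blue": 0}
            for action, success in memory:
                strength[action] += 1 if success else -1
            # Fuzzy selection: proportional to positive strength,
            # guessing at random until something is recognized.
            positive = {a: s for a, s in strength.items() if s > 0}
            if positive:
                actions = list(positive)
                action = rng.choices(
                    actions, weights=[positive[a] for a in actions])[0]
            else:
                action = rng.choice(["red", "blue"])
            reds += action == "red"
            # Outcome and differential reinforcement.
            target = "red" if rng.random() < p_red else "blue"
            success = action == target
            memory.append((action, success))
            if not success:
                memory.extend([(action, False)] * (traces_per_error - 1))
        return reds / n_decisions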

Monitoring Behaviors

The Behavior

Another aspect of the UAV operator modeling effort concerned monitoring behaviors. Such behaviors were explicitly represented as tasks at several points in the UAV model, and implicitly represented at several other points in the model (e.g., when checking to see if a UAV has arrived at its waypoint, or looking for failure conditions). Despite the potentially significant impact of these tasks on overall performance, the vigilance required to monitor at various points in the UAV environment was represented at a very abstract level (e.g., as a task of given duration) or not at all. To deepen the fidelity of the UAV operator model, we investigated whether the vigilance required to perform a particular monitoring task could be learned dynamically from experience, so that a task that happened to require more constant attention would be attended more frequently than a task that happened to require less attention.

The Naturalistic Model

To stimulate this kind of behavior, we developed a stand-alone Micro Saint model in which we could simulate a periodic “square wave” of activity of varying duration and frequency. Given some activity so defined, a naturalistic model would then “recognize” moment by moment whether to attend to the activity or not, with the decision reinforced accordingly (i.e., according to whether the model correctly decided to attend to activity or to ignore inactivity). We developed several different versions of the model, each using a different set of cues to prompt the decision. In the first version, the decision was prompted by a single cue that represented the “subjective” passage of time in terms of how long activity had been observed. This model led to some intuitively satisfying performance. In a later version we added a second cue that represented the “objective” passage of time (using a cyclical clock of a much longer period) independent of the observation of the activity. Although these two cues led to better performance than the single cue, we realized that relations between the duration of the activity and the period of the absolute clock could lead to artificially simple learning, so we eliminated the “objective” cue in favor of a second subjective cue that represented how long activity had not been observed. In this way we ensured that a uniform model would be able to attune itself to arbitrary ratios of duration to frequency.

The Results

We ran the model under various conditions (e.g., specifying different square waves of “activity”) and under different settings (e.g., different reinforcement schedules and recognition thresholds—which turn out to affect learning rates). We also compiled and analyzed results using many different measures. The two most revealing turned out to be a measure of periodicity and a measure of weighted error (total errors turned out to be too noisy). Periodicity measures how frequently the model switches between checking and ignoring activity. It is calculated by dividing the number of decisions in a block by the number of switches between checking and ignoring. Ideally, the period of the model’s checking behavior will match the period of the activity it is checking. However, periodicity does not capture how well the model’s behavior aligns with the activity, so the measure of weighted error attempts to capture the degree of this offset: errors that occur near the time that activity switches on or off, which would indicate only a small offset, are weighted less than errors that occur in the middle of activity being on or off, which would indicate a larger offset. Both measures are sketched below. In the interest of space, we simply summarize these results graphically.
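As an illustration of how we understand these two measures, here is a short sketch. The function names and the linear weighting-by-distance scheme are our assumptions; the text above defines the measures only qualitatively.

    def periodicity(decisions):
        """Number of decisions in a block divided by the number of
        switches between checking (True) and ignoring (False)."""
        switches = sum(1 for prev, cur in zip(decisions, decisions[1:])
                       if prev != cur)
        return len(decisions) / max(switches, 1)

    def weighted_error(decisions, activity):
        """Sum of errors weighted by distance from the nearest on/off
        transition of the activity signal: errors near a transition
        (a small offset) count less than errors mid-phase (a large
        offset). The linear weighting is an assumption."""
        transitions = [i for i, (prev, cur) in
                       enumerate(zip(activity, activity[1:]), start=1)
                       if prev != cur]
        total = 0.0
        for i, (decided, active) in enumerate(zip(decisions, activity)):
            if decided != active:   # checked inactivity, or missed activity
                total += min((abs(i - t) for t in transitions), default=0)
        return total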

[A figure appears here summarizing the vigilance results.]

Figure 5. Results for a naturalistic model learning a vigilance task under different reinforcement schedules and recognition thresholds.

Conclusions

We argue that each of the three foregoing applications reflects some degree of human-like behavior. The resource allocation model exhibits the variability of human behavior and a tendency toward non-optimal allocation; likewise for the probability matching model. The monitoring model shows smooth learning and the ability to affect learning rates and, by extension, to represent some individual differences in a straightforward manner. In isolation, these results might seem underwhelming, but in the context of a larger, task-network model of human performance it is easy to see how such simple component behaviors might have significant ramifications for overall performance. Moreover, we argue that the alternative representation of such decisions in terms of a “tactical” rule set or a probabilistic decision, specified in advance, would fail to capture the dynamic nature of these learned behaviors. That is the lens through which we view our efforts.

These kinds of results are far from vindicating a naturalistic view of decision making. Having expended a great deal of effort in the past trying to justify the “naturalistic” nature of our models, we have come to despair at such prospects. But we believe these applications do establish, prima facie, that a lightweight cognitive widget can engender a variety of reasonable, human-like behaviors. Letting go of a perfect fit, or an airtight mapping between theory and computational model, has allowed us to extend proven human performance modeling tools.

References

Hintzman, D. L. (1984). MINERVA 2: A simulation model of human memory. Behavior Research Methods, Instruments, & Computers, 16, 96-101.

Hintzman, D. L. (1986a). Judgments of frequency and recognition memory in a multiple-trace memory model. Eugene, OR: Institute of Cognitive and Decision Sciences.

Hintzman, D. L. (1986b). "Schema abstraction" in a multiple-trace memory model. Psychological Review, 93(4), 411-428.

Klein, G. (1998). Sources of Power: How People Make Decisions. Cambridge, MA: The MIT Press.

McDermott, P., Hutchins, S., Barnes, M., Koenecke, C., Gillan, D., & Rothrock, L. (2003). The presentation of risk and uncertainty in the context of National Missile Defense simulations. Proceedings of the Human Factors and Ergonomics Society 47th Annual Meeting, Denver, CO.

Myers, J. L. (1976). Probability learning and sequence learning. In W. K. Estes (Ed.), Handbook of Learning and Cognitive Processes: Approaches to Human Learning and Motivation (pp. 171-205). Hillsdale, NJ: Erlbaum.

Schunn, C. D., & Wallach, D. (2001). Evaluating goodness-of-fit in comparisons of models to data. Online manuscript. http://lrdc.pitt.edu/schunn/gof/index.html

Shanks, D. R., Tunney, R. J., & McCarthy, J. D. (2002). A re-examination of probability matching and rational choice. Journal of Behavioral Decision Making, 15, 233-250.

Vulkan, N. (2000). An economist’s perspective on probability matching. Journal of Economic Surveys, 14, 101-118.

Warwick, W., McIlwaine, S., Hutton, R. J. B., & McDermott, P. (2001). Developing computational models of recognition-primed decision making. Proceedings of the Tenth Conference on Computer Generated Forces, Norfolk, VA: SISO.

Zsambok, C. E., & Klein, G. (Eds.). (1997). Naturalistic Decision Making. Mahwah, NJ: Lawrence Erlbaum Associates.
