Pathways for Theoretical Advances in Visualization

IEEE VIS 2016 Panel

Min Chen, University of Oxford, UK
Georges Grinstein, University of Massachusetts Lowell, USA
Chris R. Johnson, University of Utah, USA
Jessie Kennedy, Edinburgh Napier University, UK
Tamara Munzner, University of British Columbia, Canada
Melanie Tory, Tableau Software, USA

ABSTRACT

There is little doubt that having a theoretic foundation will benefit the field of visualization, including its main subfields (information visualization, scientific visualization, and visual analytics) as well as many domain-specific applications such as software visualization and biomedical visualization. Since there has been a substantial amount of work on taxonomies and conceptual models in the visualization literature, as well as some recent work on theoretic frameworks, such a theoretic foundation is not merely an airy-fairy ambition. In this panel, the panellists will focus on the question "How can we build a theoretic foundation for visualization collectively as a community?" In particular, they will envision pathways in four different aspects of a theoretic foundation, namely (i) taxonomies and ontologies, (ii) principles and guidelines, (iii) conceptual models and theoretic frameworks, and (iv) quantitative laws and theoretic systems.

Keywords: Theory of visualization, information visualization, scientific visualization, visual analytics, theoretical foundation, concept, taxonomy, ontology, principle, guideline, model, measurement, framework, quantitative law, theoretic system.

1 INTRODUCTION AND MOTIVATION

More than a decade ago, Johnson proposed the "Theory of Visualization" as one of the top research problems in visualization [7]. Since then there have been several focused events, including two panels, in 2010 and 2011 [18, 1], and two workshops, in 2010 (https://eagereyes.org/blog/2010/infovis-theory-workshop) and 2016 (https://sites.google.com/site/drminchen/home/events). However, it is still a common notion that the "Theory of Visualization" is a problem for a few individual researchers, and that its solution, perhaps in the form of some theorems or laws, may be too distant from practice to be useful. During the recent ATI Symposium on Theoretical Foundation of Visual Analytics, the discussion on the need for building a theoretical foundation attracted a wide range of opinions, from "Visualization should not be physics-envy" to "It is irresponsible for academics not to try." Gradually, the attendees converged to a common understanding that a theoretical foundation consists of several aspects, and that every visualization researcher should be able to make direct contributions to the theoretical foundation of visualization. In particular, the event identified four major aspects (cf. [8]):


• Taxonomies and Ontologies: In scientific and scholarly disciplines, a collection of concepts is commonly organized into a taxonomy or an ontology. In the former, concepts are known as taxa and are typically arranged hierarchically in a tree structure. In the latter, concepts, often in conjunction with their instances, attributes, and other entities, are organized into a schematic network, where edges represent various relations and rules. (A minimal illustrative sketch of this structural contrast appears just before Figure 1.)

• Principles and Guidelines: A principle is a law or rule that has to be followed, and is usually expressed as a qualitative statement. A guideline describes a process or a set of actions that may lead to a desired outcome, or actions to be avoided in order to prevent an undesired outcome. The former usually implies a high degree of generality and certainty in the causality concerned, while the latter suggests that a causal relation may be subject to specific conditions.

• Conceptual Models and Theoretic Frameworks: The terms framework and model have broad interpretations. Here we consider a conceptual model to be an abstract representation of a real-world phenomenon, process, or system, featuring different functional components and their interactions. A theoretic framework provides a collection of measurements, together with basic operators and functions for working with those measurements. The former describes complex causal relations in the real world in a tentative manner, while the latter provides a basis for evaluating different models quantitatively.

• Quantitative Laws and Theoretic Systems: A quantitative law describes a causal relation among concepts using a set of measurements and a computable function, and is confirmed under a theoretic framework. Under such a framework, a conceptual model can be transformed into a theoretic system through axioms (postulated quantitative principles) and theorems (confirmed quantitative laws). Unconfirmed guidelines are thus conjectures, and contradictory guidelines are paradoxes.

In the field of visualization, it is estimated that there are currently some thirty papers on taxonomies and ontologies; several hundred principles and guidelines recommended by various books, research papers, and online media; more than ten conceptual models; a few theoretic frameworks; and a few quantitative laws, that is, mathematically confirmed guidelines. There is little doubt that much more effort will be required to address the problem of "Theory of Visualization" [7]. Using the two questions (shown in Figure 1) posed in the ATI Symposium as an example, one may ask: (i) what taxonomy would encompass all concepts relevant to the two questions; (ii) what guideline could be followed by users answering such a question in practice; (iii) what model could be used to represent and simulate a real-world problem involving statistics, visualization, computation, and interaction; or (iv) what quantitative measurement would enable us to estimate an optimal point of "when" in the two questions.
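To make the structural distinction in the first aspect concrete, the sketch below encodes a toy taxonomy as a tree of taxa and a toy ontology as a set of typed relation triples. This is a minimal illustration written for this text, not part of the panel material; all concept names, relation types, and helper functions in it are hypothetical examples rather than entries from any published taxonomy or ontology of visualization.

    from dataclasses import dataclass, field

    @dataclass
    class Taxon:
        # A node in a taxonomy: each taxon has exactly one parent,
        # so the classification as a whole forms a tree.
        name: str
        children: list["Taxon"] = field(default_factory=list)

    # A hypothetical fragment of a taxonomy of visual representations.
    root = Taxon("visual representation", [
        Taxon("chart", [Taxon("bar chart"), Taxon("line chart")]),
        Taxon("map"),
    ])

    def taxa(node):
        # Depth-first enumeration of all taxa in the tree.
        yield node.name
        for child in node.children:
            yield from taxa(child)

    # An ontology generalizes the tree: a concept may participate in many
    # typed relations, so (subject, relation, object) triples form a network.
    ontology = {
        ("line chart", "is-a", "chart"),
        ("line chart", "encodes", "time series"),
        ("time series", "has-attribute", "number of data points"),
    }

    def related(concept, relation):
        # All concepts linked to `concept` by an edge of type `relation`.
        return {o for (s, r, o) in ontology if s == concept and r == relation}

    print(list(taxa(root)))
    print(related("line chart", "encodes"))  # {'time series'}

The point the sketch highlights is that a taxon has exactly one parent, whereas an ontology concept may participate in arbitrarily many typed relations, which is why a network rather than a tree is the natural representation.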

[Figure 1: Two questions posed at the beginning of the ATI Symposium on Theoretical Foundation of Visual Analytics. (a) "When do we pause statistics and start visualization?", illustrated with a share price chart of Company X (2002 to 2004) and a down-sampling axis over the number of data points in a time series (10^0 to 10^9). (b) "When do we pause algorithmic computing and start interaction?", illustrated with an axis of adding complexity over the number of lines, or other units of complexity measurement (10^0 to 10^9).]

These questions clearly suggest that a theoretical foundation needs more than a few theorems or laws, and that the problem of "Theory of Visualization" should be solved through a collective effort of the VIS community rather than through the endeavours of a few individuals.

Main Topic. In this panel, we bring together visualization scientists with expertise in these four aspects. They will provide their assessment of the state of the art in each aspect, identify gaps and challenges, and outline opportunities and pathways. In comparison with the two previous panels (in 2010 and 2011), this panel will move on from the question of "why theory" and will not dwell on the question of "which theory". Instead, it will focus on "how to make new theoretical advances as a community".

2 PANEL FORMAT AND SCHEDULE

The panel will adopt the traditional format of VIS panels. It consists of a convener and five panellists, each of whom will champion one aspect of the theoretic foundation while exploring its connections with the other aspects. We plan to allocate 50 minutes for interaction between the audience and the panellists, and we expect a very lively Q&A session. The outline schedule is given below:

• Introduction [Chen (convener), 5 min.]
• Taxonomies and ontologies [Kennedy, 7 min.]
• Principles and guidelines [Munzner, 7 min.]
• Conceptual models and theoretic frameworks [Johnson, 7 min.]
• Conceptual models and theoretic frameworks [Tory, 7 min.]
• Quantitative laws and theoretic systems [Grinstein, 7 min.]
• Q&A session [audience and panellists, 50 min.]
• Concluding remarks [panellists, 10 min. (2 min. each)]

3 POSITION STATEMENTS

Jessie Kennedy on Taxonomies and Ontologies

For millennia, humans have been classifying things in the world around them into concepts, which they describe and name in order to communicate them. The most significant and enduring effort is the classification of life on Earth, which commenced in Aristotle's time, became mainstream through the work of Linnaeus, and continues to this day as new species are identified and alternative classifications of existing species are proposed.

Alternative classifications (taxonomies) arise over time owing to differing opinions on the importance of the differentiating characteristics used in creating the concepts (taxa). This usually results from new information becoming available, often through technological advances. As a consequence, the same organism may be classified according to different taxonomic opinions and subsequently carry several alternative names, leading to miscommunication. Newer classifications are usually improvements on previous ones, but sometimes the existence of alternative classifications reflects disagreement about how to interpret the data on which a classification is based.

Taxonomies play an increasing role in understanding the field of visualization. They provide a useful means of bringing order to the range of existing visualization systems, tools, and techniques, and they are frequently adopted in literature surveys to categorise existing work. Many taxonomies have been proposed in the visualization field, focusing on different criteria: data type/model, task (visual/user), visual representation/encoding, interaction mechanism, user characteristics, and combinations of these, often specialized by domain. These taxonomies have been developed to aid the design, selection, or evaluation of visualization tools and techniques.

I will present an overview of taxonomies in visualization and argue that we should consider taxonomy as an investigative science, with taxonomic classifications representing partial and evolving hypotheses rather than static identifications of absolute taxa. Taxonomies are therefore fundamental tools that aid our understanding and assist in the communication and development of the field of visualization.

Tamara Munzner on Principles and Guidelines

The depth and richness of the set of principles and guidelines articulated within the visualization literature continues to grow as the community matures. Writing a book allows a large scope for synthesizing and codifying knowledge [9, 12, 16, 17], and despite a profusion of recent efforts we still need far more of them as a field. However, the time commitment required for a project of that scale can be daunting. A more manageable way to extend the theoretical foundations of visualization is through the Theory paper type, where papers aim directly at extending existing theories as their core contribution. The number of individual researchers writing such papers is slowly increasing, but the proportion of authors providing this kind of contribution is still small compared with the more popular paper types. I strongly encourage more people to try their hand at this kind of contribution, including writing "meta-papers" that address the question of how to write papers, which can be very rewarding [10, 11]. An even more accessible route, well within the reach of most authors, is to contribute what I will call "micro-theory": guidelines or principles offered as secondary contributions within a paper whose main contribution is not purely theoretical. For example, our proposal for design study methodology [15] argues that it is reasonable to expect part of the contribution of a Design Study paper to confirm, refute, extend, or refine previous guidelines. In another example, an Evaluation paper with an empirical study in a laboratory setting at its core may result in guidelines for what conditions should trigger a switch from one visual representation to another [5].
I particularly call for more work along these lines by a broad cross-section of people within the field!

Chris R. Johnson on Conceptual Models and Theoretic Frameworks

In science, a model can be a representation of an idea, a process, or a system that is used to describe and explain phenomena that sometimes cannot be experienced directly. Models are a visual way of linking theory with observation, and they guide research by being

simplified representations of an imagined reality that enable predictions to be developed and tested by experiment. Models are central to what scientists do, both in their research and when communicating their explanations [3]. Together with theory and experimentation, computational science now constitutes the "third pillar" of scientific inquiry, enabling researchers to build and test models of complex phenomena. Advances in computing and connectivity make it possible to develop computational models and to capture and analyze unprecedented amounts of experimental, observational, and simulation data, addressing problems previously deemed intractable or beyond imagination [14].

My assertion is that models have played, and continue to play, an important role in visualization research as a guide for developing new techniques and systems. In their paper on verifying volume rendering using discretization error analysis, Etiene et al. [2] "derive the theoretical foundations necessary for verifying volume rendering with order-of-accuracy and convergence analysis ... and ... explain how to exploit these theoretical foundations to perform a practical verification of implemented volume rendering algorithms, such that it can be easily used for the verification of existing volume rendering frameworks." This is a wonderful example of theoretical models being used to verify the accuracy of visualization algorithms and software.

Melanie Tory on Conceptual Models and Theoretic Frameworks

Models of human behavior help us to describe, understand, and predict what people will do in given circumstances. As interface designers, we can use such models to reason about people's workflows and to explain why some workflows are more efficient or effective than others. Behavioural models thereby enable us to predict what information and interaction tools are likely to best support human activities. For example, in [6] we evaluated a personal visualization approach, specifically a visualization of fitness tracker data embedded within one's Google calendar, for providing feedback to people about their fitness-related activities. Based on the information gathered through a field trial, we developed a model of the behaviour feedback process and of the role of feedback tools (like our on-calendar visualization) in helping people to understand and change their activities. This model helped us to explain why the on-calendar approach was more effective than a traditional fitness feedback tool; more importantly, it provided a theoretical foundation from which to derive design guidelines and principles for behaviour feedback tools in general. Similarly, sense-making models (e.g., [13]) have played an important role in supporting the design of interactive analysis tools.

Behaviour models are most often developed by observing natural human activities in the wild, and the techniques for doing so are well established in other disciplines. Social scientists have established research methods for collecting and analyzing qualitative data to build models or theories. For example, Grounded Theory [4] involves observing human practices in the wild (to "ground" the theory in real-world data) and then building a model to describe the observed practices. The model is built in a bottom-up fashion by "coding" the data: iteratively assigning keywords to events or statements of interest and then consolidating and organizing those codes.
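To illustrate the coding step just described, the following minimal sketch shows schematically how open codes attached to observed statements can be consolidated into higher-level categories. The statements, codes, and categories are invented for this illustration; they are not taken from the study in [6] or from any actual Grounded Theory analysis.

    from collections import defaultdict

    # Step 1: open coding -- tag each observed statement with one or more
    # codes (keywords capturing what is of interest in the statement).
    observations = [
        ("I check the chart every morning", ["routine", "monitoring"]),
        ("Seeing the gap made me walk more", ["reflection", "behaviour change"]),
        ("I ignored it during busy weeks", ["disengagement"]),
    ]

    # Step 2: axial coding -- consolidate related codes under shared
    # higher-level categories of the emerging model.
    categories = {
        "monitoring": "feedback use",
        "routine": "feedback use",
        "reflection": "sense making",
        "behaviour change": "outcome",
        "disengagement": "outcome",
    }

    # Group statements by category to see which part of the emerging model
    # each piece of evidence supports.
    by_category = defaultdict(list)
    for statement, codes in observations:
        for code in codes:
            by_category[categories[code]].append(statement)

    for category, statements in by_category.items():
        print(category, "->", len(statements), "statements")

In practice the consolidation is performed iteratively by human analysts rather than by a fixed lookup table; the sketch only captures the bottom-up direction of the process.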
I argue that our field needs more pre-design empirical work to understand human behaviour, and that Grounded Theory and related methods are an appropriate way to extract foundational models from such studies.

Georges Grinstein on Quantitative Laws and Theoretic Systems

There is no debate that we need to measure. A search of the web and of scientific papers for "visualization theory", for example, returned a very wide variety of topics. All returned items contained the

term "measure" and its variants. All included measures of some kind, such as reliability, validation, accuracy, correctness, limits, or optimality; some discussed these in the context of a framework, a model, or even a theory; and of course most included the term "quantify" and its variants. Most aspects of visualization being measured involved human insight, understanding, performance, creativity, knowledge, load, learning, and many other attributes.

What does it mean to have a theory?

1. There are many kinds of theories, so which ones are we referring to? There are generative, prescriptive, algorithmic, and explanatory theories. What is the value of each?

2. Most papers in visualization tend to be focused on engineering as opposed to science. What is the difference?

3. There have been a few papers, panels, and workshops on theories of visualization (visualization in its broadest sense), but not many. Is there a reluctance to use the term theory?

I will discuss the first two questions but will not try to answer the third, as that is a more philosophical and psychological question. I will argue that a theory, no matter which kind, stands as a challenge to be disproved, thereby inviting correction or replacement. In my short presentation I will show that we have in fact been building, and are still building, theories of visualization. I will show examples of such theories and describe the conclusions they could lead to, highlighting their value.

4 BIOGRAPHIES

Min Chen developed his academic career in Wales between 1984 and 2011. He is currently Professor of Scientific Visualization at Oxford University and a fellow of Pembroke College. His research interests include visualization, computer graphics, and human-computer interaction. He has co-authored some 190 publications, including recent contributions in the areas of volume graphics, video visualization, face modelling, automated visualization, and the theory of visualization. He has been awarded over £11M in research grants from EPSRC, JISC (AHRC), TSB (NERC), the Royal Academy, the Welsh Assembly Government, HEFCW, industry, and several UK and US government agencies. He is currently leading visualization activities at the Oxford e-Research Centre, working on a broad spectrum of interdisciplinary research topics, ranging from the sciences to sports, and from digital humanities to cybersecurity. His services to the research community include: papers co-chair of IEEE Visualization 2007 and 2008, Eurographics 2011, and IEEE VAST 2014 and 2015; co-chair of Volume Graphics 1999 and 2006 and EuroVis 2014; associate editor-in-chief of IEEE Transactions on Visualization and Computer Graphics; editor-in-chief of Wiley Computer Graphics Forum; and co-director of the Wales Research Institute of Visual Computing. He is a fellow of the British Computer Society, the European Computer Graphics Association, and the Learned Society of Wales. URL: https://sites.google.com/site/drminchen/

Georges Grinstein is Emeritus Professor of Computer Science at UMass Lowell, was head of its Bioinformatics Program and Director of its Institute for Visualization and Perception Research, and is now Chief Science Officer of Weave Visual Analytics. He received his Ph.D. in Mathematics from the University of Rochester in 1978. His work is broad and interdisciplinary, covering the perceptual and cognitive foundations of visualization, very high-dimensional data visualization, visual analytics, and applications.
He has spent over 40 years in academia with extensive consulting; has held over 300 research grants; has products in use nationally and internationally, several patents, numerous publications in journals and conferences, and a book on interactive data visualization; has founded several companies; and has organized or chaired national and international conferences and workshops in computer graphics, visualization, bioinformatics, and data mining. He has given numerous

keynotes and has mentored over 40 doctoral students and hundreds of graduate students. He has been on the editorial boards of several journals, a member of ANSI and ISO, a NATO expert, and a technology consultant for various government agencies and commercial organizations. For the last ten years he has co-chaired the IEEE VAST Challenges in visual analytics and directed the development of Weave, an open-source, web-based, interactive, collaborative visual analytics system incorporating numerous innovations.

Chris R. Johnson is a Distinguished Professor of Computer Science and founding director of the Scientific Computing and Imaging (SCI) Institute at the University of Utah. He also holds faculty appointments in the Departments of Physics and Bioengineering. His research interests are in the areas of scientific computing and scientific visualization. In 1992, Dr. Johnson founded the SCI research group, now the SCI Institute, which has grown to employ over 200 faculty, staff, and students. Professor Johnson serves on a number of international journal editorial boards, as well as on advisory boards to national and international research centers. He is a Fellow of AIMBE (2004), AAAS (2005), SIAM (2009), and IEEE (2014). He received a Young Investigator's (FIRST) Award from the NIH in 1992, the NSF National Young Investigator (NYI) Award in 1994, the NSF Presidential Faculty Fellow (PFF) Award from President Clinton in 1995, a DOE Computational Science Award (1996), the Presidential Teaching Scholar Award (1997), the Governor's Medal for Science and Technology from Utah Governor Michael Leavitt, the Utah Cyber Pioneer Award, the IEEE Visualization Career Award, the IEEE IPDPS Charles Babbage Award, the IEEE Sidney Fernbach Award, and the University of Utah's most prestigious faculty award, the Rosenblatt Prize.

Jessie Kennedy has been with Edinburgh Napier University since 1986, where she has held the post of professor since 2000. She was Director of the Institute for Informatics and Digital Innovation from 2010 to 2014 and is currently Dean of Research and Innovation at Edinburgh Napier. She has published widely, with over 100 peer-reviewed publications; has received over £2 million in research funding; has seen 12 PhD students through to completion; has been programme chair, committee member, and organiser of many international conferences; and acts as a reviewer for many national computer science funding bodies. She was a keynote speaker at VIZBI 2012 and IV 2012, was Programme Chair for BioVis 2011 and General Chair for BioVis 2012 and 2013, and has been a BioVis steering committee member since 2014. She has been a member of the EPSRC Peer Review College since 1996 and is a Fellow of the British Computer Society.

Tamara Munzner is a professor in the University of British Columbia Department of Computer Science, and holds a PhD from Stanford. She has been active in visualization research since 1991 and has published over sixty papers and book chapters. Her book Visualization Analysis and Design appeared in 2014. She co-chaired InfoVis in 2003 and 2004, co-chaired EuroVis in 2009 and 2010, and is chair of the InfoVis Steering Committee and the VIS Executive Committee. Her research interests include the development, evaluation, and characterization of information visualization systems and techniques.
She has worked on problem-driven visualization in a broad range of application domains, including genomics, evolutionary biology, geometric topology, computational linguistics, large-scale system administration, web log analysis, and journalism. Her technique-driven interests include graph drawing and dimensionality reduction. Her evaluation interests include both controlled experiments in a laboratory setting and qualitative studies in the field.

Melanie Tory is a Senior Research Scientist at Tableau Software. Before joining Tableau, Melanie was an Associate Professor in

visualization at the University of Victoria. She earned her PhD in Computer Science from Simon Fraser University in 2004 and her BSc from the University of British Columbia in 1999. Melanie is Associate Editor of IEEE Computer Graphics and Applications and has served as Papers Co-chair for the IEEE InfoVis and ACM Interactive Surfaces and Spaces conferences. Melanie's research focuses on understanding interactive visual data analysis and on exploring techniques to help people analyze data more effectively. This includes intuitive interactions with visualizations and the design and evaluation of tools that support the holistic data analysis process, including sensemaking, analytical guidance, and collaboration. Much of her empirical work involves building models to describe and explain human interactions with data artifacts.

REFERENCES

[1] Ç. Demiralp, D. Laidlaw, C. Ware, and J. van Wijk. Theories of visualization: are there any? IEEE VisWeek Panel, 2011.
[2] T. Etiene, D. Jönsson, T. Ropinski, C. Scheidegger, J. L. D. Comba, L. G. Nonato, R. M. Kirby, A. Ynnerman, and C. T. Silva. Verifying volume rendering using discretization error analysis. IEEE Transactions on Visualization and Computer Graphics, 20(1):140–154, 2014.
[3] Faculty of Education, University of Waikato. The Science Learning Hub. Accessed June 2016.
[4] B. Glaser and A. L. Strauss. The Discovery of Grounded Theory: Strategies for Qualitative Research. Aldine Transaction, Chicago, 1967.
[5] J. Heer, N. Kong, and M. Agrawala. Sizing the horizon: The effects of chart size and layering on the graphical perception of time series visualizations. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI), pages 1303–1312. ACM, 2009.
[6] D. Huang, M. Tory, and L. Bertram. A field study of on-calendar visualizations. In Proceedings of Graphics Interface 2016, May 2016.
[7] C. Johnson. Top scientific visualization research problems. IEEE Computer Graphics and Applications, 24(4):13–17, 2004.
[8] Z. Liu and J. Stasko. Theories in information visualization: What, why and how. In Workshop on the Role of Theory in Information Visualization, 2010.
[9] I. Meirelles. Design for Information. Rockport, 2013.
[10] T. Munzner. Process and pitfalls in writing infovis research papers. In A. Kerren, J. T. Stasko, J.-D. Fekete, and C. North, editors, Information Visualization: Human-Centered Issues and Perspectives, volume 4950 of LNCS, pages 133–153. Springer-Verlag, 2008.
[11] T. Munzner. A nested model for visualization design and validation. IEEE Transactions on Visualization and Computer Graphics (Proc. InfoVis 2009), 15(6):921–928, 2009.
[12] T. Munzner. Visualization Analysis and Design. AK Peters Visualization Series, CRC Press, 2014.
[13] P. Pirolli and S. Card. The sensemaking process and leverage points for analyst technology as identified through cognitive task analysis. In Proceedings of the International Conference on Intelligence Analysis, volume 5, pages 2–4, 2005.
[14] D. Reed, R. Bajcsy, J. M. Griffiths, J. Dongarra, and C. R. Johnson. Computational Science: Ensuring America's Competitiveness. President's Information Technology Advisory Committee (PITAC), June 2005. www.nitrd.gov/pitac/reports.
[15] M. Sedlmair, M. Meyer, and T. Munzner. Design study methodology: Reflections from the trenches and the stacks. IEEE Transactions on Visualization and Computer Graphics (Proc. InfoVis 2012), 18(12):2431–2440, 2012.
[16] M. O. Ward, G. Grinstein, and D. Keim. Interactive Data Visualization: Foundations, Techniques, and Applications. A K Peters, 2nd edition, 2015.
[17] C. Ware. Information Visualization: Perception for Design. Morgan Kaufmann, 3rd edition, 2013.
[18] C. Ziemkiewicz, P. Kinnaird, R. Kosara, J. Mackinlay, B. Rogowitz, and J. S. Yi. Visualization theory: Putting the pieces together. IEEE VisWeek Panel, 2010.
