AlgoVis: A Tool for Learning Computer Science Concepts Using Algorithm Visualization

Sadia Afroz
Department of Computer Science, Drexel University
Philadelphia, PA 19104
[email protected]

ABSTRACT

In general, algorithm visualization (AV) with animation helps learners construct a mental model of the dynamic behavior of an algorithm in action. In this project we propose an AV system for elementary algorithms that allows both active and passive learning. Our AV system, AlgoVis, illustrates an algorithm in the form of animation, code, and plain-English explanation. Informal evaluation with users suggested that viewing the same algorithm in several forms helps to build robust conceptual knowledge about algorithms in general. New ways of improving AV systems are also discussed.

INTRODUCTION

Algorithm visualizations (AV) are graphical illustrations of the dynamic behavior of computer algorithms. Visual and animated representation of theoretical algorithmic concepts is a powerful alternative to static presentation because it helps learners better understand how algorithms work. AV also helps in the process of designing and debugging algorithms. For computer science students, AV helps to quickly learn algorithms in greater depth [4]. Although many visual tools are available, their effectiveness in algorithm learning is still questionable [18]. The major shortcomings of current approaches are:

1. Lack of resources to interpret the graphical representation: Most tools do not properly explain the activity being shown and/or do not explain the relation between the visual presentation and the actual algorithm [14].

2. Lack of active involvement with the animations: Studies of current algorithm visualization tools revealed that learners who are actively engaged with the visualization technology have a better understanding of the concepts than those who passively view visualizations [11].

3. Lack of explanation of necessary concepts: Proper understanding of an algorithm requires understanding its performance analysis, its worst-case/best-case scenarios, and its comparison with other similar algorithms.

4. Time requirement: Complex AV systems require too much time and effort to learn and use effectively. The effectiveness of an AV system depends on the manner and degree to which the learner becomes engaged in activities [7].

In this research, a new AV system is proposed that ensures active learning of algorithms through active interaction with the visualization. The goal of this system is to provide multiple levels of understanding about an algorithm. With our system a student would be able to:

1. Understand what happens in each step of an algorithm,
2. Identify worst-case and best-case behavior,
3. Compare an algorithm's performance with other similar algorithms.

Our system allows both passive and active learning, so learners can choose an animation to run on a predefined data set or execute the algorithm on dynamic data. The system also shows the pseudocode for the currently executing step of the algorithm along with the animation. Animation combined with this conceptual information improves understanding of the algorithm shown.

BACKGROUND

Based on the level of learner interaction with the visualization, current algorithm visualization tools fall into the following categories:

1. Fully passive: Students observe an expert's version of an algorithm animation as an audience.
2. Partially active: Students can choose the input data on which the algorithm runs, take quizzes, and predict the next step.
3. Fully active: Students construct animations.

Fully passive: Fully passive tools were mostly used in early empirical studies to understand the effectiveness of animation over textual information. No significant learning improvements were observed in these experiments [18, 7, 11, 4].

Partially active: Wolfe's TERA system [17] is an interactive tool that allows students to compare the effects of different rendering algorithms on a graphical object. TERA has two modes, an explore mode and a quiz mode. In quiz mode, students are presented with a scene and must determine the rendering algorithm used to produce that scene. The success of the TERA system inspired more researchers to explore AV with active involvement and instant feedback. JHAVÉ [13] is another such system that allows students to explore algorithms by viewing visual representations of data, controlling movement, and responding to pop-up questions. Most visualization systems were designed for visualizing a special kind of algorithm, or were built for a new architecture. Among them we can list the client-server architecture that introduced the animation-server idea (Mocha [1]), animations of parallel algorithms (NESL [2]), geometric algorithms (GASP [20]), animation of data structures (SKA [8], JAWAA [15]), and systems that animated code in a specific programming language (BALSA [3], and an early version of Leonardo [5]).

Fully active: Douglas, Hundhausen, and McKeown [6] found that the visual depictions commonly used in algorithm visualizations do not accord well with student-generated conceptualizations of the algorithms. To eliminate this incongruence and to create more involvement with the visualization system, many frameworks support the creation of animations. The Tango framework has several implementations: XTango, Polka, and Samba. XTango [19], the X11 Window implementation of Tango, allows users to develop their own animations; it requires implementing the algorithm in C and marking the important events to be portrayed during the execution of the algorithm. XTango also has a large collection of built-in animations. Polka is particularly well suited for animating parallel programs and provides its own high-level abstractions to make creating animations easier and faster than with many other systems. In the Animal system, animations are built using a graphical editor [16]. Like Tango, Animal can be used to build many representations; Animal animations can be generated without actually implementing the algorithm in code, and Animal has specialized support for displaying code or pseudocode in the animation. CAROUSEL [10, 9] allows students to build their own animations, share them with the community, evaluate each other's animations, and comment on others' visualizations. The What You See Is What You Code (WYSIWYC) system in ALVIS Live [12] gives dynamic feedback on each line's correctness while students are coding their animation. Although creating one's own animation provides better involvement with the AV, it is time-consuming, and how much it actually helps students understand the concepts still needs to be studied.

Our system is an active learning system that can be used by both intermediate and novice learners. It allows learners to learn about an algorithm by watching predefined animations or by running an animation on their own data set. It shows the pseudocode of the algorithm and provides explanations of the code, which gives the learner a holistic picture of an algorithm.

Figure 1. AlgoVis: Algorithm visualization system showing animation of insertion sort

SYSTEM OVERVIEW

The goal of the algorithm visualization system is to demonstrate how an algorithm works and how it changes the state of the data structure it uses. The system is implemented using the Java Swing framework, chosen to achieve platform independence.

Scope

The project mainly targets students who are beginning to learn algorithms. Our goal is to build a conceptual base for algorithm learning using animations of basic algorithms. This project considered elementary algorithms (for example, sorting) for visualization. A one-dimensional array data structure is assumed for the algorithms. The system may not be suitable for algorithms with complex data structures, for example, graph algorithms.

User Interface

The system's user interface consists of a main menu and four panels:

1. Visualization panel, where the animation is shown.
2. Code panel, where the pseudocode of the currently animated algorithm is shown and highlighted in step with the animation.
3. Control panel, which holds the buttons that control the animation.
4. Explanation panel, which shows an explanation of the line of code currently executing.

Figure 1 and Figure 2 show the use of the four panels to visualize the animation of insertion sort and quick sort, respectively.

Main Menu

The main menu bar contains submenus for exiting, choosing the algorithm to visualize, and finding help.

The system supports both passive and active modes of animation. In passive mode, the animation runs on a predefined data set, and best-case and worst-case situations of an algorithm are shown at the user's choice. In active mode, the user can supply her own data set. Currently only integer data are supported. As the main purpose of the system is visualization, we limit the data size to 15 elements, which is adequate to convey sufficient information about the algorithms.
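As an illustration, parsing and validating an active-mode data set under these constraints (integers only, at most 15 elements) might look like the following sketch; the class and method names here are our own, not AlgoVis's actual code:

```java
import java.util.ArrayList;
import java.util.List;

/**
 * Illustrative sketch (not the actual AlgoVis code) of active-mode
 * input validation: only integers are accepted and the data size is
 * capped at 15 elements, as described above.
 */
public class InputValidator {
    public static final int MAX_SIZE = 15;

    /** Parses a whitespace-separated list of integers, enforcing the size cap. */
    public static List<Integer> parse(String text) {
        String trimmed = text.trim();
        if (trimmed.isEmpty()) {
            throw new IllegalArgumentException("Data set must not be empty");
        }
        String[] tokens = trimmed.split("\\s+");
        if (tokens.length > MAX_SIZE) {
            throw new IllegalArgumentException(
                "At most " + MAX_SIZE + " elements are supported");
        }
        List<Integer> data = new ArrayList<>();
        for (String token : tokens) {
            try {
                data.add(Integer.parseInt(token));
            } catch (NumberFormatException e) {
                throw new IllegalArgumentException(
                    "Only integer data are supported: " + token);
            }
        }
        return data;
    }
}
```

A data-entry field in the control panel could call `InputValidator.parse` and report the exception message back to the user.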

Figure 2. AlgoVis: Algorithm visualization system showing animation of quick sort
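The four-panel interface shown in the figures could be assembled in Swing roughly as follows; the panel classes and layout positions are illustrative assumptions, not the actual AlgoVis layout:

```java
import java.awt.BorderLayout;
import javax.swing.JFrame;
import javax.swing.JMenu;
import javax.swing.JMenuBar;
import javax.swing.JPanel;
import javax.swing.JScrollPane;
import javax.swing.JTextArea;
import javax.swing.SwingUtilities;

/**
 * Rough sketch of the four-panel layout described under "User
 * Interface". Class names and panel placement are illustrative.
 */
public class AlgoVisFrame extends JFrame {
    public AlgoVisFrame() {
        super("AlgoVis");
        // Main menu: exit, algorithm selection, help.
        JMenuBar menuBar = new JMenuBar();
        menuBar.add(new JMenu("File"));
        menuBar.add(new JMenu("Algorithms"));
        menuBar.add(new JMenu("Help"));
        setJMenuBar(menuBar);

        // 1. Visualization panel: the animation is drawn here.
        JPanel visualizationPanel = new JPanel();
        // 2. Code panel: pseudocode with the current line highlighted.
        JTextArea codeArea = new JTextArea(20, 30);
        codeArea.setEditable(false);
        // 3. Control panel: Execute / Pause / Next Step buttons.
        JPanel controlPanel = new JPanel();
        // 4. Explanation panel: plain-English text for the current line.
        JTextArea explanationArea = new JTextArea(4, 60);
        explanationArea.setEditable(false);

        add(visualizationPanel, BorderLayout.CENTER);
        add(new JScrollPane(codeArea), BorderLayout.EAST);
        add(controlPanel, BorderLayout.NORTH);
        add(new JScrollPane(explanationArea), BorderLayout.SOUTH);
        pack();
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> new AlgoVisFrame().setVisible(true));
    }
}
```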

Visualization panel:

This panel shows the change in the data after the execution of each step of an algorithm. The data structure used in the algorithm is displayed at the bottom. With each step, the elements under consideration are highlighted. If any additional variables are needed to explain the algorithm, those are displayed as well. For example, for the insertion sort algorithm, the temporary variable and the previous element must be shown at each step where the two are compared.
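A minimal sketch of such a panel, drawing the array as bars and highlighting the elements under consideration, is shown below; field and method names are assumptions for illustration:

```java
import java.awt.Color;
import java.awt.Graphics;
import java.util.Set;
import javax.swing.JPanel;

/**
 * Illustrative sketch of a visualization panel that draws the array as
 * vertical bars and highlights the elements currently being compared.
 * Names are assumptions, not the actual AlgoVis code.
 */
public class VisualizationPanel extends JPanel {
    private int[] data = new int[0];
    private Set<Integer> highlighted = Set.of();

    /** Called by the controller thread after each algorithm step. */
    public void update(int[] newData, Set<Integer> highlightedIndices) {
        this.data = newData.clone();
        this.highlighted = highlightedIndices;
        repaint(); // schedules a redraw on the event-dispatch thread
    }

    @Override
    protected void paintComponent(Graphics g) {
        super.paintComponent(g);
        if (data.length == 0) return;
        int barWidth = getWidth() / data.length;
        int max = 1;
        for (int v : data) max = Math.max(max, v);
        for (int i = 0; i < data.length; i++) {
            int barHeight = data[i] * (getHeight() - 20) / max;
            // Highlighted elements (e.g. the two values being compared) in red.
            g.setColor(highlighted.contains(i) ? Color.RED : Color.GRAY);
            g.fillRect(i * barWidth + 2, getHeight() - barHeight,
                       barWidth - 4, barHeight);
        }
    }
}
```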

Currently two modes of operation are supported: full execution mode and step-wise execution mode. In full execution mode, the whole algorithm runs at once. The user can start full execution by pressing the "Execute" button and stop it at any time using the "Pause" button. While the algorithm is running, the "Apply" and "Execute" buttons are disabled. In step-wise execution mode, the visualization stops after executing one line; by pressing the "Next Step" button the user can run the algorithm step by step. The system also allows the user to run part of the algorithm in step-wise mode and part in full execution mode, which helps learners debug the algorithm.

Explanation panel:

The explanation panel shows a brief description of the line of code currently executing. The explanations are predefined, that is, they do not change with the data set. For a new algorithm learner, the pseudocode provided with the tool might be difficult to understand; the explanation panel can be used in those cases to clear up any misunderstanding. Synchronization of the explanation panel with the visualization panel is maintained by the controller thread.
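Since the explanations are predefined and data-independent, they can be stored as a simple per-algorithm lookup table. The sketch below assumes one explanation string per pseudocode line; the wording and structure are illustrative, not AlgoVis's actual text:

```java
import java.util.List;
import java.util.Map;

/**
 * Sketch of predefined, data-independent explanations: one
 * plain-English string per pseudocode line. Illustrative only.
 */
public class Explanations {
    private static final Map<String, List<String>> BY_ALGORITHM = Map.of(
        "insertion sort", List.of(
            "Pick the next unsorted element and store it in a temporary variable.",
            "Compare the temporary variable with the previous element.",
            "Shift the larger element one position to the right.",
            "Insert the temporary variable into its correct position."));

    /** Returns the explanation for the given pseudocode line (0-based). */
    public static String forLine(String algorithm, int line) {
        List<String> lines = BY_ALGORITHM.get(algorithm);
        if (lines == null || line < 0 || line >= lines.size()) {
            return "";
        }
        return lines.get(line);
    }
}
```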

Code panel:

In the code panel, the pseudocode of the algorithm currently displayed is shown. Pseudocode is chosen instead of actual programming-language code mainly for its simplicity: it keeps the code simple enough to be understood by a new learner while still conveying the core ideas of an algorithm. The line that is currently executing is highlighted at each step. Synchronization of the code panel with the visualization panel is maintained using a controller thread. Each algorithm in the system is executed by its own controller thread. After executing each line of the algorithm, this thread does the following:

1. Highlights the line of code currently executing,
2. Makes the changes to the data structure and updates the visualization panel accordingly,
3. Changes the explanation panel to show the explanation of the currently executed step,
4. Enables/disables the control panel depending on the current algorithmic state.
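A sketch of such a controller thread is given below: after each pseudocode line it pushes the four updates to the Swing event-dispatch thread and then blocks if the user is stepping. The listener interface and method names are our assumptions, not the actual AlgoVis API:

```java
import javax.swing.SwingUtilities;

/**
 * Sketch of a per-algorithm controller thread (illustrative, not the
 * actual AlgoVis code). After each executed pseudocode line it updates
 * the four panels on the Swing event-dispatch thread, then blocks if
 * the user is in step-wise mode.
 */
public class ControllerThread extends Thread {

    /** Callbacks into the four UI panels, run on the event-dispatch thread. */
    public interface StepListener {
        void highlightLine(int line);             // code panel
        void showState(int[] data);               // visualization panel
        void showExplanation(int line);           // explanation panel
        void setControlsEnabled(boolean running); // control panel
    }

    private final StepListener listener;
    private final Object pauseLock = new Object();
    private boolean paused = true;        // start in step-wise mode
    private boolean stepRequested = false;

    public ControllerThread(StepListener listener) {
        this.listener = listener;
    }

    /** "Execute" button: run the algorithm freely. */
    public void resumeFullExecution() {
        synchronized (pauseLock) {
            paused = false;
            pauseLock.notifyAll();
        }
    }

    /** "Pause" button: return to step-wise mode. */
    public void pauseExecution() {
        synchronized (pauseLock) {
            paused = true;
        }
    }

    /** "Next Step" button: release exactly one step while paused. */
    public void nextStep() {
        synchronized (pauseLock) {
            stepRequested = true;
            pauseLock.notifyAll();
        }
    }

    /** Called by the running algorithm after each executed line. */
    public void onStep(int line, int[] data) throws InterruptedException {
        final boolean running;
        synchronized (pauseLock) {
            running = !paused;
        }
        SwingUtilities.invokeLater(() -> {
            listener.highlightLine(line);
            listener.showState(data.clone());
            listener.showExplanation(line);
            listener.setControlsEnabled(running);
        });
        synchronized (pauseLock) {
            while (paused && !stepRequested) {
                pauseLock.wait(); // wait for "Next Step" or "Execute"
            }
            stepRequested = false;
        }
    }
}
```

The `while` loop around `wait()` guards against spurious wakeups, and the `stepRequested` flag ensures a "Next Step" press is not lost if it arrives just before the controller blocks.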

Control panel:

Using the control panel, the user can control and interact with the animation in both passive and active modes.

EVALUATION

To truly assess the system's utility, an evaluation should be performed with students who have little computer science background (who do not know the algorithms shown in the system), verifying their understanding of the algorithms before and after viewing the visualization. Because of time limitations, we could not evaluate the system with such users. Instead we performed a "Hot Heuristic" evaluation with users who already knew the algorithms well, to ensure that the system follows general usability principles and presents sufficient information about an algorithm.

We performed a usability study of the system with 5 participants. Four of them were graduate students majoring in computer science, and one had completed her undergraduate degree in a non-CS major. All the users were given the tool and asked to use the system for at least 15 minutes. After that they were asked several questions about how well the visualization worked. Four of the users were familiar with the algorithms shown in the tool; one user did not know the algorithms. The questions asked of the participants were:

1. Did the visualization show the key information about the algorithm?
2. Do you think the explanation provided was understandable for a student with no or little algorithm background?
3. Do you think the explanation should be placed in the visualization panel?
4. Do you think the pseudocode provided was clear?
5. Was the navigation of the system easy?
6. Was there too much or too little information?
7. What other information would be useful?
8. Was anything redundant?
9. Rate the system from 1 to 5, with 1 being very bad and 5 being very good.

Participants were also asked some algorithm-specific questions:

1. Did you know the algorithms before?
2. Which algorithm is faster?
3. When does quick sort not work well?
4. When does insertion sort perform better than quick sort?

Results

Since most of the users were familiar with the algorithms, they did well on the algorithm-specific questions. The participant with a non-CS background answered one question correctly. All the users felt the visualization properly conveyed the main idea of the algorithm. All the participants understood the explanations provided and considered them helpful. Two participants commented that the explanation should appear in the visualization panel, as they found it hard to read the explanations and watch the visualization at the same time. In contrast, one user commented that too much information in the visualization panel might distract students from the key idea of the algorithm. The pseudocode provided was well understood by all the participants; 40% of the users commented that the same variable notation should be used in both the pseudocode and the visualization. No user considered any information redundant. All users liked the step-by-step algorithm execution, as it helps the user debug the algorithmic steps. One user suggested adding a "Back Step" button for reviewing previous steps. 80% of the users rated the system 3 or higher. Overall, the users provided positive feedback about the current system, along with many useful suggestions for improving usability.

CONCLUSION AND FUTURE WORK

In this project, we outlined a general algorithm visualization system that allows users to learn algorithmic concepts through visualization and interaction. The system provides different levels of information about an algorithm, allows the user to interact with the algorithm using her own data, and also allows debugging. To make a more general visualization system, several drawbacks should be addressed. Future work on this project would include, but is not limited to, the following:

1. Provide multiple levels of interaction: Learning of a concept improves with the level of interaction, so a generic AV system should support multiple levels of interaction; our system supports only a limited form. An AV system should be designed to meet the learning needs of both beginners and expert learners: beginners learn from passive viewing and minimal interaction, whereas experts learn by testing their own algorithm visualizations.

2. Allow visualization of different data structures: A generic algorithm visualization system should not be limited to a specific data structure. Our system currently supports a limited class of algorithms that use an array as the data structure; algorithms with complex data structures, such as graph algorithms, would be difficult to visualize with this system.

3. Demonstrate comparisons of algorithms: An AV system should allow visual comparison of algorithms. Comparing two or more algorithms would help in understanding the differences and similarities among algorithms of the same type, aiding learners in designing their own algorithms and in understanding the situations where a particular algorithm is more appropriate than the others.

REFERENCES

1. James E. Baker, Isabel F. Cruz, Giuseppe Liotta, and Roberto Tamassia. Algorithm animation over the World Wide Web. In AVI '96: Proceedings of the Workshop on Advanced Visual Interfaces, pages 203–212, 1996.
2. Guy E. Blelloch. NESL: A nested data-parallel language (version 2.6). Technical report, Pittsburgh, PA, USA, 1993.
3. Marc H. Brown and Robert Sedgewick. A system for algorithm animation. SIGGRAPH Comput. Graph., 18(3):177–186, 1984.
4. Michael D. Byrne, Richard Catrambone, and John T. Stasko. Evaluating animations as student aids in learning computer algorithms. Comput. Educ., 33(4):253–278, 1999.
5. Camil Demetrescu and Irene Finocchi. A general-purpose logic-based visualization framework. In Proceedings of the 7th International Conference in Central Europe on Computer Graphics, Visualization and Interactive Digital Media, pages 55–62, 1999.
6. Sarah Douglas, Christopher Hundhausen, and Donna McKeown. Exploring human visualization of computer algorithms. In GI '96: Proceedings of the Conference on Graphics Interface '96, pages 9–16, 1996.
7. Scott Grissom, Myles F. McNally, and Tom Naps. Algorithm visualization in CS education: comparing levels of student engagement. In SoftVis '03: Proceedings of the 2003 ACM Symposium on Software Visualization, pages 87–94, 2003.
8. Ashley George Hamilton-Taylor and Eileen Kraemer. SKA: supporting algorithm and data structure discussion. In SIGCSE '02: Proceedings of the 33rd SIGCSE Technical Symposium on Computer Science Education, pages 58–62, 2002.
9. Teresa Hübscher-Younger and N. Hari Narayanan. Constructive and collaborative learning of algorithms. In SIGCSE '03: Proceedings of the 34th SIGCSE Technical Symposium on Computer Science Education, pages 6–10, 2003.
10. Teresa Hübscher-Younger and N. Hari Narayanan. Dancing hamsters and marble statues: characterizing student visualizations of algorithms. In SoftVis '03: Proceedings of the 2003 ACM Symposium on Software Visualization, pages 95–104, 2003.
11. Christopher Hundhausen, S. A. Douglas, and John T. Stasko. A meta-study of algorithm visualization effectiveness. Journal of Visual Languages and Computing, 13:259–290, 2002.
12. Christopher D. Hundhausen and Jonathan Lee Brown. What you see is what you code: A radically dynamic algorithm visualization development model for novice learners. In VL/HCC '05: Proceedings of the 2005 IEEE Symposium on Visual Languages and Human-Centric Computing, pages 163–170, 2005.
13. Thomas L. Naps, James R. Eagan, and Laura L. Norton. JHAVÉ: an environment to actively engage students in web-based algorithm visualizations. In SIGCSE '00: Proceedings of the Thirty-First SIGCSE Technical Symposium on Computer Science Education, pages 109–113, 2000.
14. Thomas L. Naps, Guido Rößling, Vicki Almstrum, Wanda Dann, Rudolf Fleischer, Chris Hundhausen, Ari Korhonen, Lauri Malmi, Myles McNally, Susan Rodger, and J. Ángel Velázquez-Iturbide. Exploring the role of visualization and engagement in computer science education. In ITiCSE-WGR '02: Working Group Reports from ITiCSE on Innovation and Technology in Computer Science Education, pages 131–152, 2002.
15. Willard C. Pierson and Susan H. Rodger. Web-based animation of data structures using JAWAA. SIGCSE Bull., 30(1):267–271, 1998.
16. Guido Rößling and Bernd Freisleben. Animal: A system for supporting multiple roles in algorithm animation. Journal of Visual Languages and Computing, 13:341–354, 2002.
17. Andrew Sears and Rosalee Wolfe. Visual analysis: adding breadth to a computer graphics course. In SIGCSE '95: Proceedings of the Twenty-Sixth SIGCSE Technical Symposium on Computer Science Education, pages 195–198, 1995.
18. John Stasko, Albert Badre, and Clayton Lewis. Do algorithm animations assist learning? An empirical study and analysis. In INTERCHI '93: Proceedings of the INTERCHI '93 Conference on Human Factors in Computing Systems, pages 61–66, 1993.
19. John T. Stasko. Tango: A framework and system for algorithm animation. SIGCHI Bull., 21(3):59–60, 1990.
20. Ayellet Tal and David Dobkin. Visualization of geometric algorithms. IEEE Transactions on Visualization and Computer Graphics, 1(2):194–204, 1995.
