Controlling Heterogeneous Semi-autonomous Rescue Robot Teams

M. Waleed Kadous, Raymond Ka-Man Sheh and Claude Sammut

Abstract— Robot-assisted Urban Search and Rescue (USAR) operations benefit from having multiple robots search an area, especially if doing so does not require additional operators. However, designing a user interface that facilitates a single operator controlling many robots is challenging. In particular, the problems of situation awareness and cognitive load are amplified. This is especially the case when the robots concerned have a large number of degrees of freedom. We present a preliminary design and implementation of a user interface for a team of heterogeneous, potentially autonomous USAR robots with many degrees of freedom for both sequential and parallel operation. It extends our earlier design for a successfully deployed single-robot interface. Our design is inspired by Real-Time Strategy computer games, which must address many similar issues. The design seeks to maximise situational awareness and reduce cognitive load while allowing the operator to monitor and, if necessary, control all of the robots. Our user interface was deployed during the 2006 RoboCup Rescue Robot League where it played an important role in achieving the highest single-run scores in the preliminary rounds of the competition.

I. INTRODUCTION

Current work in the use of robots in urban search-and-rescue tasks has focused on one or more operators for each robot. Clearly there are significant advantages if one operator can effectively manage a team of robots, particularly when some of the robots are partially or fully autonomous, thus enabling parallel operation. These advantages include reduced risk of catastrophic failure if a single robot becomes disabled; greater or more detailed search coverage; and the potential for specialised robots to increase the efficacy of the search. However, designing an interface for multiple high-DOF (degree of freedom) semi-autonomous robots presents significant challenges compared to doing so for a single, relatively simple robot. The usual issues of situational awareness and cognitive load are even more important as the operator must now keep track of several robots and switch contexts between them. This is especially the case when some of the robots may be under autonomous control – the operator must continue to be mindful of their progress even when not directly controlling them. Finally, techniques for supporting the operator in integrating maps and information from diverse robots are not well developed. Failure to address these issues can result in poorer performance compared to single-robot solutions due to factors such as operator confusion and excessive context switching.

This work was supported by the Australian Research Council Centre of Excellence for Autonomous Systems.

M. W. Kadous, R. K. Sheh and C. Sammut are with the School of Computer Science and Engineering, The University of New South Wales, Sydney, Australia. [waleed|rsheh|claude]@cse.unsw.edu.au

Our earlier work [1] focused on a single-user, single-robot interface whose design was partially based on first-person shooter games. Human trials showed that there was some positive transfer between experience in computer games and success in operating the user interface. We extend this work and bring in experiences from another genre of computer games, that of real-time strategy games. Such games are similar to our current task – one operator must control a team of potentially semi-autonomous agents which may be heterogeneous in nature. The environment is often not completely known and must be explored. There is ample opportunity for agents to become stuck or otherwise fail to accomplish the intended task, and the operator must be kept informed of their progress. Finally, the user interfaces involved in such games must be intuitive, efficient and easy to learn if they are to succeed in the computer games market. Of course, there are major differences as well. The game environment is under the direct control of the programmer, so issues such as localisation and mapping uncertainty, sensing errors and unreliable communication are minimised.

II. BACKGROUND

A. Single robot interface

In our previous work, we combined important design principles from human-computer interaction and existing human-robot interaction research [2] and chose a number of issues that we considered to be most important – Situational Awareness, Efficiency, Familiarity and Responsiveness. We also drew inspiration from first person shooter games such as "Quake", "Half-Life" and "Unreal Tournament", and metaphors from domains such as cars, planes and mobile phones. From these, we developed a number of guidelines – keep the interface simple, keep the interface familiar and do not require the operator to ask for information if it can be presented readily. For a more detailed description of these guidelines and issues, please refer to our prior work [1]. Our new interface includes components of our old interface for the purpose of teleoperating robots. However, significant extensions are required to build an interface suitable for managing several high-DOF robots, maintaining the operator's situational awareness and minimising confusion.

B. Other work on human-robot interfaces

There has been considerable work performed in developing and studying user interfaces for single robots. Perhaps not surprisingly, many of the more successful ones appear similar to computer games.

A more complete survey and discussion of the various alternative single-robot user interfaces appears in our previous work [1]. Considerably less work has been performed on multi-robot semi-autonomous interfaces, especially where the robots under control have a large number of degrees of freedom, such as mobility flippers or robot arms. There has been some work in using computer games as inspiration for related fields. For instance, the authors of [3] used an interface inspired by Starcraft for managing a power network, where the focus was on multiple agents of known identity, in a known environment but with unknown state. The work in [4] was also inspired by some of the concepts of Starcraft, for the purpose of co-ordinating a team of robotic ground and air vehicles, but was more oriented towards interaction with objects without a global map; it did not explore navigation or semi-autonomous behaviour.

Fig. 1. An example of a RoboCup Rescue Robot League standard arena, this one used during the 2005 competition in Osaka.


C. Computer games

Popular real-time strategy games such as "Starcraft" and "Dungeon Keeper" involve a player commanding a number of units with a 2D or 3D view of the environment. Vision is limited to explored areas only and, until recently, autonomy was limited to point navigation within explored areas. "Dungeon Keeper" in particular presents an interesting model of interaction: in addition to the typical top-down view used for strategy games, players can "possess" any agent and "teleoperate" it before returning to the top-down view to interact with agents at a higher level. In particular, these games share a number of consistent characteristics:
• A global map of the entire area is shown, and the user can focus their attention on a sub-part of the map.
• Users highlight selected agents with the mouse.
• Once highlighted, agents can be given high-level commands, including following another agent, traversing a path specified by the user, or holding still (a sketch of this command model follows the list).
• The agents continue executing these tasks until they are completed, they fail, or the user modifies their instructions.
• The visual display of all agents is updated in real time and drawn over the currently drawn map.
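To make this interaction model concrete, the following is a minimal Python sketch of an RTS-style command loop in which an agent keeps executing its assigned high-level task until it completes, fails or is superseded by a new operator command. The class and method names are illustrative assumptions, not part of any game's or our system's actual code.

from enum import Enum, auto

class Status(Enum):
    RUNNING = auto()
    COMPLETED = auto()
    FAILED = auto()

class GoToWaypoint:
    """High-level 'traverse to a point' command, issued by the operator."""
    def __init__(self, goal, tolerance=0.3):
        self.goal = goal              # (x, y) in map coordinates
        self.tolerance = tolerance    # metres

    def step(self, agent):
        dx = self.goal[0] - agent.pose[0]
        dy = self.goal[1] - agent.pose[1]
        if (dx * dx + dy * dy) ** 0.5 < self.tolerance:
            return Status.COMPLETED
        agent.drive_towards(self.goal)  # delegate to the platform's motion layer
        return Status.RUNNING

class Agent:
    def __init__(self, name, pose=(0.0, 0.0)):
        self.name = name
        self.pose = pose
        self.task = None

    def drive_towards(self, goal):
        pass  # placeholder for platform-specific motion control

    def assign(self, task):
        """A new operator command supersedes whatever the agent was doing."""
        self.task = task

    def update(self):
        """Called every control cycle; the task persists until it ends or is replaced."""
        if self.task is None:
            return
        status = self.task.step(self)
        if status is not Status.RUNNING:
            # Report the outcome so the operator stays aware of idle or stuck agents.
            print(f"{self.name}: {type(self.task).__name__} {status.name}")
            self.task = None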

There are several advantages to borrowing design elements and principles from such video games [5]. Video games must be learned quickly by their users, so for the most part they are designed to be intuitive and efficient. Borrowing these principles also improves the familiarity of our interface if the operator has already played such games. There are a number of other techniques we use that are derived from video games for the purpose of maintaining situational awareness and reducing confusion. These include placing the controlled character or vehicle in the middle of the field of view through a behind-the-head view, changing the appearance of the user interface to reflect different agents, and rendering relevant features, such as a vehicle's wings, windscreen or dashboard, in the player's field of view.

III. TASK DESCRIPTION

A. RoboCup Rescue Robot League

The RoboCup Rescue Robot League (RRL) aims to provide a standardised and relatively objective measure of performance for research related to USAR. Doing so means that new technologies can be practically evaluated very early in the development cycle. RRL Standard Arenas are intended to be different, but comparable, real-world environments in which robots may be tested, much as various golf courses are different but comparable environments [6]. An example of an RRL Standard Arena is shown in Figure 1. The arenas are practical approximations to "real world" disaster sites; thus debris, loose material, low clearances, victims with varying states of consciousness, multiple levels, variable lighting and radio blackspots are replicated. Competitors receive points for identifying victims, detecting victim state (such as movement, temperature and sounds) and placing them in a map.

B. Our Robots

Our team of robots consists of three distinct classes. The first is a single, sensor-carrying robot, CASTER Scorpion. It is based on our previous robot, CASTER [7], which came third in the 2005 RoboCup Rescue Robot League competition, and is shown in Figure 2. It is now equipped with dual 2D laser scanners and a 3-DOF arm carrying a CSEM SwissRanger SR-2 range imager, thermal camera, ultra-wide angle camera and narrow angle (tele) camera. These are mounted at the end of the robot arm, with the last two joints of the arm functioning as a pan-tilt unit. It moves and steers by skid-steering, and the arm is powerful enough to be lowered and used as a prop to support the robot when overcoming stairs and unstructured terrain. It is particularly challenging to operate because it is quite large and sensing shadows therefore become an issue. The arm's three degrees of freedom also present a significant operator control challenge, and the large number of sensors aboard the robot must be presented to the operator in an intuitive fashion.

The second class of robots is a number of smaller platforms called Redbacks [8], shown in Figure 2.

These robots are based on MGA Tarantula remote control toys and have been equipped with a 3D laser scanner and two cameras, one omnidirectional and one forward-facing. Although these robots have fewer sensing capabilities than CASTER Scorpion, they feature very good advanced mobility, with four moveable flippers that allow them to climb stairs and traverse very difficult terrain. Being considerably lighter and cheaper, up to two of them could operate in the arena at the same time.

The final robot, HOMER¹, is a basic floor robot equipped with a 2D laser rangefinder and optical and thermal cameras on a pan-tilt unit. This platform is designed to operate in both autonomous and teleoperated modes. In autonomous mode, it builds maps, explores the environment using frontier exploration and attempts to find victims using its thermal camera. Although not capable of advanced mobility, the HOMER platform is designed to assist in the investigation of autonomous behaviours that may be applied to the other two robots.

¹HOMER was developed by our team partners at the Mechatronics and Intelligent Systems Group at the University of Technology Sydney.

Fig. 2. CASTER Scorpion, left, our 2006 entry into the RoboCup Rescue Robot League competition. Redback, right, one of several identical robots that, along with CASTER Scorpion, form our robot team.

C. Mapping

All robots in the team are equipped with at least 2D laser range scanners, which may be used on flat areas of the arena to autonomously produce maps [9]. The robots also have capabilities for producing 3D maps: in the case of CASTER Scorpion via a CSEM SwissRanger SR-2 range camera, and in the case of the Redback robots via a rolling mount for the 2D laser. When on flat ground, the map building process is largely autonomous; since the 2D lasers run continuously in this mode, the robots' movements between scans are small, and thus laser-based odometry is likely to be effective. Maps from different robots may also be registered with one another, although the higher placement of CASTER Scorpion's scanner relative to the Redbacks means there may be discrepancies.

In 3D mode, scans of the environment may be made on request. In the case of CASTER, the operator can either request a 360° scan, which rotates the sensor head through a full circle, or request "snaps" of victims and landmarks, which incorporate 3D data from the SR-2 imager into the map. Likewise, the laser range scanner on the Redback robots may be commanded to perform 3D scans, and these can also be incorporated into the map.

D. Autonomy

Currently only HOMER runs autonomously, although additional autonomous and semi-autonomous behaviours for the other two robots, for advanced mobility as well as autonomous exploration, are being developed. These include semi-autonomous point navigation and semi-autonomous stair traversal.
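As an illustration of the frontier exploration behaviour HOMER uses, the following is a minimal sketch of frontier detection on an occupancy grid: free cells that border unknown space are candidate exploration goals, and the nearest one is chosen next. The grid encoding (-1 unknown, 0 free, 1 occupied) and the function names are assumptions for illustration, not the robot's actual implementation.

import numpy as np

UNKNOWN, FREE, OCCUPIED = -1, 0, 1

def find_frontier_cells(grid):
    """Return (row, col) indices of free cells that border unknown space."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # 3x3 neighbourhood, clipped at the grid edges
            neighbourhood = grid[max(r - 1, 0):r + 2, max(c - 1, 0):c + 2]
            if (neighbourhood == UNKNOWN).any():
                frontiers.append((r, c))
    return frontiers

def next_exploration_goal(grid, robot_cell):
    """Pick the frontier cell closest to the robot as the next goal."""
    frontiers = find_frontier_cells(grid)
    if not frontiers:
        return None  # nothing left to explore
    rr, rc = robot_cell
    return min(frontiers, key=lambda f: (f[0] - rr) ** 2 + (f[1] - rc) ** 2)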

IV. DESIGN OF THE HUMAN-ROBOT INTERFACE

In designing our user interface, we first developed a set of guidelines that would assist us in producing an interface addressing the issues highlighted in [2] and Section II. These are similar to the ones in our previous work [1] and are described below, but in the context of multiple, potentially semi-autonomous, high-DOF robots. We then present the design of our user interface itself and describe its individual components.

A. Design Guidelines

1) User shouldn't have to ask for information: The operator should be provided with all the essential information without needing to ask for it by flipping modes, changing pages or the like. However, this should be done with care – presenting a lot of information at the same time can easily lead to information overload. Instead, this information should be fused into a representation that does not require a significant amount of attention to interpret. For instance, simply tiling all the video streams and providing a large panel of gauges would not suffice, nor would requiring the operator to switch between video displays in a way that may allow some to become "forgotten"².

²During the 2006 Rescue Robot League competition, many teams had multiple cameras and presented them to the operator in tiled form as opposed to a fused display. There were cases where very obvious details, such as heat blooms in a thermal camera, were missed by operators because they were looking at another part of the tiled display.

2) Keep the interface simple: Keeping the interface simple is important in reducing cognitive load. This doesn't necessarily mean having as few elements as possible on the screen – too little processed information may require the operator to mentally "fill in the gaps" and make operation even harder. Instead, principles such as keeping text on screen to a minimum, using informative icons, colours or symbols as appropriate, and minimising mode switching help to simplify operation of the user interface.

3) Keep the interface familiar: By re-using metaphors that the operator will already be familiar with, the learning time for the interface can be reduced and operator confusion and cognitive load minimised. In the context of a heterogeneous multi-robot team, this also extends to operations across robots – tasks that are similar between robots should operate in similar ways across all modes, if there are multiple modes. For instance, the process the operator must go through to drive forward or backward, climb or move the camera should be the same across all robots that have these capabilities and, where possible, across all modes.

Video displays should be presented in a similar way for all robots, and in a way that makes sense in terms of the sensors' actual placement on the robot. For instance, if the robot has three cameras facing in the same direction, such as a wide, tele and thermal camera, they should be rendered in close proximity, using picture-in-picture displays or overlays. In contrast, tasks that are distinctly different because of robot geometries or capabilities should have distinctly different controls.

Fig. 4. Closeup of the arm model from Figure 3. The left figure shows the arm (dark blue) pointing (yellow line) forward and slightly down. The right figure shows the model after a command to point to the left – the blue model shows the commanded position and the red model, which is animated in realtime based on information from the robot, shows the actual position. The red model only appears when the commanded and actual positions differ.

B. Preliminary Implementation

Our preliminary user interface design is shown in Figure 3 and was used during the 2006 RoboCup Rescue Robot League competition. The interface has two main components. On the right-hand side is the driving interface. This interface is similar to our previous interface and resembles first-person shooter games, but has some additional components. On the left is the map display. This map is generated in realtime by our 2D/3D mapping algorithms and also shows the locations of the robots. This display is similar to map displays in various real-time strategy games.

Fig. 3. Left: Screenshot of our main GUI. The map display is on the left and is updated in realtime with the locations of the robots and with incoming mapping data. The driving display is on the right. Right: The driving display when the narrow (tele) camera is selected.

The widespread availability of wide-screen (16:9) displays allows us to combine both the teleoperation interface for direct control and the map-oriented interface for multi-robot control and situational awareness on the one screen, without needing separate modes or hiding information. There is one mode switch – the operator can be in either mapping or driving mode, and this switch is accomplished by clicking in "dead" areas on either side of the screen. Although unique keystrokes could have been used so that no mode switch would be required, it was found that the overhead of remembering potentially similar keystrokes (one can "drive" a set of map data around the map in the same way as one can drive the robot itself around) was greater than the overhead of remembering which side of the screen was active. To assist in this, the active side of the screen is highlighted with a much lighter background.

1) Driving Interface: The driving interface has several characteristics that seek to address the guidelines in Section IV-A. As Figure 3 shows, there is very little text – information is presented as symbols such as the artificial horizon, signal strength meter, selected speed indicator, robot model and set-point menu. Whilst a lot of information about the robot is encoded in these icons, they are simple and easy to interpret.

The two video displays take up most of the driving display area. The omnidirectional camera is presented as a de-warped image on the lower part of the display, corresponding with the fact that the camera is physically low on the robot. Above it is the fused display from the sensor head aboard the robot arm. The main view is the wide-angle camera, which is presented at full frame. The thermal camera is presented as an overlaid image and is invisible for areas of the view that are below a preset temperature. Areas hotter than this appear in varying shades of semi-transparent red (warm) to yellow (very hot), allowing objects behind them to also be visible.
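As a concrete illustration of this overlay, the following is a minimal sketch of how such thermal blending could be implemented, assuming the thermal image has already been registered to the wide-angle frame and converted to degrees Celsius. The threshold, colour ramp and alpha value are illustrative assumptions rather than the values used in our system.

import numpy as np

def overlay_thermal(rgb, thermal_c, t_min=30.0, t_max=45.0, alpha=0.5):
    """Blend warm regions over the camera image: red at t_min fading to yellow at t_max."""
    out = rgb.astype(np.float32).copy()
    # Normalised "heat" in [0, 1]; pixels below t_min are left fully transparent.
    heat = np.clip((thermal_c - t_min) / (t_max - t_min), 0.0, 1.0)
    mask = thermal_c >= t_min
    overlay = np.zeros_like(out)
    overlay[..., 0] = 255.0          # red channel at full strength
    overlay[..., 1] = 255.0 * heat   # green rises with temperature: red fades to yellow
    out[mask] = (1 - alpha) * out[mask] + alpha * overlay[mask]
    return out.astype(np.uint8)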

This overlay ensures that when the environment is scanned for signs of life, the operator does not miss heat sources. The narrow angle (tele) camera is presented as a selectable picture-in-picture, as shown in the right half of Figure 3. The 'Z' key toggles this display on and off. To help the operator target this camera, when it is hidden a blue outlined box appears in the view, corresponding to the area of the view that will appear magnified.

Of particular interest is the robot model just outside the video screen. It is a transparent rendering of the robot currently being controlled including, for the robots that are otherwise identical, its identifying colour. This figure also displays the current configuration of the robot – in the left side of Figure 4, for instance, it is easily apparent that the robot arm is up and that the camera is pointed slightly downward. This becomes especially important when the robot itself is not visible in the camera view. The right half of Figure 4, in contrast, shows the robot model when the arm has been commanded to point to the left but is still facing forward, most likely because it has only just been given the command to move. This serves to keep the operator informed, both of the fact that their command to move the arm has been received, and of the current position of the arm. This use of multiple indicators is similar to that in [10]. Despite carrying a lot of information, this figure takes up little screen area and is easier to interpret than a panel of numbers or gauges. This rendering could easily be moved to the center of the screen; however, in our tests for the competition it was found that the benefit of positioning the rendering in the center of the operator's vision was not worth the partial occlusion of objects in the center of the view, despite the rendering being semi-transparent.

Like our previous interface, the robots are controlled via keys on the keyboard – in the standard "inverted T" W,A,S,D configuration common to many first-person-shooter computer games. For CASTER, which is equipped with a moveable camera on the end of a robot arm, the mouse controls the pan and tilt of the camera by dragging on the user interface, whilst the ‘'’ and ‘/’ keys move the base of the arm forward and backward. It is also possible to move the camera by clicking on either the main camera view or the omnidirectional view; in both cases the camera is repositioned to point to the location that the user clicked on. These three intuitive methods for moving the camera do not require mode switches.


The omnidirectional cameras are presented as "unwrapped" images in order to take advantage of their wide fields of view, although, depending on operator preference, they could also be presented as a virtual pan-tilt unit. The lack of interface buttons is also deliberate and minimises the need for the excessively fine mouse control that would be required for clicking on buttons, dragging sliders or the like. This is especially the case when the operator can touch-type.

The right section of the driving display shows 3D renderings of the arm at various set points that can be selected with the 12 keys along the top of the keyboard ('1' to '='). These positions can be operator-defined, much like presets in strategy games or car radios. For instance, during the 2006 RRL competition a new element was introduced that placed victims at different heights in small holes. To create set points that matched these heights, the operator simply moved the arm to an appropriate position and pressed Ctrl-9 to save the arm position to set point 9. Whenever the operator again needed the arm in that position, pressing '9' would immediately move the arm to the appropriate position.

Victims and landmarks are indicated by centering the desired object in the robot's view and pressing a key to denote its identity – a small text box which minimally occludes the driving window appears, in which the operator may type a description. The object is marked on the 3D map automatically, with its distance obtained from the 3D SwissRanger range camera. On the lighter-weight Redback robots, this distance is fixed at 1 m, the practical distance at which victim detections are possible, and can be adjusted later by the user. When the user marks a victim or landmark, images from all cameras and sensors are captured and the results placed on a list (shown in the GUI in the "Snaps" tab) for future reference.

2) Map Interface: The critical interface for maintaining multi-robot situational awareness is the mapping interface shown on the left of Figure 3. The collaboratively generated map is updated in realtime as information comes in from each robot. By keeping this map in the operator's view, issues with map scan registration can be picked up at a glance, rather than hidden away in a panel that can be forgotten. If the system detects an error in the automated scan registration process, such as a sudden drop in the ratio of matched to unmatched map points, the relevant scan can be highlighted for the operator's attention. However, the system is robust to mismatches, so the operator may correct the error at their leisure, so long as the mismatch does not affect running autonomous actions. Scans, victims and landmarks may be selected and edited in the map interface to correct errors resulting from automatic scan alignment and placement. As there are potentially several elements at any one point in the map, the type of object – scan, victim, landmark or robot – must be chosen first. Moving the map can be accomplished using either the mouse or the W,A,S,D keys. Finally, to minimise clutter, groups of elements can be hidden from the map view. For instance, all landmarks, victims or robots may be hidden. As these elements are distinctive, their absence is obvious, minimising the risk of the operator forgetting about them.

V. EVALUATION

The user interface described was deployed during the 2006 RoboCup Rescue Robot League competition. Subjective analysis of the interface by the robot operator and analysis of over-the-shoulder views suggest that the user interface helped the operator maintain a mental model of the robots' environments to a reasonable degree of accuracy.

In other words, it was rare for the operator's belief about the state of the robots to differ from the robots' actual location and state. This was particularly apparent when the arm was being used for advanced mobility, and when there were multiple robots that had been operated sequentially and had spread out across the arena. In the former case, the fact that all cameras were presented and that the arm's position could be observed iconically, without interpreting numbers or gauges, was useful. In the latter case, the availability of the combined map view at a glance was particularly useful.

We plan to do further experiments based on the same technique employed for our earlier work [1]. In that work, 12 novice operators were given a short training session and then asked to complete a task; empirical task completion and subjective experience measures indicated that our interface was easy to use. In this case, we plan to conduct a similar evaluation. However, there are two additional questions that result from the presence of multiple semi-autonomous robots which are worthy of exploration.
• What is the quality of the operator's awareness of robots that are not currently under direct control?
• How are the operator's situational awareness and ability to perform a task affected by the need to tend to another robot that requires operator intervention?

It is difficult to design experiments for measuring these values directly. For the first question, we are considering modifying and instrumenting the user interface during experiments. At random points in the experiment, a particular robot would disappear from the map. The operator would then be asked to estimate the state of the robot that had just disappeared. Answering the second question is also difficult. The approach we plan to take is to have two scenarios that are almost identical as part of the experiment. In this "Wizard of Oz" style experiment, the first scenario would be allowed to proceed uninterrupted. In the second scenario, the user would be interrupted at a critical time in executing the task by the need to assist one of the other robots. The difference in time required to complete the tasks would offer guidance as to the impact of distractions.

We have also conducted some preliminary experiments with gaze tracking to evaluate the operator's focus. Early results indicate that operators periodically check telemetry, but spend most of their time looking at the area immediately in front of the robot. We plan to extend this work further.

VI. CONCLUSIONS AND FUTURE WORK

There are some very serious HRI issues involved in designing user interfaces that allow a single operator to control a team of semi-autonomous robots. However, computer games have considered such issues in the past and may offer inspiration. We have designed an interface that draws on real-time strategy games and first-person shooters in order to combat two significant challenges in such a scenario: situational awareness of semi-autonomous robot teams, especially when context switching is needed, and the need for robots to interrupt the operator for assistance.

We are also considering an extension to this user interface that integrates the driving and mapping displays, similar to [11]. The map display is rendered in true 3D fashion and the robot's camera view is rendered as a virtual "projector screen" in front of the robot. This improves the operator's registration between what the robot sees and the location of the robot in the map. Doing so also allows the operator to observe prior 3D scans, thus enabling them to drive the robot in a "true" third-person perspective, as if they were standing behind the robot. Although such an interface would have obvious benefits, several issues, from both a technical and a user interface perspective, must be addressed for it to succeed in a multi-robot, advanced mobility environment.

VII. ACKNOWLEDGEMENTS

This work was funded by the Australian Research Council's Centre of Excellence for Autonomous Systems. We would also like to thank our fellow members of RoboCup Rescue Robot League Team CASualty for their support. Finally, we would like to thank Jean Scholtz of NIST for her helpful advice and input during the development of our user interfaces.

REFERENCES

[1] M. W. Kadous, R. Sheh, and C. Sammut, "Effective User Interface Design for Rescue Robotics," in Proceedings of the 2006 ACM Conference on Human-Robot Interaction, March 2006.
[2] J. Scholtz, "Human-Robot Interaction," presented at the RoboCup Rescue Camp, Rome, October-November 2004. [Online]. Available: http://www.dis.uniroma1.it/ multirob/camp04/pres/scholtz.ppt
[3] A. Weisscher, "Applying computer game techniques to process visualization," Information Design Journal, vol. 10, no. 1, 2001.
[4] H. Jones and M. Snyder, "Supervisory control of multiple robots based on a real-time strategy game interaction paradigm," in IEEE International Conference on Systems, Man, and Cybernetics, 2001, pp. 383–388.
[5] J. Richer and J. L. Drury, "A Video Game-Based Framework for Analyzing Human-Robot Interaction: Characterizing Interface Design in Real-Time Interactive Multimedia Applications," in Proceedings of the 2006 ACM Conference on Human-Robot Interaction, 2006.
[6] A. Jacoff, E. Messina, and J. Evans, "A standard test course for urban search and rescue robots," in Proceedings of the Performance Metrics for Intelligent Systems Workshop, August 2004.
[7] M. W. Kadous, R. Sheh, and C. Sammut, "CASTER: A Robot for Urban Search and Rescue," in Proceedings of the 2005 Australasian Conference on Robotics and Automation, 2005.
[8] R. Sheh, "Redback: A Low-Cost Advanced Mobility Robot," School of Computer Science and Engineering, The University of New South Wales, Tech. Rep. UNSW-CSE-TR-0523, 2005.
[9] S. Thrun, "Robotic Mapping: A Survey," School of Computer Science, Carnegie Mellon University, Tech. Rep. CMU-CS-02-111, 2002.
[10] M. Quigley, M. A. Goodrich, and R. W. Beard, "Semi-Autonomous Human-UAV Interfaces for Fixed-Wing Mini-UAVs," in Proceedings of the 2004 IEEE International Conference on Intelligent Robots and Systems, 2004.
[11] C. W. Nielsen and M. A. Goodrich, "Comparing the Usefulness of Video and Map Information in Navigation Tasks," in Proceedings of the 2006 ACM Conference on Human-Robot Interaction, March 2006, pp. 95–101.
