A Constraint-Based Behavior Fusion Mechanism on Mobile Manipulator
Shu Huang, Erwin Aertbeliën, Hendrik Van Brussel
PMA, Department of Mechanical Engineering, Katholieke Universiteit Leuven
Celestijnenlaan 300b, B-3001 Heverlee, Belgium
Email: [email protected]

Abstract— A constraint-based behavior fusion mechanism is proposed. The behavior-based concept is well suited to robots in dynamically changing environments. However, this concept suffers from two major restrictions: re-usability and deliberation. As a result, complex tasks remain challenging for behavior-based systems. In the control of sensor-based robots, the constraint-based task specification method provides a geometrically flexible representation of a task. In this paper, these constraint equations are used as a common interface for behavior fusion. Thanks to this fusion ability, the intelligent behaviors can concentrate on achieving their own goals without worrying about coordination with other behaviors. Thus, the complexity of the deliberate network can be reduced and the network becomes easier to learn. The method is demonstrated with a door-opening example on a mobile manipulator.

I. INTRODUCTION

A large group of new applications, such as household services or assistance to elderly people, requires autonomous robots to function in cluttered environments with human interaction. A mobile manipulator, i.e. a manipulator on a mobile platform (e.g., a wheelchair with an arm), is one of the most complex and difficult examples, since both manipulability of the arm and navigation of the platform are required during task execution. It offers more flexibility through its extra degrees of freedom (DOF), but requires more elaborate control theory to interact physically with humans in a 3-D environment. Behavior-based concepts have proved suitable for mobile robots operating in open and dynamically changing environments. By defining several elementary behaviors, each expressing a certain relationship between sensor inputs and actuator outputs, together with proper fusion mechanisms that specify how behaviors work together, the system can perform certain tasks and react to its environment. Hence, the behavior-based methodology can be extended to mobile manipulators with more comprehensive fusion mechanisms. Different behaviors are fused together, and each behavior is responsible for a specific functional activity which can be achieved by one or more control algorithms; the actual output signals are then generated and sent to the hardware. Our approach builds on force-controlled robotics in the task frame formalism [1] [2] [3]. This paper is organized as follows: a behavior-based mobile manipulator is introduced in Section II. The behavior fusion and the constraint-based behavior fusion mechanism are explained

in Section III. The software architecture of the constraint-based fusion mechanism is described in Section IV. A door-opening example is implemented in Section V. Finally, future work is discussed in Section VI, followed by the conclusion in Section VII.

II. A BEHAVIOR-BASED MOBILE MANIPULATOR

The behavior-based methodology was proposed by R. Brooks [4], who separated different perception-and-action units based on their functional characteristics. A behavior is a modular decomposition of intelligence which consists of a certain relationship between sensor inputs and actuator outputs. Unlike traditional centralized or sequential control systems, a behavior-based system can separate a complex task into several elementary components with specific meanings. By properly combining these individual behaviors in different ways, a behavior-based system can perform various complex tasks. Behavior-based systems are widely used in the field of mobile robots. Due to its distributed nature and simplicity, a behavior-based mobile robot can perform dynamic navigation or maze-routing tasks. A mobile manipulator is a mobile platform with a robotic arm on top; both the platform and the arm have sensors to perceive the environment. Recently, some researchers have applied behavior-based concepts to more complex robotic domains, such as mobile manipulators. Z. Wasik applied fuzzy networks and visual servoing techniques to a behavior-based mobile manipulator [5]. This is a successful integration of vision abilities and the behavior-based concept on a mobile manipulator; however, the mobile part and the manipulating part are controlled separately during task execution. R. Waarsing proposed an agent-based behavioral system for a mobile manipulator [6]. He semi-combined a mobile platform and a manipulator arm and successfully expressed each behavior in mathematical equations.
But pre-defined sequential behavior execution and limited behavior cooperation mechanisms restrict its applications in other domains. D. MacKenzie proposed another behavior-based mobile manipulator for drum-sampling tasks [7], aiming at integrating arm and base as a cohesive unit to perform the task. In fact, it is not easy to combine manipulation tasks with a behavior-based architecture. A manipulation task requires a hard real-time environment and precise control of each joint, while a distributed behavior-based system prefers independent execution. It is a big challenge to properly combine overall precise motion control with decentralized behavioral units. Therefore, some modifications from an architectural perspective are needed so that behavior-based systems can scale to more complex applications.

III. BEHAVIOR FUSION

Behavior units are usually defined to interact directly with the environment: they perceive data from sensors and react to actuators or other behaviors. This low-level behavior-based deployment limits the possibilities for dealing with competing output signals. In order to apply a behavior-based architecture to a more complex robot, such as a mobile manipulator, the outputs of the behaviors should be re-evaluated by a behavior coordination mechanism before being sent to the hardware. We believe a better behavior cooperation mechanism is the key to developing a more modular behavior-based system for complex applications. The behavior coordination mechanisms from prior art can be divided into two groups: arbitration methods and command fusion methods [8]. Since a mobile manipulator requires more complex kinematic control over its joints than a mobile robot, command fusion methods are preferable to arbitration methods. In order to properly fuse the outputs of behavior units, a unified interface is needed. We borrowed the concept of constraints from the field of robot motion control [9] and defined an intelligent behavior that has more knowledge to perform motion control. This intelligent behavior is in charge of a particular aspect of the mobile manipulation task. Based on its given goal, the intelligent behavior sends a unified constraint specification to the fusion mechanism. The constraint specification of each behavior provides the necessary control-oriented information about a desired movement. It provides an approach to properly control all joints and treat the mobile manipulator as a whole.
Afterwards, the fusion component collects all these constraints, re-evaluates their appropriateness and finally determines the final output signals. Either class of classical behavior coordination mechanisms can still be used to calculate the weights during the fusion procedure. This method therefore unifies two different concepts: integrated control of the mobile manipulator and distributed specification of behaviors.

A. Constraints

Our approach is inspired by the work in [9], but in contrast to [9], and in line with the behavior-based design philosophy, it tries to minimize the amount of modeling involved. In general, a constraint can be expressed as:

f_i(q) = d    (1)

which expresses that f_i(q) should achieve the desired value d, where q is a vector containing the joint angles of the mobile manipulator. This constraint can express a geometric relationship (e.g. the end-effector moves on a line), a force relationship (e.g. the end-effector exerts a given force in a given direction), or some other relationship involving other sensors. Equation (1) can also be used to describe a force relationship when part of the

Fig. 1. Different coordinates of the mobile manipulator (LiAS)

environment is also modeled; the combined stiffness of the environment and the robot then has to be known. The constraints can be expressed in different coordinate frames, as shown in Fig. 1. A particular coordinate frame can sometimes describe a constraint expression more easily for a certain control purpose.
• Platform Frame {F_pf} is located at the contact point of the wheel of the mobile platform, under the moving axis.
• Base Frame {F_base} is parallel to {F_pf} but has its origin at the mounting point of the manipulator arm.
• End-effector Frame {F_ee} is located at the end-effector of the gripper with its Z-direction pointing outwards. The force sensor uses the same coordinate frame as {F_ee}.
• World Frame {F_w} is the absolute world coordinate frame.
Generally speaking, a constraint equation only needs to define force, position or velocity conditions, and leaves the rest as don't-care conditions. Therefore, the final specification can be over-constrained, where more constraints are specified than there are degrees of freedom, or under-constrained, where free degrees of freedom remain. In the over-constrained case, some constraints influence one another and may lead to competing requirements; the constraint resolver takes the weight of each constraint into account and finds an approximate solution. In the under-constrained case, several joint configurations realize the required end-effector position; the constraint resolver considers additional null space criteria to determine the best solution. The constraint expressions for each behavior are introduced in Section V.

B. Resolving Constraints

For each constraint given by equation (1), a first-order controller can be designed that has equation (1) as its steady-state solution:

(∂f_i/∂q) q̇ = K(d − f_i(q))    (2)

The behavior fusion module takes together the constraints given by different behaviors, and computes an optimal value for q̇, attempting to maintain all constraints as much as possible. When all constraints are taken together, equation (2) becomes:

J(q) q̇ = D    (3)

where J(q) is the m × n analytical Jacobian matrix of the kinematically redundant mobile manipulator, which has a larger number of degrees of freedom n than the dimension of the workspace m, and D is the vector of constraint values. In (3), each column of J(q) represents the contribution of one joint, while each row corresponds to one constraint; each constraint is a specification for the relative motion of a feature of an object with respect to other features or objects. Constraints are not always equally important. Therefore, a constraint weight matrix W_c is introduced. This weight matrix defines a norm in constraint space:

‖D − D₀‖²_{W_c} = (D − D₀)ᵀ W_c (D − D₀)    (4)

Similarly, not all joints are equally important. Some joints are translational, others rotational, and sometimes we want the platform to move less relative to the manipulator. For these reasons a joint weight matrix W_j is introduced. This weight matrix defines a norm in joint space:

‖q̇ − q̇₀‖²_{W_j} = (q̇ − q̇₀)ᵀ W_j (q̇ − q̇₀)    (5)
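As an illustrative sketch (numbers and names are mine, not the paper's), each behavior's first-order controller from equation (2) contributes one row of J and one entry of D, and a diagonal weight matrix such as W_c simply scales each squared error term in the norm of equation (4):

```python
import numpy as np

# Each constraint i contributes the row df_i/dq of J and the entry
# K*(d_i - f_i(q)) of D, so that all behaviors together give J @ qdot = D.
def stack_constraints(constraints, q, K=1.0):
    J = np.vstack([c["grad"](q) for c in constraints])
    D = np.array([K * (c["d"] - c["f"](q)) for c in constraints])
    return J, D

# Two toy constraints on a 3-DOF joint vector q.
constraints = [
    {"f": lambda q: q[0] + q[1], "grad": lambda q: np.array([1.0, 1.0, 0.0]), "d": 0.5},
    {"f": lambda q: q[2],        "grad": lambda q: np.array([0.0, 0.0, 1.0]), "d": 0.2},
]
J, D = stack_constraints(constraints, np.zeros(3))
print(J.shape)              # (2, 3): one row per constraint

# The weighted norm (4) with a diagonal Wc scales each squared error:
Wc = np.diag([4.0, 1.0])    # first constraint counts four times as much
e = D - np.zeros(2)         # error D - D0, with D0 = 0 for illustration
print(e @ Wc @ e)           # 4*0.5^2 + 1*0.2^2 ≈ 1.04
```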

With the above definitions, the specification of the behavior fusion module can be formalized as the following optimization problem:

min_q̇ ‖J(q) q̇ − D‖_{W_c}    (6)

and among all solutions q̇ that minimize equation (6), choose:

argmin_q̇ ‖q̇ − q̇_d‖_{W_j}    (7)



This optimization problem can be solved by using the weighted pseudo-inverse [10]:

q̇ = J# D + (I − J# J) q̇_d    (8)

J# = W_j^(−1/2) (W_c^(1/2) J W_j^(−1/2))⁺ W_c^(1/2)    (9)

where q̇_d is a desired joint velocity and (I − J# J) q̇_d is its projection onto the null space of the constraints. This is a component of the solution that does not change the constraint norm in equation (6). Sometimes a behavior does not relate to the end-effector, but directly to the joint angles of the mobile platform. E.g., a behavior could express the wish that the joints of the robot arm approach a given value, as long as this does not conflict with the given constraints. This can be expressed as a null space criterion and given to the behavior fusion module.
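A minimal NumPy sketch of the fusion step of equations (6)–(9), assuming diagonal weight matrices; the function and variable names are mine, and the weighted pseudo-inverse form used is one standard choice, not quoted from the paper:

```python
import numpy as np

def resolve(J, D, qdot_d, wc, wj):
    """Weighted least-squares fusion with a null-space term.

    J: (m, n) stacked constraint Jacobian, D: (m,) constraint values,
    qdot_d: (n,) desired joint velocity, wc/wj: positive diagonal weights.
    """
    Wc_h = np.sqrt(wc)[:, None]    # Wc^(1/2), scales the rows of J
    Wj_ih = 1.0 / np.sqrt(wj)      # Wj^(-1/2), scales the columns of J
    Jw = Wc_h * J * Wj_ih          # Wc^(1/2) J Wj^(-1/2)
    Jw_pinv = np.linalg.pinv(Jw)   # (n, m) Moore-Penrose pseudo-inverse
    J_sharp = Wj_ih[:, None] * Jw_pinv * np.sqrt(wc)  # weighted pseudo-inverse
    n = J.shape[1]
    # Particular solution plus null-space projection of qdot_d, as in (8).
    return J_sharp @ D + (np.eye(n) - J_sharp @ J) @ qdot_d

# One constraint, two joints: the null-space part of qdot_d survives
# without disturbing the constraint J @ qdot = D.
J = np.array([[1.0, 1.0]])
qdot = resolve(J, np.array([1.0]), np.array([1.0, -1.0]),
               wc=np.array([1.0]), wj=np.array([1.0, 1.0]))
print(qdot)                        # [ 1.5 -0.5]; J @ qdot still equals 1.0
```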

Fig. 2. Software architecture of constraint-based behavior fusion mechanism

IV. SOFTWARE ARCHITECTURE

Pure behavior-based systems suffer from a lack of planning abilities and decision-making mechanisms: each behavioral unit performs its own reactive actions based on perceptual signals, and all units work simultaneously. In order to extend their use to more complex tasks, a hybrid architecture was proposed that adds a deliberate layer on top of the existing behavior-based system [11]. In a hybrid architecture, the deliberation layer decides which behaviors are activated, and when, according to the system missions. The hybrid architecture thus combines aspects of two different philosophies: natural reactive abilities and social purposes. Maja J. Mataric proposed the idea of an Abstract Behavior Layer (ABL) to systematically group the pre-conditions and post-conditions of behavioral units into a separate layer, leaving only pure primitive behaviors to be activated by a flexible decision-making network: the Abstract Behavior Network (ABN) [12]. In this method, primitive behaviors can easily be reused by re-defining the ABL networks, and the network can theoretically be learned through human-robot interaction with proper learning techniques. In our case, however, this network could become too complex to learn, because only pure primitive behaviors operate in this architecture and there is no proper cooperation mechanism among behaviors. If two competing behaviors are activated together, one of them will be completely inhibited by the ABN instead of a proper compromise being found between them. This limitation restricts the flexibility of the behaviors and forces the ABN to make difficult decisions. We propose a constraint-based behavior fusion method to properly fuse the different behavioral units. Every behavioral unit maintains its own integrity and concentrates on performing its behavior only. Thanks to this fusion mechanism, programmers can increase the intelligence of a behavior, which decreases the complexity of the decision-making network; the network therefore becomes easier to learn. This software architecture also supports various behavior fusion mechanisms.

Fig. 3. The activity diagram of constraint-based behavior fusion

It is based on the hybrid architecture and divided into three layers: the Deliberate Layer, the Behavior Coordination Layer, and the Hardware Abstraction Layer, as shown in Fig. 2.
• The "Hardware Abstraction Layer" (HAL) isolates the hardware-dependent drivers behind virtual devices. Devices with similar functionalities are grouped together and share the same interface. This provides the freedom to use components from different vendors and supports the allocation of redundant sensors. Such allocation makes hardware fault tolerance possible, since the signals from one particular sensor can be substituted by those of a similar one. For example, a distance measurement can be obtained either from a laser range sensor or from a vision acquisition device.
• The "Behavior Coordination Layer" (BCL) contains the intelligent behaviors and a fusion component. This is where the behaviors interact. Activated behaviors send constraints and desired weights to the fusion component based on their goals. The fusion component uses the constraint-based behavior fusion mechanism to fuse the different constraints. Joint space weights and null space criteria are taken into account before the fusion component calculates the final weights. These weights and criteria can be further fine-tuned by other agents in order to meet different requirements; for example, a Self-Healing agent can disable a broken joint by setting its joint space weight to zero. Finally, the BCL produces the output signals for the HAL. The corresponding activity diagram is shown in Fig. 3.
• The "Deliberate Layer" (DL) is responsible for higher-level commands, such as decision-making and task-planning. It contains a Task Manager, an Event Monitor and a Natural Training Agent. The task manager activates the required behaviors and chooses the relevant transitions to complete a sub-task of a task. After particular transitions are reached, the task manager de-activates behaviors and switches to another sub-task. A complex task can be achieved by sequentially combining these sub-tasks. The network of behaviors and transitions can be learned by natural training approaches.

V. A DOOR OPENING EXAMPLE

The experiment is performed on our mobile manipulator (LiAS: Leuven intelligent Autonomous System), which comprises a CRS A465 industrial manipulator and a mobile platform. The end-effector is equipped with a six-degree-of-freedom force sensor and a gripper. The arm controller is a real-time Linux computer running RTAI/LXRT with OROCOS [13]. The platform is controlled by another Linux computer running MoRE (Mobile Robot Environment) and connects to the arm controller via network communication. OROCOS also provides a convenient library, the Kinematics and Dynamics Library (KDL), to perform the kinematic calculations. In order to perform tasks, some intelligent elementary behaviors have to be designed that perform basic actions. Each behavioral instance inherits from a behavior class written as an OROCOS component. The output of each behavior is a constraint with a desired constraint space weight (W_d,c). The intelligent elementary behaviors are listed below; the task space frames are specified in Fig. 1.
• Force Following behavior (FF): It controls the end-effector to move freely along one or more directions according to the user's interactions. It can work in {F_ee} or in {F_w}, in a single direction or in multiple directions. Equations (10) and (11) specify force following in the X, Y, Z directions while keeping all rotational angles fixed in the world frame.





F_x,{F_w} = F_y,{F_w} = F_z,{F_w} = 0    (10)

ω_x,{F_w} = ω_y,{F_w} = ω_z,{F_w} = 0    (11)

• Turn behavior (TU): It controls the end-effector to rotate about a certain axis with a desired torque. The axis and direction of rotation can be specified according to the application. Note that clockwise versus counterclockwise rotation is automatically tried out by this intelligent behavior.

T_z,{F_ee} = const    (12)

V_z,{F_ee} = 0    (13)

• Pull/Push behavior (PU): It controls the end-effector to move along a certain axis with a desired force. It can act in {F_ee} or in {F_w} along one direction. The pull or push direction can also be tried out by this intelligent behavior.

F_z,{F_ee} = const    (14)

ω_x,{F_ee} = 0    (15)
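The behaviors above can be sketched as components that each emit constraint rows with desired constraint-space weights, which the fusion component simply concatenates. This is an illustrative sketch under my own naming assumptions, not the paper's OROCOS API:

```python
import numpy as np

# Each behavior returns (quantity, target, weight) tuples; the quantity
# labels are shorthand for the constrained coordinates in {Fw} or {Fee}.
def force_following():
    # Eq. (10)-(11): zero force and zero angular velocity in {Fw}.
    rows = ["Fx_w", "Fy_w", "Fz_w", "wx_w", "wy_w", "wz_w"]
    return [(r, 0.0, 1.0) for r in rows]

def pull_push(F_des=10.0):
    # Eq. (14)-(15): constant force along Z of {Fee}, no rotation about X.
    return [("Fz_ee", F_des, 2.0), ("wx_ee", 0.0, 1.0)]

def fuse(*behaviors):
    """Concatenate all active behaviors' constraint targets and weights."""
    specs = [c for b in behaviors for c in b()]
    targets = np.array([t for _, t, _ in specs])
    weights = np.array([w for _, _, w in specs])
    return targets, weights

targets, weights = fuse(force_following, pull_push)
print(len(targets))   # 8 constraint rows in total
```

The weight vector would then populate the diagonal of W_c before the constraint resolver runs.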

The behavioral diagram is shown in Fig. 4. Initially, the robot is equipped with several elementary behaviors which represent its different abilities. Then, the network connections are established based on the task-dependent mission.

Fig. 4. The behavior diagram (S: sub-task, B: behavior, T: transition)

In order to terminate the activating signals of behaviors, the task manager combines the relevant transition signals to complete a sub-task. These transition signals are general conditions that can be physically monitored; they are all implemented as OROCOS components. Some transition signals for this door-opening example are:
• Value Reached transition (VR): certain threshold values for a force or torque in one or several directions are met.
• Time Interval transition (TI): a time delay required for a certain condition.
• Relative Position transition (RP): since we do not have a complete model of the environment, a relative relationship between the end-effector and the robot itself can serve as a transition condition, for example when the end-effector moves outside of the platform range.
• Sensor Input transition (SI): an input signal from a sensor.
By combining these elementary behaviors and transition signals, the robot gains the ability to perform a task. For example, in Fig. 4, Sub-Task 1 (S1) activates the Force Following (FF) behavior, and switches to Sub-Task 2 (S2) when a Sensor Input (SI) is detected. Because the constraint-based behavior fusion mechanism handles the low-level behavioral coordination, the task-dependent behavioral network does not need to worry about activating conflicting behaviors at the same time. In the implementation, joint number two is set to maintain 45 degrees as a null space criterion during task execution, and higher joint space weights are set to decrease the jerk of the platform. The experimental photo is shown in Fig. 5. A door-opening example performed by a mobile manipulator with the constraint-based behavior fusion mechanism is presented: after being guided to the door handle by the user, the robot autonomously opens the door. The fusion mechanism affects the actual output commands by monitoring the current environmental conditions.
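The task manager described above can be sketched as a small state machine over sub-tasks and transition signals. The structure and the mapping of sub-tasks beyond S1/SI are my assumptions for illustration, not the exact network of Fig. 4:

```python
# Each sub-task activates behaviors and waits for a transition signal
# before the task manager switches to the next sub-task.
SUBTASKS = {
    "S1": {"behaviors": ["FF"], "transitions": {"SI": "S2"}},
    "S2": {"behaviors": ["TU"], "transitions": {"VR": "S3"}},
    "S3": {"behaviors": ["PU"], "transitions": {"RP": "S4"}},
    "S4": {"behaviors": [],     "transitions": {}},   # task finished
}

def run_task(signals, start="S1"):
    """signals: sequence of fired transition signals from the monitors."""
    state, trace = start, []
    for sig in signals:
        trace.append((state, SUBTASKS[state]["behaviors"]))
        nxt = SUBTASKS[state]["transitions"].get(sig)
        if nxt:
            state = nxt
    return state, trace

final, trace = run_task(["SI", "VR", "RP"])
print(final)   # "S4"
```

Because the fusion layer resolves conflicts between simultaneously active behaviors, the network only has to encode this sequencing, not low-level coordination.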
It gathers all constraint equations and the corresponding desired weights from the active behaviors during each servo cycle and recalculates these weights to decide the final task space weights. It also includes the null space criteria and joint space weights before running the constraint resolver. One can, for instance, set a desired value for a certain joint during task execution, or set the null space criteria to make the arm move faster than the platform, which improves the overall response time. For example, during the experiment, the

Fig. 5. LiAS opening a door

Fig. 6. Trajectory of the gripper during the door-opening example

joint number two always remained at its preferred position of 45 degrees. One important characteristic of this example is that the system does not contain a complete mathematical model of the environment. The robot has sufficient knowledge about itself, and it can use a relative mapping from itself to the environment in order to sense the environment. Since there is no complete model of the environment, the intelligent behaviors are in charge of the local interaction with the environment through proper constraint equations. In this experiment, a description of pulling along the end-effector axis in the constraint equation results in a nearly circular trajectory while opening the door (see Fig. 6). This method may not be the optimal way to open a door, but it is a more general one, with fewer specifications, and it can tolerate a large range of uncertainty in the environment.

VI. FUTURE WORKS

An intelligent and efficient human-robot interaction (HRI) is considered the key element for integrating robotic applications into human daily life, while programming by demonstration (PbD) is a good approach for robots to learn tasks from humans. It is challenging to combine PbD techniques and

behavior-based architectures, which can select and sequence the corresponding behaviors needed for a demonstrated task. One advantage of the constraint-based behavior fusion method is the reduced complexity of the behavioral network. In this context, a complex task can be achieved by setting up a sequential combination of behaviors and transitions. Although the behavioral diagram is currently hard-wired for this example, these connections and transitions can be acquired by learning techniques. Thanks to the fusion ability, the intelligent behaviors concentrate on performing their own goals without worrying about coordination with other behaviors. Thus, the deliberate network is simplified and becomes easier to learn; it can focus on defining the correct sequential combination of the network through HRI, such as demonstration. Learning techniques can be applied to learning the behaviors and to learning the network respectively. Behavior learning focuses on how a behavior unit can perform better with respect to its original goal. Network learning focuses on extracting features from the user's interactions to form a possible sequential combination of the network. Theoretically, it is feasible to recognize a certain behavior from some significant features, since certain relationships exist between the sensor data and the user's guidance. This can be used to determine proper candidates among the behaviors from a demonstration process. It may require not only action selection or behavior sequencing algorithms, but also pattern recognition or learning techniques to improve the efficiency of learning a task. Nevertheless, good features remain the most important element of the learning mechanisms. Q. Wang proposed an approach using a 6-dimensional force-controlled robot to perform tasks by human demonstration [14].
By defining all possible combinations of velocity commands, force readings, torque readings, and velocity and position profiles, a robot can eliminate impossible cases during the demonstration process and further combine segmented sub-tasks into a complex task. A. Skoglund generates a pick-and-place trajectory from human finger indications together with a comparison of velocity and command profiles [15].

VII. CONCLUSION

A unified constraint interface for fusing behaviors is proposed, providing an approach that solves the kinematic redundancy and at the same time fuses the constraints of different behaviors. This allows us to use behavior-based systems for a more complex range of applications, such as a mobile manipulator opening a door. The method not only maintains the design concept of creating intelligent behaviors as individual perceive-act units, but also extends the opportunity and flexibility to handle competing behaviors and achieve a more comprehensive result. Secondly, the complexity of the deliberate network that activates the behaviors needed to execute a given task can be reduced. This method provides a bridge between the well-structured decision-making network and the control-oriented intelligent behaviors. In future work, a natural training method will be developed to automatically construct the network from a human demonstration.

ACKNOWLEDGMENT

This work has been sponsored by the concerted research action project (ACDPS) of K.U.Leuven in Belgium. Thanks also to the Taiwan Merit Scholarship (NSC-095-SAF-I-564802-TMS) in 2007.

REFERENCES
[1] H. Bruyninckx and J. De Schutter, "Specification of force-controlled actions in the task frame formalism - a synthesis," IEEE Transactions on Robotics and Automation, vol. 12, no. 4, pp. 581-589, 1996.
[2] J. De Schutter and H. Van Brussel, "Compliant robot motion I. A formalism for specifying compliant motion tasks," The International Journal of Robotics Research, vol. 7, no. 4, p. 3, 1988.
[3] C. de Wit, G. Bastin, and B. Siciliano, Theory of Robot Control. Springer-Verlag New York, Inc., Secaucus, NJ, USA, 1996.
[4] R. Brooks, "A robust layered control system for a mobile robot," IEEE Journal of Robotics and Automation, vol. 2, no. 1, pp. 14-23, Mar. 1986.
[5] Z. Wasik and A. Saffiotti, "A hierarchical behavior-based approach to manipulation tasks," in Proceedings of the IEEE International Conference on Robotics and Automation (ICRA '03), vol. 2, pp. 2780-2785, Sept. 2003.
[6] B. J. W. Waarsing, "Behaviour-based mobile manipulation," Ph.D. dissertation, K.U.Leuven, Jun. 2004.
[7] D. MacKenzie and R. Arkin, "Behavior-based mobile manipulation for drum sampling," in Proceedings of the 1996 IEEE International Conference on Robotics and Automation, vol. 3, 1996.
[8] P. Pirjanian, "Behavior coordination mechanisms - state-of-the-art," Institute for Robotics and Intelligent Systems, University of Southern California, Los Angeles, Tech. Rep. IRIS-99-375, 1999.
[9] J. De Schutter, T. De Laet, J. Rutgeerts, W. Decre, R. Smits, E. Aertbelien, K. Claes, and H. Bruyninckx, "Constraint-based task specification and estimation for sensor-based robot systems in the presence of geometric uncertainty," The International Journal of Robotics Research, vol. 26, no. 5, p. 433, 2007.
[10] Y. Nakamura, Advanced Robotics: Redundancy and Optimization. Addison-Wesley Longman Publishing Co., Inc., Boston, MA, USA, 1990.
[11] L. Petersson, M. Egerstedt, and H. Christensen, "A hybrid control architecture for mobile manipulation," in Proceedings of the 1999 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS '99), vol. 3, 1999.
[12] M. N. Nicolescu and M. J. Mataric, "A hierarchical architecture for behavior-based robots," in AAMAS '02: Proceedings of the First International Joint Conference on Autonomous Agents and Multiagent Systems. New York, NY, USA: ACM, 2002, pp. 227-233.
[13] H. Bruyninckx, "Open robot control software: the OROCOS project," in Proceedings of the 2001 IEEE International Conference on Robotics and Automation (ICRA), vol. 3, pp. 2523-2528, 2001. [Online]. Available: http://www.orocos.org/
[14] Q. Wang and J. De Schutter, "Towards real-time robot programming by human demonstration for 6D force controlled actions," in Proceedings of the 1998 IEEE International Conference on Robotics and Automation, vol. 3, pp. 2256-2261, May 1998.
[15] A. Skoglund, B. Iliev, B. Kadmiry, and R. Palm, "Programming by demonstration of pick-and-place tasks for industrial manipulators using task primitives," in Proceedings of the 2007 International Symposium on Computational Intelligence in Robotics and Automation (CIRA 2007), pp. 368-373, June 2007.
