Proceedings of the Sixth International Conference on Machine Learning and Cybernetics, Hong Kong, 19-22 August 2007

A MODIFIED BINARY PARTICLE SWARM OPTIMIZATION ALGORITHM FOR PERMUTATION FLOW SHOP PROBLEM

LEI YUAN^1, ZHEN-DONG ZHAO^2

^1 School of Communication and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210046, China
^2 Institute of Information and Networking, Nanjing University of Posts and Telecommunications, Nanjing 210046, China
E-MAIL: [email protected], [email protected]

Abstract:
In this paper, we propose a modified version of the binary particle swarm optimization algorithm (MBPSO) to solve combinatorial optimization problems. All particles are initialized as random binary vectors, and the Smallest Position Value (SPV) rule is used to construct a mapping from the binary space to the permutation space. We also propose new formulas to update the particles' velocities and positions. The algorithm is then applied to the permutation flow shop problem (PFSP). To avoid stagnation, local search and perturbation are employed to improve performance. The proposed algorithm is evaluated on the flow shop scheduling benchmarks given by Taillard [1]. Experimental results show that the algorithm with local search and perturbation is more effective.

Keywords:
Binary particle swarm optimization; Flow shop problem; Local search; Perturbation; SPV rule

1. Introduction

There are two different types of optimization problems: numerical optimization and combinatorial optimization. The solution of the former is a set of numbers, while the solution of the latter is a permutation or a combination. In this paper, we focus on the latter. The solution space of a given problem is sometimes so large that exact methods such as dynamic programming [2-3] and branch-and-bound algorithms [4] become too time-consuming. Heuristic methods have therefore been proposed, such as simulated annealing (SA) [5], tabu search (TS) [6], genetic algorithms (GA) [7], and ant colony optimization (ACO) [8], which offer a compromise between computational expense and solution quality.

1.1. Permutation flow shop problem

The Permutation Flow Shop Problem (PFSP) is a typical combinatorial optimization problem and a particular case of the flow shop scheduling problem. The goal of the PFSP is to find an optimal schedule for n jobs on m machines. Each job must be processed on the m machines in the same fixed order; that is, a job can be processed on the jth machine only after it has finished on the (j-1)th machine and the jth machine is free. The time required to complete all jobs is called the makespan. Consequently, taking the makespan as the objective function to be minimized, solving the problem means determining the permutation that gives the smallest makespan value.

1.2. Particle swarm optimization

Particle Swarm Optimization (PSO) was first proposed by Kennedy and Eberhart [9] in 1995. PSO is a nature-inspired, iterative evolutionary algorithm in which the swarm is made up of a certain number of particles; in each iteration, all particles move in the N-dimensional search space looking for the global optimum. In 1997, Kennedy and Eberhart proposed a binary version of PSO [10], in which each dimension of a particle is set to 0 or 1. Whether continuous or discrete, the original and most essential idea of PSO is that difference in position leads to velocity, and velocity leads to search. This is captured by the original formulas used to update a particle's velocity and position:

v_i^{t+1} = w·v_i^t + C_1·r_1·(p_i^t − x_i^t) + C_2·r_2·(p_g^t − x_i^t)    (1)


x_i^{t+1} = x_i^t + v_i^{t+1}    (2)

where i denotes the ith particle in the swarm, t is the iteration number, v_i is the velocity vector of the ith particle, and x_i is its position vector. p_i is the personal best position reached by the ith particle, and p_g is the group (global) best position reached by the whole swarm. r_1 and r_2 are two random numbers, w is the inertia weight [11], and C_1 and C_2 are two constants, often called the acceleration coefficients.

The remainder of this paper is organized as follows. Section 2 gives a brief review of PSO algorithms applied to combinatorial optimization and of the original binary PSO. Section 3 describes the methodology of the modified binary PSO algorithm, Section 4 presents experimental results on the test problems, and Section 5 summarizes the paper.
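As a point of reference for the modifications introduced in Section 3, the sketch below illustrates the standard updates (1) and (2). It is only an illustration under our own naming; the swarm size, dimension and parameter values are arbitrary and are not taken from this paper.

    import numpy as np

    # Minimal sketch of the standard PSO update, equations (1) and (2).
    rng = np.random.default_rng(0)
    n_particles, n_dims = 20, 5        # illustrative sizes
    w, c1, c2 = 0.7, 1.5, 1.5          # illustrative inertia weight and acceleration coefficients

    x = rng.uniform(-1.0, 1.0, (n_particles, n_dims))   # positions
    v = np.zeros((n_particles, n_dims))                  # velocities
    p_best = x.copy()                                    # personal best positions
    g_best = x[0].copy()                                 # group best position (placeholder)

    def pso_step(x, v, p_best, g_best):
        """One synchronous update of all particles using (1) and (2)."""
        r1 = rng.random(x.shape)
        r2 = rng.random(x.shape)
        v_new = w * v + c1 * r1 * (p_best - x) + c2 * r2 * (g_best - x)   # equation (1)
        x_new = x + v_new                                                 # equation (2)
        return x_new, v_new

    x, v = pso_step(x, v, p_best, g_best)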

2. Previous works

2.1. PSO applied to combinatorial optimization

PSO was mainly applied to numerical optimization and performed quite well. Because PSO has advantages such as fast convergence and easy implementation, several approaches have been made to adapt it to combinatorial optimization problems. So far there are two major types of adjustment: discrete versions and continuous versions. Taking the flow shop problem as an example, the data structure of a particle in the discrete PSO (DPSO) proposed in [12-13] is a permutation of numbers, such as [4, 3, 2, 1, 5], which can be used directly as a solution. In this case, the definition of velocity and of the update operations must also be adjusted so that the updated solution remains feasible; these works propose different discrete operators to construct new, feasible formulas for updating a particle's velocity and position. M. F. Tasgetiren et al. [14] proposed a continuous PSO (CPSO), which keeps the original updating formulas (1) and (2) and represents each particle as an N-dimensional real vector. To map such a vector to a solution, Tasgetiren also proposed the SPV rule, which is efficient and easy to implement.

2.2. Binary PSO

Kennedy and Eberhart [10] proposed the first binary version of PSO. As each position component is fixed to 0 or 1, the velocity becomes the probability of that bit being 1 or 0, and a particle may be seen to move to nearer and farther corners of a hypercube by flipping various numbers of bits. The original position-updating formula (2) becomes:

if rand() < S(v_{id}) then x_{id} = 1, else x_{id} = 0    (3)

where S(v) is a logistic transformation, S(v) = 1 / (1 + e^{−v}).
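A minimal sketch of the binary update (3), assuming the standard logistic transformation of [10]; the velocity values and names are illustrative.

    import numpy as np

    rng = np.random.default_rng(1)

    def sigmoid(v):
        """Logistic transformation S(v) = 1 / (1 + exp(-v))."""
        return 1.0 / (1.0 + np.exp(-v))

    def bpso_position_update(v):
        """Binary position update of formula (3): each bit becomes 1
        with probability S(v_id) and 0 otherwise."""
        return (rng.random(v.shape) < sigmoid(v)).astype(int)

    v = np.array([-2.0, 0.0, 2.0, 4.0])   # example velocities
    x = bpso_position_update(v)           # each bit drawn independently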

3. MBPSO algorithm for PFSP

In this section, we first introduce the evaluation function for a typical permutation flow shop problem, then review the SPV rule proposed by M. F. Tasgetiren et al. [14], and finally present our modified binary PSO (MBPSO) with the SPV rule.

3.1. Objective function for PFSP

In the PFSP, a solution can be any permutation schedule of the n jobs. We are given the processing time t_{jk} for job j on machine k, and a job permutation π = {π_1, π_2, …, π_n}, where the n jobs (j = 1, 2, …, n) will be sequenced through the m machines (k = 1, 2, …, m).

Let C(π_j, m) denote the completion time of job π_j on machine m. Given the job permutation π = {π_1, π_2, …, π_n}, the completion times for the n-job, m-machine problem are calculated as follows:

C(π_1, 1) = t_{π_1,1}
C(π_j, 1) = C(π_{j−1}, 1) + t_{π_j,1},    j = 2, …, n
C(π_1, k) = C(π_1, k−1) + t_{π_1,k},    k = 2, …, m
C(π_j, k) = max{C(π_{j−1}, k), C(π_j, k−1)} + t_{π_j,k},    j = 2, …, n; k = 2, …, m

The objective function value (the makespan) is then defined as:

C_max(π) = C(π_n, m)    (4)

So the PFSP with the makespan criterion is to find a permutation π* in the set of all permutations Π which minimizes C_max as defined by (4).
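The recurrence above translates directly into code. The sketch below computes the makespan of a permutation from a processing-time matrix; the function name, variable names and example data are our own and purely illustrative.

    import numpy as np

    def makespan(perm, t):
        """Makespan C_max of a job permutation on an m-machine flow shop.

        perm : sequence of job indices (0-based), length n
        t    : processing times, shape (n_jobs, n_machines); t[j, k] is the
               processing time of job j on machine k
        """
        n, m = len(perm), t.shape[1]
        c = np.zeros((n, m))                       # c[i, k]: completion time of the i-th scheduled job on machine k
        c[0, 0] = t[perm[0], 0]
        for k in range(1, m):                      # the first job goes straight through the machines
            c[0, k] = c[0, k - 1] + t[perm[0], k]
        for i in range(1, n):
            c[i, 0] = c[i - 1, 0] + t[perm[i], 0]  # the first machine processes jobs back to back
            for k in range(1, m):
                c[i, k] = max(c[i - 1, k], c[i, k - 1]) + t[perm[i], k]
        return c[-1, -1]                           # C(pi_n, m), equation (4)

    # Example: 4 jobs on 3 machines with made-up processing times.
    t = np.array([[3, 2, 4],
                  [2, 5, 1],
                  [4, 1, 3],
                  [1, 3, 2]])
    print(makespan([1, 2, 3, 0], t))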

3.2. Smallest position value rule

The SPV rule proposed in [14] is a mapping from an N-dimensional continuous space to a sequence of N numbers. For a given continuous position such as [-1.2, 0, -0.9, 1.4], the corresponding sequence is given by the rank order of the components: sorting in descending order, -1.2 is the smallest value, so its dimension receives the value 4, and in this way we obtain the corresponding sequence [4, 2, 3, 1]. Thus, in Tasgetiren's PSO algorithm, after each movement a particle's position is transformed into a sequence and then evaluated. Because the position is continuous, however, a problem arises that affects the convergence rate of PSO. Figure 1 gives a simple example of two positions in the continuous space:

Position 1: [1.5, -1.3, 0, 1.3]
Position 2: [1.4, -1.2, 1, 1.2]

Figure 1. Positions in the continuous space

In Figure 1, we can see that although the two positions look quite different from each other, they represent the same sequence [1, 4, 3, 2]. In PSO it is the difference between positions that produces velocity and drives the search, so this kind of redundancy of the continuous space results in a decrease of the convergence rate. A convenient way to overcome this inefficiency is simply to quantize the space, so that a movement becomes a number of discrete "skips". This leads us to the modified binary PSO.

3.3. MBPSO with SPV rule

In our modified binary PSO, the position of each particle is a vector of 0s and 1s. To solve the PFSP, the data structure of each particle is given in Table 1:

Table 1. Data structure of a particle

                           Col. 1   Col. 2   Col. 3   Col. 4
Actual vector (bits)       1 0 1    0 1 1    0 0 1    1 1 1
Decoded vector               5        3        1        7
Corresponding sequence       2        3        4        1

As shown in Table 1, each column of a particle's position (a binary matrix) is converted to its decimal representation. For instance, the first column is 101, which decodes to 5, so the columns represent 5, 3, 1, and 7 respectively. According to the SPV rule, 7 is the largest value, so its permutation value is 1, and the permutation sequence represented by the particle is obtained as shown: [2, 3, 4, 1]. Note that the actual dimension of the position vector of each particle is 3 × 4 = 12; in general, to make the decoded vector feasible for an n-job PFSP, the dimension of the position vector should be at least ⌈n·log_2 n⌉. The inefficiency discussed above still exists, but this so-called "inefficiency" also prevents a particle from moving too quickly towards the personal best it has reached, which would cause premature convergence.

The velocity in the BPSO proposed in [10] is the probability for a bit to be 0 or 1, as defined in formula (3). When this formula is applied to our problem, however, the results are not satisfactory. Here we follow the basic idea of velocity and propose a new updating formula that is more suitable for our problem:

v_{id} = w·v_{id} + C_1·r_1·(p_{id} ⊕ x_{id}) + C_2·r_2·(p_{gd} ⊕ x_{id})    (5)

where v_{id}, x_{id}, p_{id} and p_{gd} denote the dth bit (dimension) of the velocity, position, personal best and group best vectors of the ith particle, and ⊕ is the exclusive-OR operator. The corresponding formula used by Kennedy and Eberhart is slightly different:

v_{id} = w·v_{id} + C_1·r_1·(p_{id} − x_{id}) + C_2·r_2·(p_{gd} − x_{id})

We denote (p_{id} − x_{id}) as part 1 and (p_{gd} − x_{id}) as part 2. Each part can be -1, 0 or 1, but because each dimension can only be 0 or 1, if part 1 is 1 then part 2 cannot be -1; the sign is therefore unnecessary. Thus, in our formula (5) the exclusive-OR operator is used instead of the minus. Since parts 1 and 2 in (5) indicate whether a bit is "different" from the personal or group best, our velocity becomes the probability for a bit to be reversed. A new formula for updating the position is proposed as well:

if rand() < S(v_{id}) then x_{id} = 1 − x_{id}, else x_{id} = x_{id}    (6)

that is, a bit is flipped with probability S(v_{id}) and left unchanged otherwise.
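The sketch below shows how such a binary particle can be decoded into a job permutation (columns of the binary matrix to decimal values, then the SPV rule) and how the modified updates (5) and (6) can be applied. It is a minimal illustration under our own naming, not the authors' code, and S is assumed to be the logistic transformation of [10].

    import numpy as np

    rng = np.random.default_rng(2)

    def decode(position_bits, n_jobs, bits_per_job):
        """Decode a flat 0/1 position vector into a job permutation.

        Each group of `bits_per_job` bits is one column of the binary matrix
        in Table 1 and is converted to its decimal value; the SPV rule then
        assigns permutation value 1 to the largest decoded value, 2 to the
        next largest, and so on.
        """
        cols = position_bits.reshape(n_jobs, bits_per_job)
        decoded = cols.dot(1 << np.arange(bits_per_job - 1, -1, -1))   # e.g. [1, 0, 1] -> 5
        ranks = np.empty(n_jobs, dtype=int)
        ranks[np.argsort(-decoded, kind="stable")] = np.arange(1, n_jobs + 1)
        return ranks                                                   # [5, 3, 1, 7] -> [2, 3, 4, 1]

    def sigmoid(v):
        return 1.0 / (1.0 + np.exp(-v))    # assumed logistic transformation S(v)

    def mbpso_update(x, v, p_best, g_best, w=0.33, c1=0.33, c2=0.33):
        """One particle update using the modified formulas (5) and (6)."""
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v_new = w * v + c1 * r1 * (p_best ^ x) + c2 * r2 * (g_best ^ x)   # equation (5): XOR marks differing bits
        flip = rng.random(x.shape) < sigmoid(v_new)                       # equation (6): flip with probability S(v)
        x_new = np.where(flip, 1 - x, x)
        return x_new, v_new

    # The particle of Table 1: 4 jobs, 3 bits per job.
    x = np.array([1, 0, 1,  0, 1, 1,  0, 0, 1,  1, 1, 1])
    print(decode(x, n_jobs=4, bits_per_job=3))        # -> [2 3 4 1]
    v = np.zeros(x.shape)
    x_next, v_next = mbpso_update(x, v, p_best=x.copy(), g_best=x.copy())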

Although the modification here results in a higher convergence rate, experiments show that the plain MBPSO stagnates very easily. Reviewing the data structure of each particle, we find that if a bit is identical to the corresponding bit of the personal best or the global best, then according to our definition of velocity it is unlikely to be reversed during the entire evolution, which may lead to a considerable chance of premature convergence. Although the randomly initialized velocities and the inertia weight prevent this to some extent, additional mechanisms are needed to improve the performance. Therefore, local search and random perturbation are introduced into our algorithm and bring better performance. The proposed MBPSO algorithm is presented in Figure 2:

t := 0;
for (i = 1, …, N) {
    Generate x_i^t;
    p_i^t := x_i^t;
    Derive the corresponding permutation π_i^t;
    Evaluate π_i^t by formula (4) and store the value e(π_i^t) with p_i^t;
}
p_g^t := the p_i^t having min{ e(π_k^t), k = 1, …, N };
for (i = 1, …, N) {
    Generate v_i^t;
}
do {
    for (i = 1, …, N) {
        update the velocity by formula (5);
        update the position by formula (6);
        apply local search and perturbation;
        update the personal best p_i^t;
    }
    update the group best p_g^t;
    t := t + 1;
} while (t < t_max)

Figure 2. The proposed MBPSO algorithm

3.4. Local Search and Perturbation Mechanisms

Without local search, we found that MBPSO stagnates quite easily at an early stage and hardly makes any progress after that. We therefore introduce a local search applied directly to the binary vector, in which each bit is reversed in turn and the reversal is accepted if it yields an improvement. Experiments show that this local search improves the performance of our MBPSO.

To further decrease the chance of premature convergence, perturbation is introduced as well. If a particle's evaluation does not change for a given number of iterations (for example, 10 iterations), the particle is forced to restart at a random point with a random velocity. Experiments show that this mechanism improves the performance further.
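The following sketch illustrates the bit-flip local search and the restart perturbation described above, assuming a makespan-style evaluation function where lower is better; function names and the toy usage are our own and only illustrative.

    import numpy as np

    rng = np.random.default_rng(3)

    def local_search(x, evaluate):
        """Bit-flip local search on a binary position vector.

        Each bit is reversed in turn; the reversal is kept only if it
        improves the evaluation (lower is better).
        """
        best = evaluate(x)
        for d in range(len(x)):
            x[d] = 1 - x[d]
            val = evaluate(x)
            if val < best:
                best = val            # improvement: keep the flip
            else:
                x[d] = 1 - x[d]       # no improvement: undo the flip
        return x, best

    def perturb(x, v, stagnation, limit=10):
        """Restart a stagnated particle at a random point with a random velocity."""
        if stagnation >= limit:
            x = rng.integers(0, 2, size=x.shape)
            v = rng.uniform(-1.0, 1.0, size=x.shape)
            stagnation = 0
        return x, v, stagnation

    # Toy usage: minimize the number of 1-bits in an 8-bit vector.
    x0 = rng.integers(0, 2, size=8)
    x1, best = local_search(x0.copy(), evaluate=lambda b: int(b.sum()))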

4. Experimental testing

By trial and error, the MBPSO parameters are fixed as follows: the size of the population is 50, the number of iterations is 100, the number of stagnant iterations after which a particle restarts is 10, and w = 0.33, C_1 = 0.33, C_2 = 0.33. The proposed MBPSO is used to solve the benchmark PFSPs of Taillard [1]. The quality of the solutions generated by the proposed algorithm is compared with the HDPSO algorithm of Chandrasekaran et al. [13] using a quality measure called the relative percent deviation (RPD) of the makespan value, denoted Δ and computed by formula (7):

Δ = (C_min − C*) / C* × 100    (7)

Here, C_min represents the best solution obtained by the proposed algorithm and C* represents the best makespan value reported in the literature. For each instance in the benchmark, the experiment is repeated 15 times. The results are given in Table 2, where Δ_avg denotes the average RPD from the optimal or best-known solutions, Δ_max and Δ_min denote the maximum and minimum RPD from the optimal or best-known solutions, and σ_RPD denotes the standard deviation of the RPD. The effectiveness of the local search and perturbation mechanisms is also tested, and the corresponding results are presented in Table 3. The performance of the proposed MBPSO is compared with the HDPSO algorithm proposed by Chandrasekaran et al. [13] on the benchmark FSPs, and the results are presented in Table 4.
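The RPD measure in (7) is straightforward to compute; the helper below is a small illustration with made-up makespan values.

    def rpd(c_min, c_star):
        """Relative percent deviation of formula (7)."""
        return (c_min - c_star) / c_star * 100.0

    # Example: best makespan found 1286 against a best-known value of 1278.
    print(round(rpd(1286, 1278), 4))   # about 0.626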


Table 2. Performance of MBPSO

Problem size (n×m)    Δ_avg     Δ_min    Δ_max     σ_RPD
20×5                  0.1609    0        0.5384    0.0019
20×10                 0.4862    0        1.0423    0.0028
20×20                 0.4344    0        0.8522    0.0023
50×5                  0.1837    0        0.3950    0.0012
50×10                 1.0510    0        1.8401    0.0046
50×20                 2.4296    0        3.3837    0.0049

Table 3. Effectiveness of the local search and perturbation

Variant                       Problem size (n×m)    Δ_avg     Δ_min     Δ_max     σ_RPD
MBPSO                         20×5                  0.1609    0         0.5384    0.0019
                              20×10                 0.4862    0         1.0423    0.0028
                              20×20                 0.4344    0         0.8522    0.0023
MBPSO without local search    20×5                  2.139     1.089     4.550     0.010
                              20×10                 4.342     2.272     7.249     0.013
                              20×20                 3.7009    2.0983    5.9995    0.0109
MBPSO without perturbation    20×5                  0.180     0.088     0.603     0.002
                              20×10                 0.519     0.135     0.841     0.002
                              20×20                 0.4907    0.2083    0.9770    0.0029

Table 4. MBPSO compared with HDPSO

Problem (n×m)   Instance    HDPSO Δ    MBPSO Δ
20×5            1           0          0
                2           0          0.0392
                3           0          0.2960
                4           0          0.1753
                5           0          0.0378
                6           0          0.1674
                7           0          0.8394
                8           0          0
                9           0          0.0542
                10          0          0
                Δ_avg       0          0.1609
20×10           11          0.7585     0.4214
                12          0.6028     0.4541
                13          0.6684     0.7799
                14          1.0893     0.2612
                15          0.7047     0.3899
                16          0.0716     0.2243
                17          0.6065     0.0719
                18          1.3654     0.7976
                19          0.0628     0.4436
                20          1.0685     1.0182
                Δ_avg       0.7        0.4862
20×20           21          0.566      0.5834
                22          0.5241     0.4444
                23          0.9458     0.5102
                24          0.3149     0.3869
                25          0.742      0.5005
                26          0.3594     0.3354
                27          0.5279     0.5602
                28          0.4091     0.4333
                29          0.2235     0.2295
                30          0.3214     0.3612
                Δ_avg       0.49       0.4345

5. Conclusions

PSO can solve both continuous numerical and combinatorial optimization problems, and several methods modify PSO to solve FSPs and PFSPs. In this paper, a modified version of the binary particle swarm optimization algorithm (MBPSO) is presented; to avoid stagnation in local optima, local search and perturbation are employed to improve its performance. The experiments demonstrate that MBPSO improves the solving of PFSPs. Several further directions, such as different local search mechanisms, different coding and decoding methods, or a new way to represent the solution, remain to be studied.

Acknowledgements

The authors thank Yuxuan Wang for his many comments, which helped improve the quality of this work.

References

[1] E. Taillard, "Benchmarks for basic scheduling problems," European Journal of Operational Research, vol. 64, pp. 278-285, 1993.
[2] M. Held and R. M. Karp, "A dynamic programming approach to sequencing problems," Journal of the Society for Industrial and Applied Mathematics, vol. 10, no. 1, pp. 196-210, 1962.


[3] K. R. Baker and L. E. Schrage, "Finding an optimal sequence by dynamic programming: an extension to precedence-related tasks," Operations Research, vol. 26, no. 1, pp. 111-120, 1978.
[4] C. N. Potts and L. N. Van Wassenhove, "A branch and bound algorithm for the total weighted tardiness problem," Operations Research, vol. 33, no. 2, pp. 363-377, 1985.
[5] H. Matsuo, C. J. Suh, and R. S. Sullivan, "A controlled search simulated annealing method for the single machine weighted tardiness problem," Annals of Operations Research, vol. 21, pp. 85-108, 1989.
[6] E. Nowicki and C. Smutnicki, "A fast tabu search algorithm for the permutation flow-shop problem," European Journal of Operational Research, vol. 91, pp. 160-175, 1996.
[7] C. R. Reeves, "A genetic algorithm for flowshop sequencing," Computers & Operations Research, vol. 22, no. 1, pp. 5-13, 1995.
[8] T. Stützle, "An ant approach to the flow shop problem," Proceedings of the 6th European Congress on Intelligent Techniques and Soft Computing, pp. 1560-1564, 1998.
[9] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," Proc. of the IEEE International Conference on Neural Networks, Piscataway, NJ, USA, pp. 1942-1948, 1995.

[10] J. Kennedy and R. C. Eberhart, "A discrete binary version of the particle swarm algorithm," Proceedings of the IEEE International Conference on Systems, Man, and Cybernetics, Piscataway, NJ: IEEE Press, pp. 4104-4108, 1997.
[11] Y. H. Shi and R. C. Eberhart, "Parameter selection in particle swarm optimization," The 7th Annual Conference on Evolutionary Programming, San Diego, USA, 1998.
[12] Quan-Ke Pan, M. Fatih Tasgetiren, and Yun-Chia Liang, "A discrete particle swarm optimization algorithm for the permutation flowshop sequencing problem with makespan criterion," The 26th SGAI International Conference on Innovative Techniques and Applications of Artificial Intelligence, 2006.
[13] S. Chandrasekaran, S. G. Ponnambalam, R. K. Suresh, and N. Vijayakumar, "A hybrid discrete particle swarm optimization algorithm to solve flow shop scheduling problems," 2006 IEEE Conference on Cybernetics and Intelligent Systems, pp. 1-6, June 2006.
[14] M. F. Tasgetiren, M. Sevkli, Yun-Chia Liang, and G. Gencyilmaz, "Particle swarm optimization algorithm for single machine total weighted tardiness problem," Congress on Evolutionary Computation 2004, vol. 2, pp. 1412-1419, June 2004.

