International Journal of Computer Science Research and Application 2013, Vol. 03, Issue 01 (Special Issue), pp. 39-47
ISSN 2012-9564 (Print), ISSN 2012-9572 (Online)
© Author Names. Authors retain all rights. IJCSRA has been granted the right to publish and share, Creative Commons 3.0

INTERNATIONAL JOURNAL OF COMPUTER SCIENCE RESEARCH AND APPLICATION www.ijcsra.org

Algorithm for Dynamic Partitioning and Reallocation of Fragments in a Distributed Database

Nicoleta Magdalena Ciobanu (Iacob) 1

1 Ph.D. Student, Faculty of Mathematics and Computer Science / Department of Computer Science, University of Pitesti, and Ph.D. Assistant Professor, Faculty of Finance, Banking and Accounting, "Dimitrie Cantemir" Christian University, ROMANIA, [email protected]
Address: Liviu Rebreanu Street, No. 46-58, 031793 Bucharest. Telephone: 0040757011895

Abstract

Given that the volume and diversity of data grow considerably year by year, the problem of efficient data management arises, because data must be available at any time and must be accurate. This paper proposes a heuristic algorithm for the fragmentation and reallocation of new fragments to other sites in an unbalanced system, in order to obtain an optimal dynamic distribution of data fragments at the smallest possible cost. The dynamic character of the model consists in the fact that a change in the access patterns (read, write) must lead to the re-fragmentation and reallocation of fragments and to the creation or deletion of fragment replicas (the fragment replicas can change their rights), depending on the users' data access histograms. These decisions are taken by algorithms using a cost function that estimates the difference in future communication costs between changing a given replica and keeping that replica in its current state. The proposed model can also be applied in parallel databases, because every site takes decisions about its own fragments and the decisions are taken without any inter-site synchronization.

Keywords: Distributed Database, Fragmentation, Allocation, Algorithm.

1. Introduction

The evolution of information technology has also generated a tremendous evolution of database systems, and in this context distributed database technology has changed the centralized point of view by offering major advantages.

Definition 1.1: According to (Elmasri & Navathe, 1999), we can define a distributed database (DDB) as a collection of multiple logically interrelated databases distributed over a computer network, and a distributed database management system (DDBMS) as a software system that manages a distributed database while making the distribution transparent to the user. Distribution of data is a collection of fragmentation, allocation and replication processes (Khan & Hoque, 2010). A distributed database system is a database system which is fragmented or replicated on various configurations of hardware and software, usually located at different geographical sites within an organization (Beynon-Davies, 2004).

Definition 1.2: Fragmentation. The system partitions a relation into several fragments and stores each fragment at a different site. Fragmentation is the partitioning of a global relation R into fragments R1, R2, ..., Rm containing enough information to reconstruct the original relation R (Ciobanu Iacob, 2011).

The following information is used to decide fragmentation:
- quantitative information: frequency of queries, the site where a query is run, selectivity of the queries, etc.;
- qualitative information: types of access to data, read/write, etc.

In terms of quantitative information about user applications, we need two sets of data:
- minterm selectivity: the number of tuples of the relation that would be accessed by a user query specified according to a given minterm predicate;
- access frequency: the frequency with which user applications access data. If Q = {q1, ..., qm} is a set of user queries, acc(qi) indicates the access frequency of query qi in a given period (Ozsu & Valduriez, 2011).

There are some basic rules to be followed in defining the fragments:
- the condition of completeness: the entire global relation should be partitioned into fragments;
- the condition of reconstruction: it must always be possible to reconstruct the global relation from its fragments;
- the condition of disjointness: the fragments must be disjoint, so that data replication can be controlled explicitly at the data allocation level.

Types of fragmentation:
- horizontal: a relational table is split so that some rows of the table are located at one site and other rows are located at another site. Each fragment then contains the same number of columns, but fewer rows;
- vertical: the columns of a table are divided among several sites on the network. Each fragment then contains the same number of rows, but fewer columns.

The advantages of fragmentation:
- usage: applications work with views rather than entire relations;
- efficiency: data is stored close to the place where it is most frequently used;
- parallelism: with fragments as the unit of distribution, a transaction can be divided into several subqueries that operate on fragments;
- security: data not required by local applications is not stored locally, and consequently is not available to unauthorized users;
- a drawback: the performance of global applications that require data from several fragments located at different sites may be slower (Iacob Ciobanu, 2011).

Fragmentation and replication can be combined: a relation can be partitioned into several pieces, and there can be multiple replicas of each fragment (Silberschatz et al., 2010).

Definition 1.3: Replication is the operation of storing portions of a database, as copies, on multiple nodes of a network. If a user updates a local copy, the DDBMS automatically updates all copies of that data. Assuming replicas are mutually consistent, replication improves availability, since a transaction can read any of the copies. In addition, replication provides more reliability, minimizes the chance of total data loss, and greatly improves disaster recovery. Although replication gives the system better read performance, it affects the system negatively when database copies are modified, because an update operation must be applied to all of the copies to maintain the mutual consistency of the replicated items (Rahimi & Haug, 2010).

The objective of data allocation is to find the optimal placement of fragments with minimal cost and maximum performance (Dumitru, 2005). Let us consider an unbalanced distributed database system formed by a number of n sites, S1, S2, ..., Sn, and a global table T, figure 1 (Ciobanu Iacob & Ciobanu Defta, 2012).
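To make the horizontal fragmentation of Definition 1.2 concrete for such a table T, here is a minimal sketch using plain Python structures; the table contents and the predicate on "region" are illustrative assumptions, not taken from the paper.

# A global table T, horizontally fragmented by a predicate on 'region';
# the fragments are disjoint and their union reconstructs T (Definition 1.2).
T = [
    {"id": 1, "region": "EU", "amount": 10},
    {"id": 2, "region": "US", "amount": 20},
    {"id": 3, "region": "EU", "amount": 30},
]

F1 = [row for row in T if row["region"] == "EU"]   # stored at site S1
F2 = [row for row in T if row["region"] != "EU"]   # stored at site S2

# Reconstruction condition: the union of the fragments gives back T.
assert sorted(F1 + F2, key=lambda r: r["id"]) == T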
A table T can be stored entirely on a single site Si, i = 1, ..., n, or it can be horizontally fragmented (F1, F2, ..., Fm) over a number of m sites.

Cost analysis of fragmentation. The cost for a relation that will be fragmented over m sites (Alom et al., 2009) is defined as the product of the actual number of tuples in the fragment (card(Fnew)), the number of attributes (Nat), the transmission cost per attribute (Cat) and the number of fragments of the partitioned relation:

TCO-F = card(Fnew) * Nat * Cat * m    (1)
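A minimal sketch of how this fragmentation cost could be computed; the function name and the example values are illustrative, not part of the paper.

def fragmentation_cost(card_fnew: int, n_attributes: int,
                       cost_per_attribute: float, n_fragments: int) -> float:
    """Equation (1): TCO-F = card(Fnew) * Nat * Cat * m."""
    return card_fnew * n_attributes * cost_per_attribute * n_fragments

# Example: a 10,000-tuple fragment with 8 attributes, transmission cost
# 0.02 per attribute, partitioned over 4 sites.
print(fragmentation_cost(10_000, 8, 0.02, 4))  # 6400.0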

An important result of this paper is the integration of fragmentation, replication and data allocation into a dynamic, completely decentralized and automated system model (the system continuously monitors the database and adjusts itself to the recent workload).


An important characteristic of the model consists in the fact that a change in the access patterns (read, write) must lead to the re-fragmentation and reallocation of fragments and to the creation or deletion of fragment replicas, depending on the users' data access histograms. Information regarding the fragmentation, the nodes where the copies are stored and the rights of the fragments (read/write) on each node is maintained by a common catalog service using a distributed hash table. Dynamic methods continuously monitor the database and adapt to the recent workload, whereas static methods are based on an offline analysis of database accesses. Database accesses occur continuously, and the corresponding statistics are stored in dynamic histograms which are maintained progressively.
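The catalog can be pictured as a mapping from fragment identifier to per-site access rights; a minimal, illustrative sketch in which a plain dictionary stands in for the distributed hash table (names such as register_replica are assumptions, not the paper's API).

# Illustrative catalog entry structure: fragment id -> {site: access right}.
catalog = {}

def register_replica(fragment_id, site, right):
    """Record that 'site' holds a replica of 'fragment_id' with 'right' ('read' or 'write')."""
    catalog.setdefault(fragment_id, {})[site] = right

def sites_with_right(fragment_id, right):
    """Return the sites holding a replica of the fragment with the given right."""
    return [s for s, r in catalog.get(fragment_id, {}).items() if r == right]

register_replica("F1", "S1", "write")
register_replica("F1", "S2", "read")
print(sites_with_right("F1", "read"))  # ['S2']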

Figure 1: Distributed database from the system model

The histogram is a collection of buckets, and every bucket is stored as a triplet (bp, R[bp], W[bp]), where p is the bucket number, R[bp] is the number of read accesses recorded in the bucket and W[bp] is the number of write accesses recorded in the bucket. These histograms are an approximation, because the details are limited to the histogram buckets. In order to improve the performance of histogram operations, the algorithm uses so-called equi-length histograms: because all the buckets have the same length (W), finding the correct bucket for a value is very simple. Every site offers a collection of histograms for every fragment for which it holds a local replica. In order to store only the recent accesses, two sets of histograms are used: the old and the actual one (Hauglid et al., 2010). The algorithms use both collections, keeping the same bucket length for both, and all operations are recorded in the actual collection. Periodically, the evaluation algorithms are executed and, as a result, the old collection (Ho) is deleted and replaced with the content of the actual collection (Hc); Hc is then cleared and used for new statistics. In this way it is guaranteed that only recent accesses are reflected in the statistics.
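A minimal sketch of the two-collection (old/actual) bookkeeping described above; the class and method names are illustrative assumptions.

# Each bucket is the triplet (bp, R[bp], W[bp]); two collections (old Ho and
# actual Hc) are kept per replica, and a periodic rotation discards old statistics.
from collections import defaultdict

class ReplicaStatistics:
    def __init__(self):
        # bucket number -> [reads, writes]
        self.Hc = defaultdict(lambda: [0, 0])  # actual collection
        self.Ho = defaultdict(lambda: [0, 0])  # old collection

    def record(self, bucket, is_write):
        self.Hc[bucket][1 if is_write else 0] += 1

    def rotate(self):
        """Periodic evaluation step: Ho is replaced by Hc, and Hc is cleared."""
        self.Ho = self.Hc
        self.Hc = defaultdict(lambda: [0, 0])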

Definition 1.4: A histogram is an approximation of a data distribution (Donjerkovic et al., 2000), obtained by partitioning the data into k (k ≥ 1) pairwise disjoint collections called buckets and describing every bucket by some concise information regarding the attribute values that fall in the bucket and their corresponding frequencies.

The creation of a histogram H related to replica R on the local site N (Ioannidis, 2003) consists of the following steps:
- n distinct points are read;
- the minimum value (Xmin) and the maximum value (Xmax) of the histogram are determined: Xmin ← min(Hc[N,R]) = min(v1(d), v2(d), ..., vn(d)); Xmax ← max(Hc[N,R]) = max(v1(d), v2(d), ..., vn(d));
- the length (W) of a bucket: W = vih(d+1) − vih(d);
- the number (kh) of buckets: kh = [(Xmax − Xmin)/W];
- the division points (borders) between buckets: vj(d) = Xmin + j*W, j = 0, ..., kh−1, where j is the bucket number.

Replica accesses are made at tuple level. Every time a tuple is accessed in one of the local replicas, the histogram is updated accordingly. The data update of the histogram is done as follows (a minimal sketch is given after this list):
- a new point x is read (the access of a tuple);
- the bucket in which x resides (bp ← x/W) is determined; if the access of a tuple falls beyond the range of buckets of the actual histogram (bp ∉ H), a new bucket is created for x;
- the value x is inserted in the corresponding bucket;
- the type of access for x is verified: if x is a read operation (If getRight(x)='read' and getRight(x)≠'write' Then), the number of reads (R[bp] ← R[bp]+1) in bucket bp is increased; otherwise (If getRight(x)='write' Then), the number of writes (W[bp] ← W[bp]+1) is increased;
- the points are uniformly redistributed over the value range of the bucket.
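A compact sketch of the per-tuple histogram update described above, under the simplifying assumption that the bucket number is computed as floor((x − Xmin)/W); the class and method names are illustrative.

import math
from collections import defaultdict

class EquiLengthHistogram:
    """Buckets of equal width W; each bucket keeps read and write counters."""
    def __init__(self, x_min, width):
        self.x_min = x_min
        self.W = width
        self.buckets = defaultdict(lambda: {"R": 0, "W": 0})  # bp -> counters

    def bucket_of(self, x):
        # Equi-length buckets make locating the bucket of a value trivial.
        return math.floor((x - self.x_min) / self.W)

    def record_access(self, x, access_right):
        bp = self.bucket_of(x)          # a new bucket is created implicitly if needed
        if access_right == "read":
            self.buckets[bp]["R"] += 1  # R[bp] <- R[bp] + 1
        else:
            self.buckets[bp]["W"] += 1  # W[bp] <- W[bp] + 1

h = EquiLengthHistogram(x_min=0, width=100)
h.record_access(250, "read")
h.record_access(260, "write")
print(dict(h.buckets))  # {2: {'R': 1, 'W': 1}}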

2. Partitioning of fragments and reallocation. Algorithm

The re-fragmentation and data allocation algorithm, which moves data to the database nodes where it is frequently accessed, has the role of minimizing the network traffic by identifying the parts of a table that must be extracted to form a new fragment and migrated to a remote site, taking into consideration the number of fragments on a node and their dimensions. A fragment reallocation, or the transfer of the write right (only in the case where one replica has read access and the other write access) between the fragment replicas situated on two sites, with the smallest load and the biggest benefit, is decided by taking into consideration the cost function. The benefit of migrating a replica (a write-access replica) from the local site N to the remote site N1 consists in the fact that remote writes become local operations; the cost consists of the writes at the local site and the cost of migration.

Reallocation of a fragment F from node N to node N1 (Petrescu Horvat, 2010) is necessary when:

a) The number of update requests received from node N1 (remote) is bigger than the number of requests received directly from node N (local):

nwr(N1) > nwl(N)  <==>  nwr(N1) − nwl(N) > 0    (2)

nwr(N1) ← writeremote(N1, R, bmin, bmax) represents the number of write accesses received by replica R, stored in the buckets [bmin, bmax], from the remote site N1.
nwl(N) ← writelocal(N, R, bmin, bmax) represents the number of write accesses received by replica R, stored in the buckets [bmin, bmax], from the local site N.

b) The cost of extraction and migration of the fragment Fnew is smaller than the difference between the cost of the requests received at node N1 and retransmitted to node N and the cost of the requests received from node N and transmitted to node N1.
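Condition (a) can be evaluated directly from the per-bucket write counters of the histograms; a minimal sketch, assuming the histogram structure from the previous sketch and illustrative function names.

def writes_in_range(histogram, b_min, b_max):
    """Sum the write counters W[bp] for buckets bp in [b_min, b_max]."""
    return sum(c["W"] for bp, c in histogram.buckets.items() if b_min <= bp <= b_max)

def reallocation_condition_a(remote_hist, local_hist, b_min, b_max):
    """Condition (a): remote write accesses exceed local ones, nwr(N1) - nwl(N) > 0."""
    nwr = writes_in_range(remote_hist, b_min, b_max)   # writeremote(N1, R, bmin, bmax)
    nwl = writes_in_range(local_hist, b_min, b_max)    # writelocal(N, R, bmin, bmax)
    return nwr - nwl > 0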

The extraction and migration cost of the fragment Fnew is determined as follows. Let P be the collection of sites holding replicas with read rights of fragment F, and Q the collection of sites holding replicas with write rights of fragment F. The matrix of transfer cost and latency between the nodes of the database, denoted C, is:

C = { C(P,Q) | P, Q nodes in the database }    (3)

Observation: the transfer and latency cost from node P to node Q is the same as the transfer and latency cost from node Q to node P:

C(P,Q) = C(Q,P)    (4)

In the sums below, the index Q ranges over the sites in the collection Q and the index P over the sites in the collection P. The cost of notifying the replicas with write rights (the sites in Q) and of obtaining an answer is:

CW = 2 * Σ_Q C(N,Q)    (5)

(The factor 2 appears because both the cost of the queries received at node Q and the cost of retransmitting them to node N are taken into consideration.)

The cost of notifying the replicas with read rights (the sites in P) and of updating their values is:

CR = Σ_P C(N,P)    (6)

The cost of extracting a fragment Fnew from the local site N is defined as the product of the extraction cost per attribute (Cet), the number of attributes (Nat) and the actual number of tuples in fragment Fnew (the transferred quantity), card(Fnew):

CE = card(Fnew) * Nat * Cet    (7)

The cost of migrating a fragment Fnew from site N to site N1, which is the principal cost of re-fragmentation and reallocation, is:

CM = 2 * Σ_Q C(N,Q) + 2 * Σ_Q C(N1,Q) + Σ_P C(N,P) + Σ_P C(N1,P) + Σ_N1 Cs    (8)

Cs represents the cost of storing the fragment Fnew at site N1 and is defined as the product of the storage cost per tuple at site N1 (Cst), the actual number of tuples in fragment Fnew and the decision variable y:

Cs = Cst * card(Fnew) * y,    (9)

where y = 1 if the fragment Fnew is stored on site N1, and y = 0 otherwise. The decision variable y is used to select only those cost values for the sites where the fragments are stored.

The total cost for the extraction and migration of a fragment Fnew from site N to site N1 is:

CT = CE + CM = CE + 2 * Σ_Q C(N,Q) + 2 * Σ_Q C(N1,Q) + Σ_P C(N,P) + Σ_P C(N1,P) + Σ_N1 Cs    (10)

The difference between costs (the cost of the requests received at node N1 and retransmitted to node N, and the cost of the requests received from node N and transmitted to node N1), which is in fact the benefit, is determined as:

DC = 2 * Σ_Q C(N1,Q) * (nwr(N1) − nwl(N)) + 2 * Σ_Q C(N,Q) * (nwr(N1) − nwl(N))    (11)

The difference between the query-solving costs on the two sites, N and N1, and the extraction and migration cost of the fragments (UtilEM = benefit − cost) must be positive:

UtilEM = DC − CT = (2 * Σ_Q C(N1,Q) * (FSB*nwr(N1) − nwl(N)) + 2 * Σ_Q C(N,Q) * (FSB*nwr(N1) − nwl(N))) − (FSC*CE + 2 * Σ_Q C(N,Q) + 2 * Σ_Q C(N1,Q) + Σ_P C(N,P) + Σ_P C(N1,P) + Σ_N1 Cs) > 0    (12)

The time necessary to update a fragment F on node N is very small and can be neglected (TA → 0).
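A minimal sketch of equations (7)-(12) as a single cost function; the communication-cost matrix, the access counters and the scaling factors FSC and FSB are passed in as plain parameters, and all names are illustrative, not the paper's API.

def util_em(C, N, N1, P, Q, card_fnew, n_att, c_et, c_st, stored_at_n1,
            nwr_n1, nwl_n, FSC=1.0, FSB=1.0):
    """UtilEM = DC - CT (equations (7)-(12)); a positive value justifies migration."""
    # Equation (7): extraction cost of the new fragment.
    CE = card_fnew * n_att * c_et
    # Equation (9): storage cost at N1, gated by the decision variable y.
    Cs = c_st * card_fnew * (1 if stored_at_n1 else 0)
    # Equations (8)/(10): migration cost plus (scaled) extraction cost gives CT.
    CM = (2 * sum(C[N][q] for q in Q) + 2 * sum(C[N1][q] for q in Q)
          + sum(C[N][p] for p in P) + sum(C[N1][p] for p in P) + Cs)
    CT = FSC * CE + CM
    # Equations (11)/(12): benefit of turning remote writes into local ones,
    # scaled by FSB to discourage oscillating migrations.
    DC = (2 * sum(C[N1][q] for q in Q) * (FSB * nwr_n1 - nwl_n)
          + 2 * sum(C[N][q] for q in Q) * (FSB * nwr_n1 - nwl_n))
    return DC - CT

# Example with two write-replica sites and one read-replica site (symmetric costs).
C = {"N": {"Q1": 2, "Q2": 3, "P1": 1},
     "N1": {"Q1": 1, "Q2": 1, "P1": 2}}
print(util_em(C, "N", "N1", P=["P1"], Q=["Q1", "Q2"],
              card_fnew=1000, n_att=5, c_et=0.001, c_st=0.0005,
              stored_at_n1=True, nwr_n1=40, nwl_n=5) > 0)  # True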

Given the statistics existing at every site, the proposed algorithm examines the accesses for every replica and evaluates the possible reallocations and re-fragmentations based on the recent history, using a cost function (UtilEM). The algorithm is executed at given time intervals, individually for every replica. If a site does not have a positive useful value, then no change is made on that site. Because the number of accesses of a replica covers only the recent history, the actual number of tuples in the fragment (the fragment dimension) is scaled with the weight FSC of the cost function, and in this way the decision is independent of the history length. Because migrations delay access to the tables, a migration must not be permitted when the number of local accesses is higher than the number of remote accesses (otherwise a fragment could be migrated continuously between sites, leading to an unstable situation). To reduce this problem, we scale the benefit part of the cost function by FSB ∈ [0, 1].

Optimizations. In order to reduce the costs, some optimizations can be made. At the beginning of an update transaction, data integrity is ensured by sending "warning" messages to the nodes that contain replicas with read rights which are going to be updated. After the update transaction finishes, only the nodes marked with a "warning" receive the updated value (thus limiting the network traffic), update the database catalogue, and then the "warning" is deleted. The algorithm can be further optimized using checksums on the same data collection: if a hash table was modified (its checksum differs), the table can be split into smaller fragments which are verified individually, so that only the modified data is updated, avoiding the traffic that would be needed to update the entire table (a minimal sketch of this check is given after the pseudocode below).

The extraction and migration algorithm of a fragment from node N to node N1 consists of the following steps:
- The collection of nodes P with read access and the collection of nodes Q with write access are determined.
- All the possible new fragments Fnew and the possible sites N1 are evaluated using the function UtilEM. The proposed heuristic consists of transmitting the fragment with the smallest communication cost to the sites where it is useful: Migrate(Fnew, N1). The selection of node N1 takes into consideration the best response time together with the best benefit; in this way, the load of every node and its distance from node N are considered.
- Then all the compatible fragmentations with positive values are performed. Two fragmentations are compatible if the extracted fragments do not overlap. If two fragmentations are incompatible, the fragmentation with the highest UtilEM value is chosen (see the sketch after this list).
- If a re-fragmentation decision is taken, all the sites with read rights are notified so that they can perform the same re-fragmentation. Because the update of the read replicas is made after all the replicas with write rights are updated, data consistency problems can appear. To resolve this, at the beginning of an update transaction the data consistency is ensured by transmitting warning messages to the sites that contain read replicas of the data that will be updated.
- To keep the number of fragments low, any adjacent fragments which have replicas on the same site are joined: F ← F1 ∪ F2. If two fragments are joined, the replicas with write rights of those fragments must be updated (those sites must delete their replicas or obtain a replica of the joined fragment).
- In the end, the old access statistics of the local replicas with write access are eliminated (the histogram is reorganized).
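A minimal sketch of the compatibility rule mentioned above: two candidate fragmentations are compatible when their extracted tuple ranges do not overlap, and on conflict the one with the higher UtilEM wins. The candidate representation (N1, min, max, UtilEM) follows the pseudocode below; the function names are illustrative.

def overlaps(a, b):
    """Candidates are (N1, b_min, b_max, util); they conflict if their ranges overlap."""
    return a[1] <= b[2] and b[1] <= a[2]

def remove_incompatible(candidates):
    """Keep a set of pairwise-compatible candidates, preferring higher UtilEM."""
    chosen = []
    for cand in sorted(candidates, key=lambda c: c[3], reverse=True):
        if all(not overlaps(cand, kept) for kept in chosen):
            chosen.append(cand)
    return chosen

candidates = [("S2", 0, 9, 120.0), ("S3", 5, 14, 90.0), ("S4", 20, 29, 40.0)]
print(remove_incompatible(candidates))
# [('S2', 0, 9, 120.0), ('S4', 20, 29, 40.0)]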

The algorithm for re-fragmentation and reallocation can be formalized as:

START[REFRAGMENTATION(F,R)] //R is the master replica of F and is on the local site N
INPUT {
  The sites of the DDB: S1, ..., Sn;
  The sites communication cost matrix;
  The site loads: load(S1), ..., load(Sn);
  The current histogram corresponding to replica R on the local site N:
  H ← Hc[N,R], where H = ⋃_{ih=1..kh} (b_ih, R[b_ih], W[b_ih])
}
BEGIN
  //The load of every site is computed (Ozsu & Valduriez, 2011)
  For each Sr ∈ S, r ← 1 to n Do
    compute(load(Sr))
  End For
  N ← getLocalSite(F) //N is the site where fragment F is located
  //The write right is transferred from a site in P to a site in Q
  P ← Ø; Q ← Ø
  For each Sr ∈ S, r ← 1 to n Do
    If getRight(R,Sr)='read' and getRight(R,Sr)≠'write' Then //replica R has the read right on site Sr
      P ← P ∪ {Sr}
    Else //If getRight(R,Sr)='write' Then //replica R has the write right on site Sr
      Q ← Q ∪ {Sr}
    End If
  End For
  frag ← Ø //the set of candidate fragments of F
  nf ← 0   //the number of candidate fragments of F
  For each N1 ∈ S\{N}, N1 ← 1 to n Do
    If getRight(R,N1)='write' Then
      If (UtilEM > 0) and (max − min + 1 > card(frag)) Then
        frag ← frag ∪ {(N1, min, max, UtilEM)}
        nf ← nf + 1
        c[nf] ← UtilEM               //a vector which contains the useful value of every candidate fragment
        s[nf] ← N1                   //the remote site N1
        d[nf] ← compute(dist(N, N1)) //the computed distance from site N to site N1
        l[nf] ← compute(load(N1))    //the site load
      End If
    End If
  End For
  ordering(s, c, nf) //the nf sites s are ordered by the useful value c
  sorting(s, d, l, nf) //the nf sites s are ordered by the response time, taking into account the distance (d) to site N and the site load (l)
  removeIncompatible(frag)
  For all (N1, min, max, UtilEM) ∈ frag Do //starting with the biggest benefit and the smallest response time
    If load(N1) > limsp Then //check the available space at N1
      F1 ∪ Fnew ∪ F2 = F //the new fragment Fnew is extracted from F
      Migrate(Fnew, N1)
      //the site loads are recomputed after the transaction
      compute(load(N1))
      compute(load(N))
      //warning messages are sent to the sites that contain replicas with read rights of the data that need to be updated
      For each Src ∈ S, rc ← 1 to n Do
        If getRight(R,Src)='read' and getRight(R,Src)≠'write' Then
          Send "warning" message
        End If
      End For
      //only the data of the read-right replicas marked with "warning" whose checksum differs from that of site N are updated
      Boolean dataIsWarning ← true
      While (dataIsWarning) Do
        dataIsWarning ← false
        For each Src ∈ S, rc ← 1 to n Do
          If getRight(R,Src)='read' and getRight(R,Src)≠'write' Then
            dataIsWarning ← dataIsWarning or (R is marked with a warning message)
            If checksum(TH, Src) ≠ checksum(TH', N) Then
              UPDATETABLE(TH, Src)
            End If
          End If
        End For
      End While
    End If
  End For
  F ← F1 ∪ F2 //fragments are joined
  The histogram is reorganized
END
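The while-loop above compares checksums so that only the modified portions of a table are retransmitted; a minimal illustrative sketch of that idea (the chunking granularity and the function names are assumptions).

import hashlib

def checksum(rows):
    """Checksum over a collection of rows (order-sensitive, as a simple illustration)."""
    h = hashlib.sha256()
    for row in rows:
        h.update(repr(row).encode())
    return h.hexdigest()

def chunks_to_update(local_chunks, remote_chunks):
    """Compare per-chunk checksums and return only the chunk indexes that differ."""
    return [i for i, (lc, rc) in enumerate(zip(local_chunks, remote_chunks))
            if checksum(lc) != checksum(rc)]

local = [[("id1", "a")], [("id2", "b")]]
remote = [[("id1", "a")], [("id2", "OLD")]]
print(chunks_to_update(local, remote))  # [1] -> only the second chunk is retransmitted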

To avoid different fragmentation decisions being taken simultaneously at sites holding replicas of the same fragment, this algorithm is applied only to the master replicas.

Description of the algorithm implementation. Every time a part of a document that is not found on the local site N is accessed, the number of remote accesses (nrr, nwr) in the table CONTOR situated on the local site is incremented, depending on the type of access. In order to decide the migration of a requested part of the document (the one needing the smallest communication cost) from the site N1 with the biggest benefit and the best response time (the distance to the local site and the load of the remote site are taken into account) to the site where it is frequently accessed, the following steps are executed:
- the value of the useful variable UtilEM is computed on every site which contains the requested document (these sites are determined from the hash table situated on every site);
- if UtilEM > 0 and, for a period of time, there are no requests for that part of the document on the site N1 from which the migration is desired, then the extraction and migration of that part is executed;
- data accuracy is ensured.

Notes:
- in the extraction and migration process of a replica, the available storage space of the site to which the replica will be moved is taken into account;
- the algorithm is executed at regular time intervals for every fragment of which a specific site holds a replica;
- if a decision to partition, create or delete a replica on a site is taken, a trigger is run in order to update the database catalogue on every site. Triggers are procedures that are stored in the database and run implicitly when a certain event occurs. Traditionally, triggers support the execution of a PL/SQL block when an INSERT, UPDATE or DELETE occurs on a table.

The result of every execution of the algorithm may be one of the following (a sketch of this decision is given below):
- do nothing (the fragment is as it is supposed to be);
- migrate the entire replica with write rights;
- extract a new fragment Fnew and migrate the new write-access replica to the site N1 (if enough storage space is available on N1).

The proposed system model is an improvement of the system model from (Hauglid et al., 2010):
- whereas in (Hauglid et al., 2010) the sites have equal computing and communication capacities, the proposed system is unbalanced (the nodes execute the same operation on the same data collection at different times, the latency between two nodes depends on the network, and the load on a node is not constant). In reality, ideal systems (with the same amount of resources, etc.) like those presented in (Hauglid et al., 2010) are unlikely to exist, so their implementation is purely theoretical with no practical benefit. This paper takes into account the parameters which can vary from one system to another, resulting in a much more realistic and complex approach;
- the storage space of every site is continuously monitored;
- data consistency is ensured.

Fragmentation in distributed databases increases the level of concurrency and therefore increases the system throughput for query processing.
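A minimal sketch of the per-replica periodic decision described above, returning one of the three possible outcomes; the decision rule, thresholds and names are illustrative assumptions, not the paper's implementation.

def evaluate_replica(util_entire, util_new_fragment, space_available_at_n1):
    """Decide among: do nothing, migrate the whole write replica, or extract Fnew and migrate it."""
    if util_new_fragment > 0 and util_new_fragment >= util_entire and space_available_at_n1:
        return "extract Fnew and migrate the new write replica to N1"
    if util_entire > 0:
        return "migrate the entire write replica"
    return "do nothing"

print(evaluate_replica(util_entire=12.5, util_new_fragment=30.0, space_available_at_n1=True))
# extract Fnew and migrate the new write replica to N1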

3. Conclusion

The proposed method for an unbalanced distributed database system has two major components: 1) the detection of the replica access patterns and 2) taking decisions on re-fragmentation and reallocation based on those statistics.

In this paper we focused on the data allocation problem, with the aim of ensuring an optimal distribution of data in the distributed database design process, in correlation with data fragmentation. The decision to use a fragmented database is very important because it determines the execution performance of distributed queries. Future research will focus on the development of the proposed system model in order to detect, based on query analysis, the patterns which appear recurrently. The presented domain is very broad, permitting the continuation of applied and fundamental research in multiple directions.

4. Acknowledgment

This work was partially supported by the strategic grant POSDRU/88/1.5/S/52826, Project ID 52826 (2009), co-financed by the European Social Fund – Investing in People, within the Sectoral Operational Programme Human Resources Development 2007-2013.

References

Alom, B.M.M., Henskens, F. and Hannaford, M., 2009, Query Processing and Optimization in Distributed Database Systems, IJCSNS International Journal of Computer Science and Network Security, Vol. 9, No. 9, September 2009, pp. 143-152.
Beynon-Davies, P., 2004, Database Systems (3rd ed.), New York: Palgrave-Macmillan.
Ciobanu Iacob, N.M., 2011, The Fragmentation of Distributed Databases, Scientific Bulletin, University of Pitesti, Mathematics and Computer Science Series, No. 17/2011, pp. 11-20.
Ciobanu Iacob, N.M. and Ciobanu Defta, C.L., 2012, Distributed Databases. A Proposed Dynamic Model Fully Automated and Decentralized, Proceedings of the 12th Conference on Artificial Intelligence and Digital Communications - AIDC 2012, University of Craiova, Faculty of Exact Science, 5-7 October 2012, pp. 113-120, Orșova.
Donjerkovic, D., Ioannidis, Y.E. and Ramakrishnan, R., 2000, Dynamic Histograms: Capturing Evolving Data Sets, In Proceedings of ICDE, pp. 1-20.
Dumitru, F., 2005, Data Allocation Problem in Distributed Environment, Scientific Annals of the "A.I. Cuza" University of Iasi.
Elmasri, R. and Navathe, S.B., 1999, Fundamentals of Database Systems.
Hauglid, J.O., Ryeng, N.H. and Nørvåg, K., 2010, DYFRAM: Dynamic Fragmentation and Replica Management in Distributed Database Systems, Distributed and Parallel Databases, Vol. 28, Issue 2-3, December 2010, pp. 157-185.
Iacob Ciobanu, N.M., 2011, Vertical Fragmentation. Case Study: Platform E-learning, 2nd World Conference on Information Technology – WCIT 2011, Bahcesehir University & Near East University, November 23-26, 2011, Antalya, Turkey.
Ioannidis, Y., 2003, The History of Histograms (abridged), In Proceedings of VLDB 2003.
Khan, S.I. and Hoque, A.S.M.L., 2010, A New Technique for Database Fragmentation in Distributed Systems, International Journal of Computer Applications (0975-8887), Vol. 5, No. 9, August 2010, pp. 20-14.
Ozsu, M.T. and Valduriez, P., 2011, Principles of Distributed Database Systems (3rd ed.), New York: Springer.
Petrescu Horvat, M., 2010, Dynamic Relocation of Fragments in a Distributed Database, PhD thesis, Babeş-Bolyai University, Mathematics and Computer Science, Cluj-Napoca.
Rahimi, S. and Haug, F.S., 2010, Distributed Database Management Systems: A Practical Approach, IEEE Computer Society, Hoboken, NJ: Wiley.
Silberschatz, A., Korth, H.F. and Sudarshan, S., 2010, Database System Concepts (6th ed.), McGraw-Hill.
Vasileva, S., Milev, P. and Stoyanov, B., 2007, Some Models of a Distributed Database Management System with Data Replication, International Conference on Computer Systems and Technologies – CompSysTech'07.

A Brief Author Biography

Nicoleta Magdalena Ciobanu – I am a POSDRU Ph.D. student at the University of Pitesti, Mathematics and Informatics specialization, since 2009, and this month I have defended my thesis entitled "Distributed Databases. A Dynamic System Model Fully Automated and Decentralized". My research activities are in the field of algorithms and databases.

Copyright for articles published in this journal is retained by the authors, with first publication rights granted to the journal. By the appearance in this open access journal, articles are free to use with the required attribution. Users must contact the corresponding authors for any potential use of the article or its content that affects the authors’ copyright.
