Preferential attachment trees, m-ary trees, and beta coalescents

Helmut H. Pitters
Department of Statistics, University of California, Berkeley

October 24, 2016

Abstract. We consider so-called increasing trees, which are random rooted labeled trees parameterized by a sequence (φ_k)_{k≥0} of non-negative real numbers. Increasing trees encompass, among others, random recursive trees, preferential attachment trees, and m-ary increasing trees, which serve as fundamental models in the study of social networks and in computer science, respectively. Starting with an increasing tree on n nodes we pick one of its edges, {U, V} say, according to a simple random mechanism. If U is closer to the root than V, the subtree rooted at V is discarded and its labels are attached to U. This procedure is called lifting. We show that the tree obtained after lifting is again an increasing tree, and repeated lifting after exponential waiting times yields a continuous-time Markov chain on increasing trees. Each increasing tree induces a partition of the set {1, . . . , n} whose blocks consist of labels that are attached to the same node. We show that the partition-valued process induced by the Markov chain on increasing trees is a beta n-coalescent (up to a random time change) whose parameters are given explicitly in terms of (φ_k). Beta coalescents serve as fundamental models in population genetics for the genealogies of chromosomes sampled from large populations of haploid individuals. As special cases we obtain the construction of the Bolthausen-Sznitman coalescent studied by Goldschmidt and Martin [3], and the construction of the arcsine coalescent by Pitters [6].

1 Increasing trees

A tree t = (V, E) is a connected graph with vertex set V = V(t) and edge set E = E(t) that does not contain cycles. The edge set consists of unordered pairs of vertices. We use the terms vertex and node synonymously. A tree t is called rooted if there is one distinguished node, ρ = ρ(t) ∈ V, called the root of t. A labeled tree t = (V, E) is a rooted tree whose nodes are labeled, i.e. there exists a label set, L = L(t), and a bijection l : V → L. We say that node v ∈ V carries label l(v) and call l a labeling of t. In what follows we will only consider label sets L that are endowed with a total order ≤. Define the size #t of t to be the number #V of its nodes.

[Figure 1: Increasing trees with at most 3 nodes.]

An increasing tree is a planar labeled tree (with totally ordered label set) such that the labels along any non-backtracking path starting from the root are increasing, see Figure 1. (Contrary to their growth in nature, we draw trees by putting their root at the top and proceeding downwards to the leaves.) We will put weights on increasing trees that depend on the outdegrees of their nodes, and this will allow us to also encode non-planar increasing trees.

1.1 Weighted increasing trees

Weighted increasing trees were introduced by Bergeron, Flajolet and Salvy [1] under the name of "varieties of increasing trees". As we will see, this general framework contains the models we focus on here, namely random recursive trees, preferential attachment trees and m-ary search trees, and offers a unified approach. Fix a sequence (φ_k)_{k≥0} of nonnegative real numbers such that φ_0 ≠ 0 and φ_k ≠ 0 for some k ≥ 2. Define the generating function of (φ_k) by

    φ(z) := Σ_{k≥0} φ_k z^k,    (1)

and regard φ as a formal power series. On the set of all increasing trees I we define the weight (function) w : I → [0, ∞) that maps a tree t to its weight

    w(t) := ∏_{v∈t} φ_{d^+(v)},    (2)

where d^+(v) denotes the outdegree of node v, i.e. the number of its successors. Here we slightly abuse notation and write v ∈ t to indicate that v is a node in t. We refer to (φ_k) as the weight sequence. Notice that a tree may well have zero weight. However, there are always trees with non-zero weight, e.g. the tree consisting of a root from which exactly k leaves are dangling, where k ≥ 2 is chosen such that φ_k > 0 (and φ_0 > 0 by assumption). Let I_n := {t ∈ I : #t = n} denote the set of all increasing trees of size n, so I = ∪_{n≥1} I_n. If the φ_k are all non-negative integers, we can interpret the weight w as follows. Suppose t ∈ I_n is an increasing tree on n nodes. For any node v ∈ t with outdegree k = d^+(v) we have available φ_k different types, or colors say, that we can assign to v. Thus w(t) counts the number of ways in which we can color t, and Σ_{t∈I_n} w(t) counts the number of different increasing trees on n nodes such that each node v is assigned one of φ_{d^+(v)} colors.

Example 1 (Planar trees). Define the equivalence relation ∼ on I by letting s ∼ t if s and t are different embeddings of the same non-planar increasing tree. We can interpret the equivalence class [t]_∼ of t as a non-planar increasing tree. Setting w(t) := ∏_{v∈t} (1/d^+(v)!), t ∈ I (i.e. φ_k = 1/k!), and defining the weight w([t]_∼) := Σ_{s∈[t]_∼} w(s) of [t]_∼ as the sum of weights of the corresponding increasing trees, we obtain w([t]_∼) = 1.

Example 2 (Planar m-ary increasing trees). Consider planar increasing trees in which any node v has at most m ∈ N children. Moreover, we think of v as coming with m (ordered) slots, each of which is either empty or contains exactly one child of v. Hence there are φ_k := C(m, k) different types of nodes of outdegree k ≥ 0, where C(m, k) denotes the binomial coefficient, since this is the number of ways to place k children into the m slots while respecting their order. We call these trees m-ary increasing trees. Picture an empty slot of node v as an unlabeled successor of v. We call a node that corresponds to an empty slot external, as opposed to a labeled node, which we call internal. When drawing m-ary trees we represent external nodes by empty squares.

Let I^{(m)} denote the set of m-ary increasing trees. Moreover, define the equivalence relation ∼ on I^{(m)} by letting s ∼ t if and only if after removing all external nodes from s and t one obtains the same increasing tree. Thus, we can (and will) identify each equivalence class [t]_∼, t ∈ I^{(m)}, with the corresponding increasing tree in I_n. If we assign weight w(t) := 1 to any t ∈ I^{(m)} and define w([t]_∼) := Σ_{s∈[t]_∼} w(s), then w([t]_∼) = ∏_{v∈t} C(m, d^+(v)). Figure 2 and Figure 3 show the four increasing trees on at most 3 nodes and their weights together with the corresponding 3-ary trees.

[Figure 2: Increasing planar 3-ary trees with at most 3 nodes. Each row shows an increasing tree, its weight, and the corresponding planar 3-ary increasing trees.]
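For concreteness, the weight (2) is straightforward to compute once a planar increasing tree is encoded, say, as a map from each node to the ordered list of its children. The following Python sketch (the encoding and function name are ours, not from [1]) evaluates w(t):

```python
from math import factorial, prod

def tree_weight(children, phi):
    """w(t) = product over nodes v of phi_{d+(v)}, as in (2);
    `children` maps each node to the ordered list of its children."""
    return prod(phi(len(children[v])) for v in children)

# The increasing tree with root 1 and children 2, 3 (a size-3 "cherry"):
cherry = {1: [2, 3], 2: [], 3: []}
# For phi_k = 1 every planar increasing tree has weight 1; for
# phi_k = 1/k! (Example 1) the cherry has weight phi_2 = 1/2! = 0.5.
w_const = tree_weight(cherry, lambda k: 1.0)
w_rrt = tree_weight(cherry, lambda k: 1.0 / factorial(k))
```

The encoding also makes the planarity explicit: the two embeddings of the cherry, with children (2, 3) or (3, 2), are distinct dictionaries but carry the same weight.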

1.2 Random increasing trees

The weight w induces a probability distribution μ_n on I_n defined by picking a tree t ∈ I_n with probability proportional to its weight w(t). More formally,

    μ_n(t) := w(t)/I_n    (t ∈ I_n),    (3)

where I_n := w(I_n) := Σ_{t∈I_n} w(t) is the sum of weights of increasing trees of size n.

Remark 1. Let us compute I_n for some small values of n. Since the tree consisting of the single node 1, respectively the tree with root 1 and child 2, are the only increasing trees of size 1, respectively 2, we have

    I_1 = φ_0,    I_2 = φ_1 φ_0,    (4)

and, summing over the three planar increasing trees of size 3 (the root 1 with children 2, 3 in either order, and the path 1-2-3),

    I_3 = 2 φ_2 φ_0^2 + φ_1^2 φ_0.    (5)
By a random increasing tree T of size n we mean a random element of I_n distributed according to μ_n, and we write T ∼ μ_n. The family of random increasing trees associated with φ is the collection of all increasing trees I together with the sequence of distributions {μ_n, n ∈ N}. Both the definitions and the notation of our families of increasing trees differ somewhat from the varieties of increasing trees introduced by Bergeron, Flajolet and Salvy [1]. Let

    I(z) := Σ_{n≥1} I_n z^n/n!    (6)

denote the exponential generating function of the sequence (I_n). Bergeron, Flajolet and Salvy [1, Theorem 1] prove the following theorem that relates I and φ.

Theorem 1 (Theorem 1 in Bergeron, Flajolet and Salvy [1]). The exponential generating function I of a family of increasing trees defined by the degree function φ solves

    ∫_0^{I(z)} dy/φ(y) = z.    (7)

Letting K be the compositional inverse of I, i.e. K defined implicitly by K(I(z)) = z, we have K(x) = ∫_0^x dy/φ(y). It is worthwhile to see some examples of random increasing trees. Some of these examples are closely related to fundamental sorting and searching algorithms in computer science, while others have appeared as basic models of real-world networks.

Example 3 (Random recursive trees). A recursive tree of size n is a non-planar increasing tree of size n whose nodes are labeled by 1, . . . , n. A recursive tree t_{n+1} of size n + 1 can be constructed recursively as follows. Start with a recursive tree t_1 consisting of one node labeled 1. If a recursive tree t_n on n nodes is given, pick one of its nodes, v say, and add a new node, w say, carrying label n + 1 by connecting w and its parent v by an edge. This construction immediately shows that there are (n − 1)! recursive trees of size n. If at each step of this recursive construction the parent node v is chosen uniformly at random among all available nodes, the random tree T_n obtained at step n is equal in distribution to a tree picked uniformly at random among all recursive trees of size n. For this reason T_n is called a random recursive tree (RRT). In the literature random recursive trees have also been called recursive trees, cf. [4] and [2]. Let us now look at RRTs in the framework of increasing trees. From the above description and Example 1 we want to put weight 1 on any non-planar increasing tree [t]_∼, that is w(t) = ∏_{v∈t} 1/d^+(v)!, thus for any k ≥ 0 we have φ_k = 1/k!, hence φ(z) = exp(z). Consequently, I_n = #(I_n/∼) counts the number of non-planar increasing trees of size n, and the push-forward of μ_n under t ↦ [t]_∼, t ∈ I_n, is the uniform distribution on I_n/∼. Moreover, equation (7) reads

    z = ∫_0^{I(z)} e^{−y} dy = 1 − e^{−I(z)},

yielding I(z) = log(1/(1 − z)) = Σ_{n≥1} z^n/n. The power series expansion of the logarithm yields I_n = (n − 1)!,
which is indeed the number of recursive trees on n nodes, as we convinced ourselves earlier. For small values of n this is in agreement with the formulas obtained from (4) and (5): I_1 = φ_0 = 1, I_2 = φ_1 φ_0 = 1, I_3 = 2 φ_2 φ_0^2 + φ_1^2 φ_0 = 2.

Example 4 (Preferential attachment trees). Fix a parameter r > 0. A preferential attachment tree (PAT) T_n of size n is a planar increasing tree of size n that can be constructed by the following growth rule. For n = 1 the tree T_1 consists of a single node with label 1. Suppose then that T_{n−1} has been constructed for some n ≥ 2. Conditionally given T_{n−1}, the tree T_n is constructed by attaching a new node with label n as a successor to node v with probability proportional to d^+(v) + r, i.e. with probability

    P{node labeled n attaches to v | T_{n−1}} = (d^+(v) + r)/(n − 2 + r(n − 1)),    (8)

since Σ_{v∈t_{n−1}} (d^+(v) + r) = n − 2 + r(n − 1). The node with label n is placed into any of the d^+(v) + 1 available positions for a child of v chosen uniformly at random. Equivalently, we can pick a tree t ∈ I_n with probability proportional to w(t) := ∏_{v∈t} (r^{(d^+(v))}/d^+(v)!), where for x ∈ R and n ∈ N we denote by x^{(n)} := x(x + 1) · · · (x + n − 1) the nth rising factorial power of x, and agree on setting x^{(0)} := 1. In other words, this law agrees with μ_n for φ_k := r^{(k)}/k!, k ∈ N_0. The generating function of (φ_k) is the binomial series

    φ(z) = Σ_{k≥0} φ_k z^k = Σ_{k≥0} (r^{(k)}/k!) z^k = (1 − z)^{−r}.    (9)

According to (7) we have

    z = ∫_0^{I(z)} (1 − y)^r dy = [−(1 − y)^{r+1}/(r + 1)]_0^{I(z)} = (1 − (1 − I(z))^{r+1})/(r + 1),

and solving for I(z) we obtain

    I(z) = 1 − [1 − (r + 1)z]^{1/(r+1)} = −Σ_{k≥1} (−1/(r+1))^{(k)} [(r + 1)z]^k/k!.    (10)
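The growth rule (8) is easy to simulate directly. A short Python sketch (the encoding and function name are ours, not from the paper):

```python
import random

def sample_pat(n, r):
    """Grow a preferential attachment tree on nodes 1..n by rule (8):
    node j attaches to an existing node v with probability
    proportional to d+(v) + r. Returns the parent map."""
    outdeg = {1: 0}
    parent = {}
    for j in range(2, n + 1):
        nodes = list(outdeg)
        # the normalizing constant at this step is (j - 2) + r*(j - 1)
        v = random.choices(nodes, weights=[outdeg[u] + r for u in nodes])[0]
        parent[j] = v
        outdeg[v] += 1
        outdeg[j] = 0
    return parent

random.seed(0)
t10 = sample_pat(10, r=1.0)  # r = 1 gives a linear preferential attachment tree
```

Since labels arrive in increasing order and attach to existing nodes, every node's parent carries a smaller label, so the result is automatically an increasing tree.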

From (10) we can read off I_n = −(r + 1)^n (−1/(r+1))^{(n)}, the sum of weights of preferential attachment trees on n ≥ 1 nodes. For n ≤ 3 this is in agreement with the formulas obtained from (4) and (5): I_1 = φ_0 = 1, I_2 = φ_1 φ_0 = r, I_3 = 2 φ_2 φ_0^2 + φ_1^2 φ_0 = r(2r + 1).

Linear preferential attachment trees. An interesting model of preferential attachment trees is obtained for r = 1, known as linear preferential attachment trees. This model corresponds to the weight sequence given by φ_k = 1, k ∈ N_0, that is, each planar increasing tree t ∈ I_n has weight w(t) = 1 and μ_n is the uniform measure on I_n. Notice that the number of planar increasing trees on n nodes is I_n = −2^n (−1/2)^{(n)} = 2^{n−1} (1/2)^{(n−1)} = (2n − 3)!!, in agreement with [6, Equation (1)]. In [6] the author constructed the arcsine coalescent by lifting linear preferential attachment trees.

Example 5 (m-ary increasing trees). Fix a positive integer m ≥ 2. An m-ary increasing tree is a planar increasing tree that can have additional unlabeled nodes, so-called external nodes, such that each node has a total of m successors. We also call the labeled nodes in an m-ary increasing tree internal. An m-ary increasing tree of size n is an m-ary increasing tree with n internal nodes. Of course, this is another representation of the planar m-ary trees introduced in Example 2. We can construct any m-ary increasing tree t_{n+1} of size n + 1 iteratively by starting with t_1 consisting of the root node with label 1, which has m successors, all of which are external. If t_n is an m-ary increasing tree of size n, pick one of its external nodes, v say, and turn v into an internal node by labeling it with n + 1 and attaching to it m successors, all of which are external. We call the random tree T_n, obtained by picking in each step of this construction the external node uniformly at random, a random m-ary increasing tree. Clearly, T_n is equal in distribution to an m-ary tree of size n chosen uniformly at random. Since we are only interested in random m-ary increasing trees in what follows, we will also refer to them as just m-ary trees. From the above construction it is clear that an m-ary tree of size n has e_n := n(m − 1) + 1 external nodes. Consequently, there are e_n m-ary trees of size n + 1 that can be obtained from an m-ary tree of size n by attaching a new node. Thus, denoting by M_n the set of all m-ary trees of size n, we have #M_1 = 1 and

    #M_{n+1} = #M_n e_n = e_1 e_2 · · · e_n = ∏_{k=1}^n (k(m − 1) + 1) = (m − 1)^{n+1} (1/(m−1))^{(n+1)}.

Recall from Example 2 that if in the framework of increasing trees we allow for φ_k = C(m, k) types of nodes with outdegree k, then w(t) counts the number of m-ary trees that can be obtained by ordering the successors of each node v in t ∈ I_n and placing them into one of the m boxes corresponding to v. Now φ(z) = (1 + z)^m, and applying (7) we have

    z = ∫_0^{I(z)} dy/φ(y) = ∫_0^{I(z)} (1 + y)^{−m} dy = [(1 + y)^{1−m}/(1 − m)]_0^{I(z)} = ([1 + I(z)]^{1−m} − 1)/(1 − m).

Solving for I(z),

    I(z) = (1 + (1 − m)z)^{1/(1−m)} − 1 = Σ_{k≥1} C(1/(1−m), k) ((1 − m)z)^k.

Consequently, I_n = C(1/(1−m), n) (1 − m)^n n! = (1/(1−m))_{(n)} (1 − m)^n = (1/(m−1))^{(n)} (m − 1)^n = #M_n, where for x ∈ R, n ∈ N we denote by C(x, n) := x_{(n)}/n! the generalized binomial coefficient and by x_{(n)} := x(x − 1) · · · (x − n + 1) the nth falling factorial power of x. For n ≤ 3 this agrees with the formulas obtained from (4) and (5): I_1 = φ_0 = 1, I_2 = φ_1 φ_0 = m, I_3 = 2 φ_2 φ_0^2 + φ_1^2 φ_0 = m(2m − 1). For m = 2 we obtain increasing binary trees. For the correspondence between increasing binary trees and binary search trees see [2], p. 203.
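The external-node construction of Example 5 can be sketched as follows (the encoding is ours). The invariant e_n = n(m − 1) + 1 is immediate from the code: each step removes one empty slot and adds m new ones.

```python
import random

def sample_mary(n, m):
    """Grow a random m-ary increasing tree: repeatedly pick a uniform
    external node (an empty slot) and turn it into the next internal node.
    Returns the parent map and the remaining external slots."""
    parent = {1: None}
    external = [(1, s) for s in range(m)]  # pairs (internal node, slot index)
    for j in range(2, n + 1):
        v, slot = external.pop(random.randrange(len(external)))
        parent[j] = v
        external.extend((j, s) for s in range(m))
    return parent, external

random.seed(0)
parent, external = sample_mary(8, m=3)
# an m-ary tree of size n has e_n = n(m - 1) + 1 external nodes
assert len(external) == 8 * (3 - 1) + 1
```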

We have seen above an explicit iterative construction of random recursive trees, preferential attachment trees and m-ary increasing trees, where nodes arrive one by one in the order of increasing labels and are incorporated into the existing tree according to a specific growth rule. The following remarkable result by Panholzer and Prodinger [5] shows that among all random increasing trees the above mentioned three families are the only ones that can be constructed via such a growth rule.

Lemma 1 (Lemma 5 in Panholzer and Prodinger [5]). For a family I of random increasing trees the following properties are equivalent:

1. The sum of weights I_n of trees of size n satisfies

    I_{n+1}/I_n = c_1 n + c_2    (n ∈ N)    (11)

for some constants c_1, c_2.

2. Fix a random increasing tree T with distribution μ_n. For j ≤ n let T′ be the subtree in T spanned by the nodes whose labels are less than or equal to j. Then

    T′ ∼ μ_j.    (12)

3. The family I can be constructed via a (random) growth rule.

A family I of increasing trees satisfies one (and therefore all) of the above properties iff the generating function φ of its weight sequence is given by one of the following three formulas.

Case A: φ(z) = exp(c_1 z) for c_1 > 0,
Case B: φ(z) = (1 + c_2 z)^d for c_1 > 0 and d := c_1/c_2 + 1 ∈ {2, 3, . . .},
Case C: φ(z) = (1 + c_2 z)^{c_1/c_2 + 1} for c_1 > −c_2 > 0.

The constants c_1, c_2 are the same as the ones in property 1 above.

Remark 2. Since random recursive trees, preferential attachment trees and m-ary trees can all be constructed by a growth rule, let us compute the corresponding constants c_1, c_2 according to property 1 in Lemma 1. For random recursive trees we have I_{n+1}/I_n = n, thus c_1 = 1 and c_2 = 0. For preferential attachment trees we find

    I_{n+1}/I_n = (n − 1/(r+1))(r + 1) = (r + 1)n − 1,

yielding c_1 = r + 1 and c_2 = −1. Finally, for m-ary trees we obtain

    I_{n+1}/I_n = (n + 1/(m−1))(m − 1) = (m − 1)n + 1,

thus c_1 = m − 1 and c_2 = 1. Alternatively, we could have read off c_1 and c_2 by comparing the respective generating functions φ with Cases A, B and C in Lemma 1.

A partition of a nonempty set A is a set, π say, of nonempty pairwise disjoint subsets of A whose union is A. The members of π are called the blocks of π. Let #A denote the cardinality of A and let P_A denote the set of all partitions of A. Before we proceed to our main result we need to allow for a more flexible labeling of increasing trees. Instead of labeling an increasing tree t of size k with labels 1, 2, . . . , k, we may label the tree by k elements of some other totally ordered set. In what follows we will take as labels the blocks of some partition π of [n] of size #π = k. We endow any partition π of [n] with the total order ≤ defined by least elements, i.e. B ≤ C if and only if min B ≤ min C for any two blocks B, C of π. The probability measure on random increasing trees of size #π with label set π is denoted μ_π.
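The constants in Remark 2 can be checked numerically against the closed forms for I_n derived in Examples 3 to 5. A sketch (the rising-factorial helper and function names are ours):

```python
from math import factorial

def rising(x, n):
    """Rising factorial x^(n) = x (x + 1) ... (x + n - 1)."""
    out = 1.0
    for j in range(n):
        out *= x + j
    return out

def I_rrt(n):        # recursive trees: I_n = (n - 1)!
    return float(factorial(n - 1))

def I_pat(n, r):     # preferential attachment: I_n = -(r+1)^n * (-1/(r+1))^(n)
    return -((r + 1) ** n) * rising(-1.0 / (r + 1), n)

def I_mary(n, m):    # m-ary trees: I_n = (m-1)^n * (1/(m-1))^(n)
    return float((m - 1) ** n) * rising(1.0 / (m - 1), n)

# Property (11): I_{n+1}/I_n = c1*n + c2 with
# (c1, c2) = (1, 0), (r+1, -1) and (m-1, 1) respectively.
r, m = 2.0, 3
for n in range(1, 7):
    assert abs(I_rrt(n + 1) / I_rrt(n) - n) < 1e-9
    assert abs(I_pat(n + 1, r) / I_pat(n, r) - ((r + 1) * n - 1)) < 1e-9
    assert abs(I_mary(n + 1, m) / I_mary(n, m) - ((m - 1) * n + 1)) < 1e-9
```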

2 Lifting increasing trees

We now define the lifting procedure which is at the heart of our construction of beta n-coalescents. Consider an increasing tree t of size n (labeled by some partition containing n blocks) such that w(t) > 0. Suppose e = {u, v} is an edge in t such that u is closer (in graph distance) to the root ρ(t) than v. Then e is lifted as follows. Discard the subtree t_v rooted at v together with {u, v}, and replace the label l(u) = B of u by B ∪ {c : c ∈ C for some C ∈ L(t_v)}.

For our construction of n-coalescents via lifting we are going to pick an edge E of the increasing tree t in a random fashion as follows. For any node u ∈ t with outdegree d^+(u) = k (in particular φ_k > 0, since w(t) > 0) let

    w(u) := (k + 1) φ_{k+1} φ_0 / φ_k    (13)

be the local weight of u and, setting φ_{−1} := 0, define

    w̄(u) := φ_{k−1}/(φ_k φ_0)    (14)

to be the lifting weight of u. Notice that if u is a leaf, it has lifting weight w̄(u) = 0, since φ_{−1} = 0. Moreover, for m-ary trees the local weight of a node u with outdegree m is w(u) = 0, since φ_{m+1} = 0. In particular, the growth rule for m-ary trees never picks a node with outdegree m. Put an exponential clock on each node in t with rate given by the lifting weight of the node, all clocks being independent. Pick the one node U whose clock is the first to ring. Conditionally given U, let V be one of the successors of U drawn uniformly at random. Then the tree t^L is obtained from t by lifting the edge {U, V}.

From the definition (2) of the weight of an increasing tree it is apparent that the weight of a tree factorizes into the weights of subtrees. The next lemma makes this statement precise. Consider an increasing tree t and nodes u, v ∈ t. Let t^u denote the tree obtained by removing from t the descendants of u (but not u). Let t_v denote the subtree of t consisting of v and its descendants. If u is a parent of v, let graft(t^u, u; t_v) denote the set of increasing trees obtained by connecting the root of t_v with the node u in t^u, where each of these trees has ρ(t_v) placed at a different location among the children of u. In particular, #graft(t^u, u; t_v) = d^+(u) + 1, t ∈ graft(t^u, u; t_v), and graft(s^u, u; s_v) = graft(t^u, u; t_v) for any s ∈ graft(t^u, u; t_v).

Lemma 2. For a finite tree t and two nodes u, v ∈ t such that u is a parent of v we have

    w(t) = w(t^u) w_{t^u}(u) w(t_v)/(d^+(u) + 1).

Let T_n^L be the tree obtained by lifting a random increasing tree T_n with distribution μ_n. The next lemma shows that (T_n^L | L(T_n^L) = π), the tree T_n^L conditioned on having label set π ∈ P_n, is an increasing tree on π with law μ_π.

Lemma 3. Let T_n ∼ μ_n. Then, conditional on having label set π, the tree T_n^L is a random increasing tree with distribution μ_π, i.e.

    (T_n^L | L(T_n^L) = π) ∼ μ_π.    (15)
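Before turning to the proof, the lifting step itself can be sketched in code (the encoding and names are ours): remove the subtree rooted at v and merge all labels it carries into the block at u.

```python
def lift(children, blocks, u, v):
    """Lift the edge {u, v}: discard the subtree rooted at v and add all
    labels carried by its nodes to the block labelling u (in place).
    `children` maps node -> list of children; `blocks` maps node -> set
    of labels (a block of the induced partition)."""
    stack, subtree = [v], []
    while stack:                 # collect the subtree rooted at v
        x = stack.pop()
        subtree.append(x)
        stack.extend(children[x])
    children[u].remove(v)        # detach v from u
    for x in subtree:            # merge labels into u's block
        blocks[u] |= blocks.pop(x)
        del children[x]

# Lifting the edge {1, 2} in the path 1 - 2 - 3 merges all labels at the root:
children = {1: [2], 2: [3], 3: []}
blocks = {1: {1}, 2: {2}, 3: {3}}
lift(children, blocks, 1, 2)
# children == {1: []}, blocks == {1: {1, 2, 3}}
```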

Proof. Fix a subset C ⊆ [n] of size k := #C ≥ 2. Denote by ⟨C; n⟩ the partition of [n] whose blocks are the singletons in [n] \ C together with the block C. Let t_C denote an arbitrary but fixed increasing tree of size n − k + 1 with label set ⟨C; n⟩. Then

    P{T_n^L = t_C | T_n^L has label set ⟨C; n⟩} = P{T_n^L = t_C}/P{T_n^L has label set ⟨C; n⟩} = w(t_C)/I_{n−k+1} = μ_{⟨C;n⟩}(t_C),

where the second to last equality is seen as follows. Slightly abusing notation, we write d^+_{t_C}(C) for the number of successors of the node in t_C that carries label C. Let t*_C denote the tree obtained from t_C by replacing the label C by its minimal element c := min C.

For two increasing trees t, t′ ∈ I we say that t contains t′ if after removing from t all nodes whose labels are not in L(t′) the remaining tree equals t′. We have that

    {T_n contains t*_C} = ∪_{t′ ∈ I_{Δ_{C\{c}}}} {T_n ∈ graft(t*_C, c; t′)},

where Δ_A denotes the partition of a set A into singletons, and by Lemma 2

    P{T_n contains t*_C} = Σ_{t′ ∈ I_{Δ_{C\{c}}}} P{T_n ∈ graft(t*_C, c; t′)}
    = Σ_{t′ ∈ I_{Δ_{C\{c}}}} [w(t*_C) w_{t*_C}(c) w(t′) / ((d^+_{t*_C}(c) + 1) I_n)] #graft(t*_C, c; t′)
    = w(t*_C) w_{t*_C}(c) I_{k−1}/I_n.

Moreover,

    P{T_n^L = t_C} = P{T_n^L = t_C | T_n contains t*_C} P{T_n contains t*_C}
    = [w̄_{(T_n | T_n contains t*_C)}(c)/ℓ̄(T_n)] · [1/(d^+_{t_C}(C) + 1)] · w(t*_C) w_{t*_C}(c) I_{k−1}/I_n    (16)
    = (1/ℓ̄(T_n)) · [φ_{d^+_{t_C}(C)}/(φ_0 φ_{d^+_{t_C}(C)+1})] · [1/(d^+_{t_C}(C) + 1)] · w(t*_C) · [(d^+_{t_C}(C) + 1) φ_0 φ_{d^+_{t_C}(C)+1}/φ_{d^+_{t_C}(C)}] · I_{k−1}/I_n
    = w(t*_C) I_{k−1}/(ℓ̄(T_n) I_n).

To lift the one edge in (T_n | T_n contains t*_C) which connects the nodes with labels c and c′ := min(C \ {c}) (c′ is the root of the subtree in (T_n | T_n contains t*_C) spanned by the nodes with labels in C \ {c}), one first has to pick the node with label c, which happens with probability

    (1/ℓ̄(T_n)) w̄_{(T_n | T_n contains t*_C)}(c) = (1/ℓ̄(T_n)) φ_{d^+_{t_C}(C)}/(φ_0 φ_{d^+_{t_C}(C)+1}),

where ℓ̄(t) counts the number of nodes in t that are not leaves, since the outdegree of c in (T_n | T_n contains t*_C) is d^+_{t_C}(C) + 1, and, conditional on c being picked, one then has to pick the successor of c with label c′, which happens with probability

    1/d^+_{T_n}(c) = 1/(d^+_{t_C}(C) + 1).

Consequently, the probability to pick the one edge in T_n that yields t_C after being lifted is proportional to w̄_{T_n}(c)/(d^+_{t_C}(C) + 1). It is important to notice that the probability in (16) only depends on C via #C. Moreover,

    P{T_n^L has label set ⟨C; n⟩} = Σ_{t′_C ∈ I_{⟨C;n⟩}} P{T_n^L = t′_C} = (1/ℓ̄(T_n)) I_{n−k+1} I_{k−1}/I_n.

The claim follows.

Our main result is that starting with a tree T_n with distribution μ_n and repeatedly applying the lifting procedure yields a beta n-coalescent. Let T_n be an increasing tree with law μ_n. Denote by T_n := {T_n(t), t ≥ 0} the continuous-time Markov process on increasing trees obtained by repeated lifting of T_n(0) = T_n. Furthermore, let Π^0_n := {Π^0_n(t), t ≥ 0} be the process defined on the partitions of [n] by setting Π^0_n(t) := L(T_n(t)). The next theorem shows that for the families of random recursive trees, preferential attachment trees as well as m-ary increasing trees the process Π^0_n is a beta n-coalescent (up to a random time change).

Theorem 2 (Lifting increasing trees yields a beta n-coalescent). The process Π^0_n is a continuous-time Markov chain with initial state Δ_n and absorbing state {[n]} such that whenever Π^0_n is in a state of b blocks, a merger of k specific blocks occurs at rate

    λ^0_{b,k} = d_b λ_{b,k}(1 + c_2/c_1, 1 + c_2/c_1),    (17)

with

    c_2/c_1 = 0 for random recursive trees,
    c_2/c_1 = −1/(r + 1) for preferential attachment trees with parameter r > 0,    (18)
    c_2/c_1 = 1/(m − 1) for m-ary increasing trees,

where d_n := ∏_{i=2}^{n−1} (i + 2c_2/c_1) / ∏_{i=2}^{n} (i − 1 + c_2/c_1), and λ_{n,k}(a, b) denotes the transition rates of the beta(a, b) coalescent.

Proof. Because of Lemma 3 it suffices to compute the transition rates of Π^0_n in its initial state Δ_n. Fix a subset C ⊆ [n] of size k := #C ≥ 2. Let ⟨C; n⟩ denote the partition of [n] consisting of the singletons in [n] \ C and the block C. Recall that T_n starts in state T_n(0) = T_n, where T_n has distribution μ_n. The rate at which we see a lifted tree T_n^L whose label set is ⟨C; n⟩ is given by

    λ^0_{n,k}(C) = ℓ̄(T_n) P{T_n^L has label set ⟨C; n⟩} = I_{n−k+1} I_{k−1}/I_n.    (19)

If Π^0_n is equal in distribution to a beta n-coalescent (up to a random time change), then we must have λ^0_{n,k} = d_n λ_{n,k}(a, b) for some positive constants d_n and parameters a, b > 0. We can find a, b by computing the ratio of consecutive rates

    λ^0_{n,k+1}(C)/λ^0_{n,k}(C) = (I_{n−k} I_k)/(I_{n−k+1} I_{k−1}) = (c_1(k − 1) + c_2)/(c_1(n − k) + c_2) = (k + (c_2/c_1 + 1) − 2)/(n − k + (c_2/c_1 + 1) − 1),

which yields a = b = c_2/c_1 + 1. To find d_n, notice first that

    λ_{n+1,k}/λ_{n,k} = (b + n − k)/(a + b + n − 2),

hence

    λ^0_{n+1,k}/λ^0_{n,k} = (d_{n+1}/d_n)(b + n − k)/(a + b + n − 2).

On the other hand,

    λ^0_{n+1,k}/λ^0_{n,k} = (I_{n−k+2} I_{k−1} I_n)/(I_{n+1} I_{n−k+1} I_{k−1}) = (c_1(n − k + 1) + c_2)/(c_1 n + c_2) = [(n + b − k)/(n + a + b − 2)] · [(n + a + b − 2)/(n + a − 1)],

and therefore

    d_{n+1} = [(n + a + b − 2)/(n + a − 1)] d_n = [(n + 2c_2/c_1)/(n + c_2/c_1)] d_n.    (20)

Moreover, d_2 = 1/(1 + c_2/c_1), and we obtain

    d_n = ∏_{i=2}^{n−1} (i + 2c_2/c_1) / ∏_{i=2}^{n} (i − 1 + c_2/c_1),    (21)

with the convention that an empty product equals one.
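As a sanity check, Theorem 2 can be verified numerically for binary increasing trees (m = 2, so c_1 = c_2 = 1, a = b = 2 and I_n = n!), using the standard beta-coalescent rates λ_{n,k}(a, b) = B(k − 2 + a, n − k + b)/B(a, b). A sketch (the helper names are ours):

```python
from math import factorial, gamma

def beta_function(a, b):
    return gamma(a) * gamma(b) / gamma(a + b)

def lam(n, k, a, b):
    """Collision rate lambda_{n,k}(a, b) = B(k - 2 + a, n - k + b) / B(a, b)."""
    return beta_function(k - 2 + a, n - k + b) / beta_function(a, b)

def d(n, c):
    """d_n from (21), with c = c2/c1; empty products equal one."""
    num = 1.0
    for i in range(2, n):
        num *= i + 2 * c
    den = 1.0
    for i in range(2, n + 1):
        den *= i - 1 + c
    return num / den

# Binary increasing trees: I_n = n!, c2/c1 = 1, a = b = 2.
for n in range(2, 9):
    for k in range(2, n + 1):
        lhs = factorial(n - k + 1) * factorial(k - 1) / factorial(n)  # rate (19)
        rhs = d(n, 1.0) * lam(n, k, 2, 2)                             # rate (17)
        assert abs(lhs - rhs) < 1e-9
```

For instance, for n = 2 both sides equal 1/2, and for n = 3, k = 2 both sides equal 1/3.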

References

[1] François Bergeron, Philippe Flajolet, and Bruno Salvy, Varieties of increasing trees, CAAP '92 (Rennes, 1992), 1992, pp. 24–48. MR1251994

[2] Philippe Flajolet and Robert Sedgewick, Analytic combinatorics, Cambridge University Press, Cambridge, 2009. MR2483235

[3] C. Goldschmidt and J. Martin, Random recursive trees and the Bolthausen-Sznitman coalescent, Electron. J. Probab. 10 (2005), no. 21, 718–745.

[4] A. Meir and J. W. Moon, On the altitude of nodes in random trees, Canad. J. Math. 30 (1978), no. 5, 997–1015. MR506256

[5] Alois Panholzer and Helmut Prodinger, Level of nodes in increasing trees revisited, Random Structures Algorithms 31 (2007), no. 2, 203–226. MR2343719

[6] Helmut H. Pitters, Lifting linear preferential attachment trees yields the arcsine coalescent (2016). Preprint available at https://sites.google.com/site/pittersh/arcsinecoalescent.pdf.

[Figure 3: Increasing planar 3-ary trees with at most 3 nodes, continued. Each row shows an increasing tree, its weight, and the corresponding planar 3-ary trees.]
