Network Slicing with Elastic SFC

Xu Li, Jaya Rao, Hang Zhang, and Aaron Callard
Huawei Technologies Canada Inc., Ottawa, Ontario, Canada
Email: {xu.lica, jaya.rao, hang.zhang, aaron.callard}@huawei.com

Abstract—Network slicing often involves the instantiation of certain network functionality into network nodes, possibly subject to some function chaining constraints. As an integral aspect of network slicing, software defined topology (SDT) techniques are used to define service-specific data plane logical topologies, including virtual function locations and the connections in between. Network slices are subsequently rendered based on the respective logical topologies via network function virtualization (NFV) and software defined networking (SDN) principles. In this paper, we introduce the novel concept of the elastic service function chain (SFC), which is an ordered list of virtual functions that may be optional or recursive, and address the problem of SDT with such chaining requirements. We mathematically formulate the SDT problem as a combinatorial optimization problem that includes a multicommodity flow problem and a bin packing problem as subproblems. Because the problem inherits NP hardness from the bin packing subproblem, we develop a heuristic algorithm to tackle it. The algorithm's effectiveness and performance are evaluated via numerical analysis and simulation study. Our investigation shows that the algorithm is more advantageous than the selected benchmark solutions, especially in high loading scenarios, where it converges significantly faster, accommodates remarkably more services, and considerably improves the network performance.

I. INTRODUCTION

Network slicing is an emerging concept of instantiating different 'slices' of network resources directed towards different network services (e.g. MBB, MTC, eHealth, etc.), which are typically offered to different customers or groups of customers and potentially have distinct QoS requirements and/or packet processing requirements. Network resources comprise both computing resources at network nodes (e.g. CPU, memory, storage, I/O) and transport resources (e.g. bandwidth) over network links. In the case of wireless networks, transport resources such as spectrum may also be present at nodes. A network slice, also known as a virtual network, corresponds to the allocation of pooled network resources for a given service such that the service appears substantially 'isolated' from other services from the customer's point of view. Here, isolation implies that one service's performance is not negatively impacted by the traffic of another service.

Within the framework of network slicing, the identification of which nodes should be instantiated with which network functions so as to provide a desired service-level capability is subject to not only network resource constraints but also function chaining constraints (if any exist). Typically, these constraints are satisfied via manual resource provisioning. To maximize the flexibility and responsiveness of the network to changing customer demands, especially in future networks, this procedure ought to be automated. Automated network slicing can be applied in reconfigurable network architectures, in which pooled network resources are commercial off-the-shelf (COTS) hardware components capable of configuration through virtualization approaches such as software defined

networking (SDN) [1] and network function virtualization (NFV) [2] principles.

A. Software defined networking (SDN)

SDN separates traffic management from traffic forwarding, enabling centralized control and enriched agility [1]. In the SDN control plane, one or a few SDN controllers manage network resources (specifically, transport resources) and control network traffic globally. Based on the status information from individual network elements and the overall traffic requirements, the controllers make traffic control decisions by solving a traffic engineering (TE) problem. According to the TE solution, they then instruct data plane hardware, for example via the OpenFlow [3] southbound APIs, to forward packets so as to optimize the operation of the entire network. TE jointly determines, for individual flows, the communication paths and the rate allocation along the paths, with respect to their QoS requirements (e.g. rate demand) and network resource constraints (e.g. link capacity), so that a network utility is maximized. Flows are split among their routing paths in the data plane, following the TE decision in the control plane.

B. Network function virtualization (NFV)

While SDN provides the framework for achieving centralized control of a network and, as a result, potentially globally optimal network performance, another degree of freedom for facilitating network operations is realized via the complementary concept of NFV [2]. It is imperative for the network to support multitudes of services, each possibly distinguished by a sequence of service-specific functions in addition to its peculiar traffic characteristics. In this regard, NFV allows implementing various network functionality, such as deep packet inspection and address translation, on demand as software entities on NFV-enabled COTS hardware, for example, OPNFV nodes [4]. As a consequence, NFV effectively decouples the network functions from the physical equipment.
This also implies that the SDN controllers themselves can be instantiated as virtualized functions in any high-capacity processor(s) residing within a data center, a server or a network node.

C. Software defined topology (SDT)

The combined SDN and NFV technologies indeed provide a compelling solution for facilitating the rapid roll-out of highly programmable and flexible services while achieving a high degree of scalability and cost effectiveness for both customers and network operators. There is a need for a component that joins SDN and NFV and guides their operations toward the network slicing goal. The concept of SDT [5] comes into play in this regard, and it has been endorsed by the METIS project [6] to define a new, flexible network architecture for future 5G systems. For a given service, there are a number of virtual functions (VFs) for the service traffic to visit in

a certain order, known as a service function chain (SFC), so as to meet the service-level functional requirement and/or to fulfill the networking purpose. The goal of SDT is to determine the locations of the VFs in the network and to define a logical topology of the function locations. The nodes and links in the logical topology are respectively accompanied by computing resource requirements and traffic QoS requirements. A network slice is then rendered based on the logical topology for the service using NFV and SDN techniques.

D. Our contribution

According to [5], [6], the procedure of network slicing is basically divided into two steps, which we refer to as slice framing (by SDT) and slice rendering (by NFV and SDN) here. This separation enables a network operator to perform dynamic slice rendering, in other words, dynamic resource allocation with a stable slice skeleton, for optimizing network resource utilization and service-level QoS/E. In this work, we address the SDT-centric step of slice framing. Our contribution comes mainly from the following two aspects: the introduction of elastic SFC, and the SDT problem formulation and solution. A SFC that contains optional or vertically-recursive VFs is considered elastic. In the literature, only non-elastic SFCs have been studied. Elastic SFC enables a new dimension of flexibility in network slicing. It gives the network system the freedom of adding/removing VFs to help minimize customers' capital cost and/or to accommodate or satisfy more service requests. With elastic SFC, not only slice rendering but also slice framing may become adaptive to the network dynamics. To solve the SDT problem with elastic SFC, we formulate it as a combinatorial optimization problem based on an augmented SFC graph that captures services' function chaining constraints and location constraints. We show that the problem is NP hard and devise a fast multi-stage heuristic algorithm to tackle it.
For the purpose of performance evaluation, we compare the algorithm with an accelerated branch-and-bound technique (AB&B) and a random placement algorithm (RS) through numerical analysis. It is shown that the proposed algorithm is increasingly more advantageous than the AB&B and the RS as the traffic load goes up. For example, in the highest loading scenario tested, it produces results four times faster than the AB&B with less than 6% optimality loss, and accommodates twice as many service instances as the RS. The effectiveness of the algorithm is further illustrated through a simulation study of an MTC utility meter reading use case, where significant performance gain is observed for both the MTC service traffic and the background traffic.

The remainder of the paper is organized as follows. We briefly summarize related work and pinpoint the difference of this work from the prior art in Sec. II. We formally introduce the concept of elastic SFC in Sec. III and present the SDT problem formulation and the heuristic solution respectively in Secs. IV and V. Afterward, we report our numerical analysis and simulation study in Sec. VI. Finally, the conclusions drawn from the investigation are presented in Sec. VII.

II. RELATED WORK

The literature most related to SDT is virtual network embedding and NFV management and orchestration, often addressed in the context of data-center networks. It can be divided into three categories: architecture and protocol, VF placement, and dynamic resource allocation. The first category

is mainly focused on functionality separation (modularization) [7] and information representation [8], without being concerned with the underlying supporting algorithms. The third category targets the problems of dynamically assigning computing resources to VFs [9], bandwidth resources to traffic flows [10], individually or jointly [11], to enhance the system performance on the fly; such solutions are used in the slice rendering step. These two categories of research are clearly different from this work, as we focus here on the development of technical solutions to VF placement.

VF placement is often jointly performed with virtual link provisioning. The prior art on VF placement may be classified as non-recursive placement and placement with horizontal recursion. Non-recursive placement [12]–[14] aims to find a one-to-one mapping between a VF and a network location, with respect to function chaining requirements, computing resource limits, bandwidth constraints, and/or network loading conditions. The slightly generalized problem with horizontal recursion [15] allows a VF to be instantiated at multiple places. These VF placement problems are usually formulated as binary or mixed integer programming problems and solved through relaxation or heuristics. In some solutions, assumptions are made about the network topology, such as a tree-like structure [13], and decisions are taken on a per traffic flow basis [15]. Due to space limitations, we are not able to provide a detailed review of these works. Readers are referred to the original papers for elaboration or to [16] for a comprehensive survey on the subject matter. In the following, we draw a clear distinction between this work and the prior art:
• The previous VF placement research considered only mandatory VFs, each of which requires at least one instance to be created in the network, without vertical recursion. In such a scenario, the function chaining requirement, if any exists, is not elastic, but stiff.
• The SDT problem being studied here is an extended VF placement problem that uniquely supports elastic SFCs, where optional VFs or vertically-recursive VFs are present and the decision on the occurrence or recurrence of such VFs is part of the solution. Unlike horizontal recursion, which limits the number of instances of a VF, vertical recursion defines the number of times a VF appears as a separate function in the SFC.

III. ELASTIC SERVICE FUNCTION CHAINING

A service is typically associated with a function graph (FG), a.k.a. VF forwarding graph (VF-FG), which contains a collection of service function chains (SFCs) [17], each being an ordered list of VFs. A VF is either a virtual service function (VSF) or a virtual networking function (VNF). A VSF is service-dependent, reflecting service business logic, and is therefore normally defined by a customer; whereas a VNF is offered by the network operator for empowering the networking process. A customer is not usually a subject-matter expert in networking and very likely supplies a partial FG involving only VSFs. The network operator customizes a networking procedure for a given service in accordance with the service's characteristics. It takes the responsibility of completing the customer-supplied partial FG with the necessary VNFs for implementing the networking procedure. In a SFC, there are traffic sources, traffic destinations and

[Figure: (a) Three types of functions: mandatory, dependent, optional; (b) A SFC of three functions (F1, F2, F3) with recursion, from Source to Destination]

Fig. 1: SFC illustration

a number of functions in between. The sources may be derived by the network operator from the traffic description provided by the customer. For example, a source may represent the set of base stations or network nodes serving a particular geographic region from which the service traffic originates. A similar argument can be made for the destinations. The functions are connected one to another from the sources toward the destinations, implying the sequence in which they shall be traversed by the service traffic. Backward connections are used to indicate recursion of chain segments. A service can be divided into a number of primitive sub-services such that each of them has a FG that contains a single SFC with a single destination; SDT can then be applied to the sub-services, rather than to the complex, original service. In this case, function sharing and traffic connection may have to be enforced among the sub-services so that a SDT decision for the original service can be recovered from those for the sub-services. Without loss of generality, in the sequel we consider primitive service scenarios and use SFC and FG interchangeably.

We consider three types of VFs: mandatory functions, optional functions, and dependent functions. Figure 1(a) shows their symbolic denotation. A logical service function path (SFP) is a path linking the source and the destination in a data plane logical topology computed by the SDT. There may be multiple logical SFPs in a logical topology. Along a logical SFP, a mandatory function must appear at least once; an optional function may not be present at all; and the existence of a dependent function depends on the appearance of its prior function.
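To make the path semantics of the three VF types concrete, the following sketch checks a candidate SFP against a chain specification. This is an illustration in Python only; the encoding and the names (e.g. CHAIN_SPEC) are ours and not part of the paper or of any standard.

```python
# Sketch: validating a logical SFP against the three VF types of Fig. 1(a).
# Each chain entry is (name, type, prior), where 'prior' is the function a
# dependent function relies on. The encoding is illustrative only.
CHAIN_SPEC = [
    ("F1", "optional", None),
    ("F2", "dependent", "F1"),   # F2 may exist only if F1 appears before it
    ("F3", "mandatory", None),
]

def sfp_is_valid(path):
    """path: ordered list of function names placed along a logical SFP."""
    for name, ftype, prior in CHAIN_SPEC:
        if ftype == "mandatory" and name not in path:
            return False      # a mandatory function must appear at least once
        if ftype == "dependent" and name in path:
            if prior not in path[:path.index(name)]:
                return False  # a dependent function needs its prior function
    return True

print(sfp_is_valid(["F1", "F2", "F3"]))  # True: dependency satisfied
print(sfp_is_valid(["F3"]))              # True: optional F1 (and F2) absent
print(sfp_is_valid(["F2", "F3"]))        # False: F2 without its prior F1
```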
Typical examples of the three VF types are non-exhaustively listed as follows: i) mandatory (routing and switching functions), ii) optional (packet aggregation, load balancer, DPI, caching, protocol accelerator), and iii) dependent (packet de-aggregation, traffic diagnosis). Every VF may be associated with a traffic rate reduction/inflation factor indicating the impact of the function on the traffic going through it. Figure 1(b) shows a SFC of three functions, where the segment F1-F2 is allowed to recur up to k times, as indicated by the maximum recursion count over the backward link. The value of k will always be smaller than the number of NFV-enabled nodes in the network if a VF may be instantiated at most once at a location. Note that the segment may appear at most k+1 times, as its normal occurrence is counted separately. Within the framework of SFC, this type of recursion is defined as vertical recursion. In theory, any of the above VF types can be configured to be vertically recursive. In practice, vertical recursion depends on many factors such as network topology, network loading, the nature of the function, etc. A SFC that contains optional or vertically-recursive VFs is referred to as an elastic SFC. In the literature, only non-elastic

SFCs have been studied. Elastic SFC enables a new dimension of flexibility in network slicing, empowering the network system to add and/or remove VFs. With elastic SFC, not only slice rendering but also slice framing may become adaptive to the network dynamics. As such, the network system becomes more intelligent and self-evolving, able to intervene in the network operator's decision making. For example, it can help minimize customers' capital cost in the context of smart data pricing [18] (where the pricing of VSF instantiation and management is situational) by not instantiating an optional VSF or by reducing the vertical recursion of a VSF. In another example, it may autonomously alter the networking procedure of a service by dynamically removing an optional VNF or increasing the vertical recursion of a VNF according to network loading, so as to accommodate or satisfy more service requests.

IV. SDT WITH ELASTIC SFC

Assuming that the network operator has completed the FGs of individual services, the goal of SDT is to compute service-specific data plane logical topologies for all services according to the respective FGs and other necessary information. The computation may be carried out jointly for multiple services together, or incrementally one service at a time. When function sharing is allowed across services, node overlapping may appear between the logical topologies. In this section, we take an optimization approach to tackle the SDT problem. We model the performance of a VF as a data processing rate (bits per second), in connection with computing resources. Such a performance model is necessary so that the impact of computing resources on the function location decision can be described in the SDT problem statement. Then, we augment the SFC (i.e. FG) of each service. SFC augmentation ought to be performed in accordance with the types of the comprising functions. Based on the augmented SFCs (A-SFCs), we are able to formulate the SDT problem as an optimization problem.
Finally, we solve the SDT optimization problem, for example using an optimization toolkit, to obtain a solution.

A. Function performance modeling

In the literature, it has been shown that high-level service objectives can be transformed into low-level resource allocation policies [19]. For each VF, the provider (either a customer or the network operator) specifies a minimum (rigid) allocation and a maximum incremental (fluid) allocation on a per-computing-resource-type basis. The former must be met in order for the VF to operate at the minimum acceptable level of performance, while the latter should be satisfied as much as possible, proportionally among all the resource types, for delivering the best possible performance [20]. Requirements on elementary (native) resources and aggregate (virtualized) resources can further be differentiated, as the latter may impose overhead on the VF performance [21]. For simplicity, we ignore the difference between native resources and virtualized resources, because a mapping between them can be generated through offline profiling according to a previous study [22]. We view the fluid resource requirement of a VF f as a multi-dimensional cube, with each dimension corresponding to a unique resource type. The cube is not likely a regular cube, but skewed depending on the importance of individual dimensions in the function performance. As part of the SDT goal, a resource allocation cube proportional to the requirement cube is to be determined at each function

[Figure: (a) Unfolded recursive segment; (b) Processed optional segment; (c) Augmented SFC (A-SFC), with each of F1, F2, F1', F2', F3 duplicated at locations L1 and L2 between Source and Destination]

Fig. 2: SFC augmentation in correspondence to Fig. 1(b), with maximum recursion count k = 1

location. The decision t is a scalar value, in the range 0 to 1 inclusive, interpreted as a satisfaction ratio. The cubes can be normalized among all possible applications in order to make diverse computing resource requirements/allocations directly comparable. The cube-based measurement approach is known as the workload allocation cube (WAC) [23]. We measure the performance P_f of VF f as a data processing rate, in bits per second, and model its relation with the resource allocation decision by a resource efficiency (RE) factor υ_f, which is clearly function- and platform-dependent. Then, we compute P_f by the following linear function: P_f = π_f + υ_f × t, where π_f is the minimum performance, attributed to the rigid resource allocation. Furthermore, we define π_f (and υ_f) to be identical across all computing platforms and vary the resource requirements to reflect platform differences. We would like to note that building a generic function performance model is not trivial, because it heavily depends on function logic, computing platform and implementation detail. Nevertheless, for a given function it is possible to apply experiment-based methods. For example, we may exploit the WAC technique [23] and monitor the function performance under different WAC decisions; then, we apply curve fitting on the monitoring results to extract the model. In order to remain focused on the primary SDT problem and not be distracted by function performance modeling, which warrants separate research of its own, we chose to use the linear model above as a proof of concept. The solution we propose can readily accommodate any other model as long as it is convex.

B. SFC augmentation

Before augmenting a given SFC, we pre-process the SFC through two steps. At the first step, we unfold recursive segments to remove loops.
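As a concrete illustration of this first step, a recursive segment can be unfolded as follows. This is a minimal Python sketch under our own flat-list representation of the chain, not the paper's implementation; dependency arcs across segments are omitted for brevity.

```python
# Sketch of the first pre-processing step: unfolding a recursive segment.
# The SFC is a flat list of function names; a recursive segment is given by
# its start/end indices and a maximum recursion count k.

def unfold(chain, seg_start, seg_end, k):
    """Duplicate chain[seg_start:seg_end+1] adjacently k times, priming
    the names of the duplicates (F1 -> F1', F1'', ...)."""
    segment = chain[seg_start:seg_end + 1]
    out = chain[:seg_end + 1]
    for i in range(1, k + 1):
        out.extend(f + "'" * i for f in segment)
    out.extend(chain[seg_end + 1:])
    return out

# The SFC of Fig. 1(b): the segment F1-F2 may recur at most k = 1 extra time.
print(unfold(["F1", "F2", "F3"], 0, 1, 1))   # ['F1', 'F2', "F1'", "F2'", 'F3']
```

The result matches the loop-removed chain of Fig. 2(a); the segment appears k+1 times in total, as its normal occurrence is counted separately.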
For each recursive segment with up to k recursions (indicated by a backward connection in the SFC), we duplicate it adjacently along the chain k times, and we create arcs from a VF in any of these segments (the original or a duplicate) to a depending VF in any of the subsequent duplicate segments, to enable function dependency

across segments. The unfolding step can best be described by an example, as shown in Fig. 2(a), where the segment F1'-F2' is the duplicate of the segment F1-F2 and the duplicate function F2' may depend on either F1 or F1'. At the second step, we ignore the arcs created for enabling cross-segment function dependence at the first step and process the SFC for optional functions. We identify optional segments. An optional segment starts at an optional function and ends at the first subsequent non-dependent function along the chain. For each optional segment, we create arcs between its prior segment and its next segment. In Fig. 2(a), the loop-removed SFC has F1-F2 and F1'-F2' as optional segments. The result derived after the second step is shown in Fig. 2(b).

An augmented SFC (A-SFC) is a directed, acyclic graph, denoted as G(N, A), in which N is the node set and A the arc set. Given an arc a ∈ A, we use a^src to denote the source end of a and a^dst the destination end. An A-SFC augments a SFC in that it expands the pre-processed version of the SFC by duplicating each VF at every possible location (an NFV-enabled network node). In an A-SFC, the nodes N are therefore divided into three disjoint subsets: the source node set S, the destination node set D, and the eta node set E. A source node has only out-going arcs, while a destination node has only in-coming arcs. Eta nodes are intermediate nodes that carry both out-going arcs and in-coming arcs; each of them corresponds to a unique pair of function and location and is denoted as η_p^f, implying the presence of function f at location p. Given an eta node e, e ≡ η_p^f, we denote by e^fun the respective function and by e^loc the respective location. Arcs are created among the nodes N according to the connectivity defined in the pre-processed SFC. Figure 2(c) shows the final A-SFC.

C. Problem formulation

Table I lists the major denotations to be used throughout the rest of the paper.
Many of them have a service-specific version, for which we will use a superscript to distinguish. For example, the eta node set that belongs to service v is denoted as E^v; γ_s^v denotes the data rate of source s in service v; the incoming traffic rate related to service v at an eta node e is represented by γ_e^{v+}; and so forth. For ease of presentation, for every eta node e we define π_e = π_{e^fun} and υ_e = υ_{e^fun}. Given a rigid resource allocation decision y_e, y_e ∈ {0, 1}, and a fluid resource allocation decision t_e, 0 ≤ t_e ≤ 1, the performance of e, namely the performance of the instance of function e^fun at location e^loc, is computed according to the model presented in Sec. IV-A as P_e = π_e y_e + υ_e t_e.
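The per-eta-node performance model can be illustrated as follows. This is a Python sketch with hypothetical numeric values; it also enforces that no fluid allocation happens without a rigid allocation (cf. constraint (14)).

```python
# Sketch of the linear function-performance model of Sec. IV-A applied to an
# eta node: P_e = pi_e * y_e + upsilon_e * t_e, where y_e in {0, 1} is the
# rigid allocation decision and t_e in [0, 1] the fluid satisfaction ratio.

def performance(pi_e, upsilon_e, y_e, t_e):
    assert y_e in (0, 1) and 0.0 <= t_e <= 1.0
    assert t_e <= y_e   # no fluid allocation at an un-selected location
    return pi_e * y_e + upsilon_e * t_e

# A VF with 100 Mb/s minimum performance and 400 Mb/s fluid headroom:
print(performance(100e6, 400e6, 1, 0.5))   # 300000000.0 b/s at t_e = 0.5
print(performance(100e6, 400e6, 0, 0.0))   # 0.0: function not instantiated
```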

The performance of a service is reflected in two aspects: traffic performance and function performance. Along the A-SFC, they are coupled by well-known queuing theory. That is, at any eta node e, the incoming traffic rate γ_e^+ should not be larger than the processing rate P_e, in order not to cause congestion. Maximizing a service's performance consequently implies maximizing the service's traffic rate satisfaction with respect to the computing resource allocation on eta nodes confined by this constraint. In the SDT problem presented here, we assume a VF can be instantiated at most once at a single location, and we want to maximize the minimum service performance among all services present in the network while minimizing the cost, which is defined as the total cost on NFV-enabled network nodes, logical arcs (in the A-SFC graph), and network

links (in the network graph); namely,

C(x, y, z, r) = µ Σ_{p∈P} ( o_p z_p + Σ_{e∈E: e^loc = p} ς_e y_e ) + ν Σ_{a∈A} ϕ_a r_a + (1 − µ − ν) Σ_{l∈L} ϕ_l x_l,

where 0 ≤ µ, ν ≤ 1 are weighting factors and µ + ν ≤ 1. Define the minimum traffic satisfaction among all sources across all services as λ = min_{s∈S^v, v∈V} λ_s^v. We formulate the SDT problem as a combinatorial optimization problem with a maximization objective of

U(x, y, z, r, λ) = (1 − ω)λ − ω C(x, y, z, r),

where 0 ≤ ω ≤ 1 is a weighting factor. The mathematical formulation is presented in Problem 1.

TABLE I: Key Denotations

A: arc set (in a partial A-SFC)
L: link set (in a full A-SFC)
R: relay node set
S: source node set
N: total node set, N = E ∪ R ∪ S ∪ D
F: total service function set
P: total function PoP set, P ⊆ R
E: total eta node set
W: computing resource set
V: service set
C: set of must-not-collocate function groups
γ_s: data rate of source s
β_a: statistical capacity of arc a ∈ A
β_l: capacity of link l ∈ L
ϕ_a: cost factor of arc a ∈ A
ϕ_l: cost factor of link l ∈ L
o_p: basic operational cost of point p ∈ P
d_p: maximum number of functions allowed at point p ∈ P
g_f: maximum number of instances allowed for function f ∈ F
α_p^w: maximum amount of computing resource w at point p ∈ P
ρ_e: traffic reduction factor at eta node e ∈ E
ς_e: maintenance cost of eta node e ∈ E
κ_e^w: rigid requirement of eta node e on computing resource w ∈ W
κ'_e^w: fluid requirement of eta node e on computing resource w ∈ W
π_e: minimum performance of the function of eta node e ∈ E
υ_e: fluid resource efficiency of the function of eta node e ∈ E
γ_e^+: incoming traffic rate to an eta node e ∈ E
x_l: rate allocation over link l ∈ L
y_e: binary indicator of non-zero incoming rate to eta node e
z_p: binary indicator of non-zero incoming rate to point p ∈ P
t_e: fluid resource allocation ratio of eta node e ∈ E
q_p^w: utilization ratio of computing resource w at point p ∈ P
λ_s: proportion at which the traffic rate of source s is satisfied
λ: minimum rate satisfaction among all sources
δ: maximum computational overloading among all eta nodes e ∈ E
σ: maximum resource over-utilization among all NFV-enabled nodes p ∈ P
M: a suitably large value

Problem 1 (SDT): max U(x, y, z, r, λ) − M(δ + σ), subject to

Σ_{a∈A^v: a^src = s} r_a^v = λ_s^v γ_s^v,  ∀s ∈ S^v, ∀v ∈ V    (1)
Σ_{a∈A^v: a^dst = e} r_a^v = γ_e^{v+},  ∀e ∈ E^v, ∀v ∈ V    (2)
Σ_{a∈A^v: a^src = e} r_a^v = ρ_e γ_e^{v+},  ∀e ∈ E^v, ∀v ∈ V    (3)
Σ_{v∈V} r_a^v ≤ β_a,  ∀a ∈ A    (4)
λ ≤ λ_s^v,  ∀s ∈ S^v, ∀v ∈ V    (5)
Σ_{l∈L: l^src = a^src} x_l^v = r_a^v,  ∀a ∈ A, ∀v ∈ V    (6)
Σ_{l∈L: l^dst = a^dst} x_l^v = r_a^v,  ∀a ∈ A, ∀v ∈ V    (7)
Σ_{l∈L: l^src = n} x_l^v = Σ_{l∈L: l^dst = n} x_l^v,  ∀n ∈ R, ∀v ∈ V    (8)
Σ_{v∈V} x_l^v ≤ β_l,  ∀l ∈ L    (9)
Σ_{v∈V} γ_e^{v+} ≤ M y_e,  ∀e ∈ E    (10)
y_e ≤ z_{e^loc},  ∀e ∈ E    (11)
Σ_{e∈E: e^fun = f} y_e ≤ g_f,  ∀f ∈ F    (12)
Σ_{e∈E^v: e^fun ∈ F^c, e^loc = p} y_e ≤ 1,  ∀p ∈ P, ∀c ∈ C^v, ∀v ∈ V    (13)
t_e ≤ y_e,  ∀e ∈ E: e^loc = p, ∀p ∈ P    (14)
Σ_{e∈E^v: e^loc = p} y_e ≤ d_p,  ∀v ∈ V, ∀p ∈ P    (15)
γ_e^+ − π_e y_e − υ_e t_e ≤ δ,  ∀e ∈ E: e^loc = p, ∀p ∈ P    (16)
Σ_{e∈E: e^loc = p} (y_e κ_e^w + t_e κ'_e^w) = q_p^w α_p^w,  ∀w ∈ W, ∀p ∈ P    (17)
q_p^w − z_p ≤ σ,  ∀w ∈ W, ∀p ∈ P    (18)
q, r, t, x ≥ 0;  δ, σ ≥ 0    (19)
y, z ∈ {0, 1}    (20)

It should be noted that in real-world applications, other SDT objectives are possible, up to the network operator's definition. This SDT problem contains two sub problems: a hierarchical multi-commodity flow (HMCF) problem over the A-SFC graph and the network graph, and a network cost minimization (NCM) problem on the NFV-enabled nodes. The first group of constraints, (1) - (9), belongs to the HMCF sub problem; the second group, (10) - (18), belongs to the NCM sub problem. The two sub problems are coupled through the eta node incoming rate variables γ_e^{v+}. That is, an eta node with non-zero incoming rate decided within the HMCF sub problem implies a positive placement decision of the respective VF at the respective location, subject to the feasibility check within the NCM problem and the overall maximization objective.

We first examine the HMCF sub problem, in which two levels of MCF decision are made. At the top and first level, it decides whether and how each arc in the A-SFC is used. The per-service constraints (1) - (3) and the per-arc constraint (4) are applied for this level of decision. In particular, Constraint (1) computes the allocated source rate, and Constraint (2) computes the incoming rate to each eta node. Constraint (3) ensures flow conservation at each eta node with respect to the eta node's traffic rate reduction factor. Constraint (4) makes sure the rate allocation for each arc is not larger than an upper bound, which may be configured as infinity. Constraint (5) computes the minimum source rate satisfaction among all services. At the second and bottom level, the HMCF problem decides how to support the top-level decision using physical links, by treating each arc as a flow and applying conventional MCF constraints. Constraints (6) and (7) ensure flow satisfaction; Constraints (8) and (9) are respectively the flow conservation constraint and the link capacity constraint. It is worth noting that, as a common practice in mathematical programming, aggregation techniques may be applied to flows and nodes to reduce problem complexity [24]. For example, flows destined to the same node may be aggregated into a single flow; destinations of flows originating from the same node may be aggregated as a single node; and so on.
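To illustrate how the two sub problems couple at an eta node, the following sketch evaluates a candidate solution against simplified single-service forms of constraint (3) (flow conservation with the rate reduction factor), constraint (10) (the big-M coupling of rate to placement), and the overloading term bounded by δ in constraint (16). It is a Python illustration only; the data layout and numbers are ours.

```python
# Sketch: checking a candidate SDT solution at one eta node against a few
# representative constraints of Problem 1 (single service, for brevity).

M = 1e12   # the suitably large big-M value

def check_eta_node(r_in, r_out, rho, y, pi, upsilon, t):
    """Return (flow conservation holds, big-M coupling holds, overloading)."""
    ok_flow = abs(r_out - rho * r_in) < 1e-9         # constraint (3)
    ok_bigM = r_in <= M * y                          # constraint (10)
    overload = max(0.0, r_in - (pi * y + upsilon * t))  # contributes to delta
    return ok_flow, ok_bigM, overload

# A packet-aggregation VF halving the traffic (rho = 0.5), placed (y = 1),
# with 100 Mb/s rigid performance and a quarter of 400 Mb/s fluid headroom:
flow_ok, bigm_ok, delta = check_eta_node(
    r_in=2e8, r_out=1e8, rho=0.5, y=1, pi=1e8, upsilon=4e8, t=0.25)
print(flow_ok, bigm_ok, delta)   # True True 0.0
```

Here the processing rate exactly matches the incoming rate, so the node contributes nothing to the penalty term δ; raising r_in without raising t would yield a positive overloading.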

Problem 2 (Simplified SDT): max U(x, q, r, λ) − M(δ + σ), subject to (1) - (9), (14) - (19). This problem is a simplified form of Problem 1 in that it takes y, z as input rather than as decision variables and does not apply the function location conflict constraint.

The SDT problem is presented for clarity in a straightforward way, without applying aggregation. Let us now turn our attention to the NCM sub problem, which is a variant of the well-known bin packing problem. Constraints (10) and (11) respectively convert the eta node incoming rate decision to a function placement (location selection) decision and compute the NFV-enabled node activation decision. Constraint (12) is the horizontal recursion constraint, confining the number of selected eta nodes of a function to be below a given upper bound. Constraint (13) is the function location conflict constraint, which indicates that the involved functions cannot be collocated. Constraint (14) enforces that no fluid resource allocation happens at any un-selected function location. Constraint (15) states that the number of functions hosted at the same location cannot be larger than a maximum value. Constraint (16) enforces δ to equal the maximum overloading, i.e. the difference between the incoming traffic rate and the data processing rate at an eta node, among all the eta nodes in the solution. Constraint (17) computes the resource utilization ratio at each NFV-enabled node; Constraint (18) enforces σ to equal the maximum resource over-utilization at the NFV-enabled nodes in the solution. We penalize the solution by δ and σ, as indicated by the last term in the objective function.

V. A FAST HEURISTIC SOLUTION

The SDT problem is NP hard because it contains the bin packing problem, which is known to be NP hard, as a sub problem. In general, there is no polynomial-time solution to it. The optimal solution presented in the previous section requires solving the SDT problem exactly, and is thus not practical when the problem size is large. A common practice when solving such a combinatorial problem is to apply the classic branch-and-bound approach [25].
In this section, we tackle the NP-hardness through a heuristic algorithm that exploits both the branch-and-bound approach (a simplified version that disallows backtracking, for fast convergence) and the special structure of the SDT problem. The algorithm maintains two eta node sets, E0 and E1. The former contains eta nodes whose y values are fixed to 0; the latter contains eta nodes whose y values are fixed to 1. Let Ē denote the remainder set, i.e., Ē = E \ (E0 ∪ E1). Initially, both E0 and E1 are empty, and Ē is identical to E. Furthermore, the algorithm maintains the most recent SDT optimization objective function value in a variable Obj. The algorithm is composed of three successive steps: bootstrapping, filtering, and greedy selection. We elaborate on them respectively below. For ease of presentation, we refer to the status of an eta node as 'off' (resp. 'on') if the eta node's y value is set to 0 (resp. 1).

A. Bootstrapping through relaxation

Iteratively perform the following steps to update E0 and E1, up to a maximum number of times or until they stabilize.

1) Relax y and z to be fractional variables in Problem (1) and solve the relaxed problem with the y variables of eta nodes in E1 treated as constants. If the problem is infeasible, terminate the current iteration; otherwise, record the objective function value in Obj and proceed with the subsequent steps.
2) Choose two threshold values 0 ≤ θ1 ≤ θ2 ≤ 1. In the fractional solution, identify the group of eta nodes with a y value not smaller than θ2 and move them into a temporary set E1′. Among the remaining eta nodes, find those with a y value not larger than θ1 and move them into a temporary set E0′. The horizontal recursion constraint (12) must be respected.
3) Verify the function location conflict constraint (13) against E0′ and E1′, and remove from the two sets the eta nodes involved in a violation. Then, set E0 = E0′ and E1 = E1′, and terminate the current iteration.

If Obj remains uninitialized, implying that the SDT problem is infeasible, the algorithm terminates; otherwise, it proceeds.

Comment 1: The thresholds θ1, θ2 should be defined on a per-VF basis so as to accommodate the heterogeneous y value distributions of different VFs. They should be selected carefully such that the horizontal recursion constraint (12) is not violated, while allowing sufficient flexibility for subsequently making function placement decisions.

B. Filtering by enforcing location conflict constraints

In Ē, at each location p ∈ P, identify the groups of eta nodes that are not allowed to be turned on simultaneously according to the function location conflict constraint (13). Define two temporary sets E0′ and E1′. Process each group independently via the following steps:

1) Initialize all the eta nodes in the group to be off, with all the other eta nodes in Ē being on.
2) Test each eta node in the group individually as follows: turn the eta node on, evaluate z, solve Problem (2) with y and z treated as constants, and record the problem's feasibility and objective function value.
3) From the test results, find the eta node whose being turned on leads to feasibility, or to the maximum increase (or least decrease) in the objective function value (by comparison with Obj); move it into E1′ and the rest into E0′, and update Obj.

Then, update E0 and E1 by merging E0′ and E1′ into them, i.e., E0 = E0 ∪ E0′ and E1 = E1 ∪ E1′.

Comment 2: In case of a tie in the eta node selection of Step B.3), a choice can be made randomly or according to a selection policy. The policy can be tailored in favor of some particular aspect(s), e.g., a performance-first, cost-first, or constraint-first policy. These three policies respectively select the eta node yielding the maximum performance increase (e.g., in terms of minimum or mean performance), the minimum cost increase, or the minimum constraint violation. Regardless of the policy used, a random choice may sometimes still be needed for tie breaking.

C. Greedy selection by turning off eta nodes

Temporarily turn on all the eta nodes in Ē. Solve Problem (2) in this temporary situation and update Obj with the result. In Ē, identify the groups of eta nodes corresponding to individual optional functions. Sort the groups according to their order in the SFC. Sequentially process each group along the sorted list through the following iterative procedure, which terminates when the group becomes empty.

Fig. 3: A logical view of the network used in simulation

1)

Check whether the initial members of the group handled in the previous iteration (if any) are all off. If they are, move all the eta nodes of the functions that depend on the previous group (directly or through a dependency chain) into E0.
2) Test each eta node in the group individually as follows: turn the eta node off, evaluate z, and solve Problem (2) with y and z treated as constants; if the problem is infeasible, move the eta node into E1.
3) From the test results, find the eta node whose being turned off leads to the maximum increase (or least decrease) in the objective function value and move it into E0. Ties can be broken by a pre-defined policy.
4) From the test results, further find the eta nodes that always have a total incoming rate (γ+) smaller than a threshold value θ3 and move them into E0.
5) Update Obj. The problem may need to be re-solved to reflect the new status of the eta nodes.

Then, process the eta nodes in Ē, which all belong to mandatory functions, in the same way as above. Solve Problem (2) afterward with E0 and E1 to obtain a base solution. Modify the base solution by moving eta nodes with low incoming traffic rates from E1 to E0 so as to satisfy the horizontal recursion constraint for each function. Re-solve Problem (2) with the updated E0 and E1 to obtain the final solution.

Comment 3: The enforcement of the horizontal recursion constraints can instead be carried out through an alternative procedure similar to Steps C.2)-C.5), at increased complexity: process the functions whose horizontal recursion constraints are violated in their order in the SFC; when processing a function, turn off eta nodes incrementally, one at a time, until the horizontal recursion constraint is satisfied; each processing result is applied immediately in the subsequent function processing.

VI. PERFORMANCE EVALUATION

This section provides the key evaluation results of the proposed multi-stage heuristic algorithm, which addresses the underlying SDT problem.
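The greedy turn-off idea of Step C can be condensed into the following sketch (an illustration, not the paper's exact procedure; the names and the `evaluate` stand-in are ours, where `evaluate(off_set)` plays the role of solving Problem (2) with y, z fixed):

```python
def greedy_turn_off(group, evaluate, obj):
    """Greedily turn off eta nodes in a group, one per iteration.

    evaluate(off_set) -> (feasible, objective) stands in for solving
    Problem (2) with the nodes in off_set forced off. Nodes whose
    removal breaks feasibility are pinned on (step C.2); otherwise the
    node whose removal hurts the objective least, or helps it most, is
    turned off (step C.3), until the group is exhausted."""
    off = set()          # nodes turned off so far
    E0, E1 = set(), set()
    remaining = set(group)
    while remaining:
        scores = {}
        for node in remaining:
            feasible, val = evaluate(off | {node})
            if not feasible:
                E1.add(node)            # must stay on
            else:
                scores[node] = val
        remaining -= E1
        if not scores:
            break
        best = max(scores, key=scores.get)  # least-damaging removal
        off.add(best)
        E0.add(best)
        remaining.discard(best)
        obj = scores[best]              # update Obj (step C.5)
    return E0, E1, obj
```

The per-VF incoming-rate threshold θ3 of step C.4 and the final horizontal-recursion repair pass are omitted here for brevity.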
Additionally, end-to-end traffic performance results obtained using a C++ based network simulator are given. As shown in Fig. 3, the network topology used in the simulations consists of 111 nodes arranged in layers surrounding an Application Server (AS), i.e., the node with ID 110. Each node in a layer is connected via directional links to two nodes in the adjacent layer closer to the AS, in both the uplink and downlink directions. The link capacities are incremented by 1 Gbps across layers, beginning with 1 Gbps at the outermost layer. The network is divided into two parts: i) the Radio Access Network (RAN), consisting of 90 nodes with IDs ranging from 0 to 89, and ii) the Core Network (CN), consisting of 21 nodes with IDs ranging from 90 to 110. Note that a number of RAN and CN nodes (with IDs 41, 50, 71, 76, 92, 95, 110) are configured to be NFV-enabled and considered as points of presence (PoPs) at which the service-related VFs can be instantiated. Regarding the packet header processing capability of the network nodes, it is assumed that each RAN node and each CN node has access to 16 CPUs and 64 CPUs per incoming link, respectively. As each packet occupies a single CPU for header processing, a RAN node (resp. CN node) may simultaneously process 16 (resp. 64) packets coming from the same incident link. All the NFV-enabled nodes are configured to have the same packet header processing capability as CN nodes.

For the evaluation of the SDT technique, an MTC (machine type communications) use case pertaining to a smart utility meter reading service is considered. Here, the AS is assigned as the sink node to which all MTC traffic is ultimately routed in the uplink direction. The SFC of the modeled MTC service is assumed to consist of two types of VFs, namely an optional packet aggregation function and a dependent de-aggregation function, which may reoccur horizontally and together reoccur vertically. Each of the two VFs has certain computing resource requirements and a certain impact on the traffic rate. Observe that, in order to minimize the amount of traffic traversing the network, it is desirable to place the packet aggregation VF close to the traffic sources while the de-aggregation VF is placed at the sink.
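A toy calculation shows why aggregation near the sources is desirable. The model below is our own simplification (a single line of hops, a fixed aggregation ratio), not the paper's formulation:

```python
def transported_rate_hops(src_rates, hops_to_agg, hops_agg_to_sink, agg_ratio):
    """Toy model: each source sends its flow hops_to_agg hops to an
    aggregation VF, which compresses the combined rate by agg_ratio;
    the aggregate then travels hops_agg_to_sink hops to the sink.
    Returns total rate-hops carried by the network."""
    pre = sum(src_rates) * hops_to_agg                      # before aggregation
    post = sum(src_rates) * agg_ratio * hops_agg_to_sink    # after aggregation
    return pre + post

# 10 sources at 1 Mbps each, 5 hops end to end, 4:1 aggregation:
near_sources = transported_rate_hops([1.0] * 10, 1, 4, 0.25)
near_sink = transported_rate_hops([1.0] * 10, 4, 1, 0.25)
```

Placing the aggregation VF one hop from the sources carries 20 rate-hops in this toy setting versus 42.5 when it sits one hop from the sink, illustrating the placement incentive stated above.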
However, since there are only a limited number of locations where they can be instantiated, it is vital to carefully configure the data plane logical topology such that the resource utilization efficiency (RUE) is maximized. In this case, given a set of NFV-enabled candidate PoPs and a traffic load with a specified demand originating from each RAN node (the aggregated traffic from all MTC devices served by the RAN node), SDT computes the data plane logical topology for the MTC service.

Below we report our numerical and simulation results. Some additional parameters applied in the performance evaluation are introduced along the way; due to space limitations, we cannot provide all the parameters. We emphasize, however, that whenever a comparative study is conducted, the results are derived under the same setup for fairness.

A. Numerical results

Fig. 4: Numerical results. (a) RUE and iteration ratio vs. traffic load; (b) RUE ratio (RS/SDT) vs. number of candidate PoPs; (c) relative RUE vs. number of service instances.

Figure 4(a) shows the RUE performance, characterized by the ratio of the total traffic rate over the weighted sum cost of link and node usage. Note that the RUE result shown in Fig. 4(a) is normalized with respect to the performance obtained with an accelerated branch-and-bound (AB&B) technique, which is expected to provide a close-to-optimal result. Specifically, the applied AB&B technique uses the minimum variation in the objective function value as the criterion for bounding and disallows backtracking for fast convergence. Also shown in Fig. 4(a) is the number of iterations required by the proposed SDT algorithm for convergence, normalized with respect to the iterations required by the AB&B approach. Both of these performance indicators are shown as functions of the per-flow traffic load. From the RUE perspective, it is clear that the proposed heuristic technique trails the optimal result, with the performance gap increasing with the traffic load. This is because, as the load increases, the number of VF instances, and consequently the number of PoPs, considered as part of the solution search space increases. Since the heuristic technique eliminates VF instances on a cluster basis as opposed to individually, a certain marginal loss in optimality (a maximum loss of 6% is observed) ensues when the per-flow MTC traffic rate is set to 20 Mbps. On the positive side, however, the heuristic allows a significant reduction, by a factor as large as 3/4, in the number of iterations needed for convergence.

Next, to highlight the consequence of not selecting the PoPs and VF instantiations optimally, the RUE performance obtained with the SDT algorithm is compared against that obtained with a random PoP selection technique. While SDT jointly solves the HMCF and NCM sub-problems, the random selection (RS) technique determines the PoPs arbitrarily from a given set of candidate PoPs. The performance comparison between these two techniques is shown in Fig. 4(b) in terms of the RUE ratio (RS/SDT) versus the number of candidate PoPs available, for different values of the cost minimization objective weight ω. Note that decreasing ω implies less importance placed on minimizing the consumption of PoP-related resources and hence may result in a more liberal outcome in VF instantiation and PoP utilization. Here, since both the SDT and RS techniques identify the PoPs for service provisioning from the same candidate set, the performance gap between SDT and RS is not significant when the number of candidate PoPs available is relatively low.
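The RUE metric used above — total traffic rate over a weighted sum cost of link and node usage — can be illustrated with a toy computation. The convex weighting by ω below is our assumption for illustration; the text only states that ω weights the cost minimization objective toward PoP-related (node) resources:

```python
def rue(total_rate, link_cost, node_cost, omega):
    """Resource utilization efficiency: carried traffic divided by a
    weighted sum of node and link usage costs. The convex combination
    is an assumed form; a higher omega emphasizes PoP (node) cost."""
    return total_rate / (omega * node_cost + (1.0 - omega) * link_cost)

# With node cost dominating, raising omega lowers the resulting RUE:
low_omega = rue(100.0, 40.0, 80.0, 0.2)   # 100 / (0.2*80 + 0.8*40)
high_omega = rue(100.0, 40.0, 80.0, 0.8)  # 100 / (0.8*80 + 0.2*40)
```

This matches the qualitative behavior discussed around Fig. 4(b): a high ω penalizes wasteful PoP usage more heavily, which is precisely where random PoP selection suffers most.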
However, as the candidate set is enlarged, the RS approach incurs substantial performance losses, especially when the cost weight ω is relatively high (i.e., ω = 0.8). This is because, while SDT enables the traffic load to be adequately balanced when distributed among the VFs and PoPs, the RS approach may use the network resources wastefully, widening the performance gap.

The RUE performance achieved with the SDT technique when supporting multiple service instances (i.e., SFCs) on the same network is shown in Fig. 4(c). Each service instance corresponds to an MTC SFC containing the two VFs. To show the service embedding gain, the RUE is shown as a relative value, normalized with respect to the absolute RUE value achieved when there is only one service instance. Additionally, the relative RUE achieved with the RS technique, where the PoPs are arbitrarily selected for VF placement, is also shown in Fig. 4(c) for comparison. In general, regardless of the technique applied, the RUE increases with the number of service instances because of the potential to support a higher amount of combined traffic from all services while utilizing the same available network resources. The difference between the two techniques becomes more pronounced when evaluated in terms of the rate of change of the RUE with respect to the number of service instances and the maximum number of services simultaneously supported for a given traffic load. While the SDT technique compactly packs an adequate number of VFs into the given set of candidate PoPs for all supported services, RS lacks the flexibility to balance the traffic load channeled through the VFs instantiated at the selected PoPs. Although performing optimal traffic distribution among the VFs instantiated at the arbitrarily chosen PoPs allows the RS technique to approximate the RUE performance of SDT under low load (i.e., 5 Mbps per flow), the RUE of RS decreases significantly as the load increases to 20 Mbps, where only a maximum of 5 services can be supported by RS compared to 9 for SDT.

B. Simulation results

For the end-to-end traffic performance simulations, a mixed traffic profile consisting of MTC and background traffic is assumed. The background traffic is generated in both the uplink and downlink directions to represent an underlying video streaming service. For the MTC traffic, the 3GPP asynchronous transmission model, equivalent to constant bit rate (CBR), is applied. To simulate different network loading conditions, the MTC rate is varied from 5 Mbps to 20 Mbps per flow, while the background traffic rate is fixed at 150 Mbps per flow. For assessing the achievable performance gain, a fixed network topology (i.e., the case without SDT), modeled after the Evolved Packet Core (EPC), is used, in which certain CN nodes are fixed as SGWs.

Fig. 5: MTC traffic end-to-end performance. (a) Flow throughput; (b) mean packet delay.

The simulation results for the MTC traffic are shown in Fig. 5 in terms of the throughput and mean delay performance achieved when optimizing the VF placement and PoP selection via SDT. From the results, it is observed that SDT achieves a performance gain exceeding 25%, especially under high load conditions when assessed in terms of delay. Although the 50th percentile throughput achieved with SDT in the high load scenario deviates by a margin of 1.5 Mbps from the required demand of 20 Mbps, the case without SDT trails further behind with a loss of 3 Mbps.

Fig. 6: Background traffic end-to-end performance. (a) Flow throughput; (b) mean packet delay.

The performance experienced by the background traffic is shown in Fig. 6 for different MTC traffic loading conditions. Similar to the MTC traffic, the background throughput undergoes a larger deviation from its mean value of 150 Mbps as the overlaying MTC traffic load increases. However, by applying SDT for the MTC traffic, it is feasible to reduce the background traffic throughput variance, as seen in Fig. 6(a). As for the packet delay performance shown in Fig. 6(b), the difference between the cases with and without SDT is marginal when the MTC traffic load is relatively low, at 5 Mbps and 10 Mbps. The benefit of using SDT becomes more pronounced when the MTC load is increased to 20 Mbps, where a reduction in delay comparable to that of the MTC traffic is seen at the 50th percentile. In conclusion, utilizing the SDT technique not only enhances the network resource utilization for the benefit of the network operator but also improves the end-to-end traffic performance of both the MTC and the background services.

VII. CONCLUSION

In this paper, we investigated the problem of efficiently creating network slices within a given pool of network resources using the SDT technique. A network slice, in this regard, corresponds to a particular service composed of a set of VFs configured as an ordered SFC and rendered using NFV and SDN principles. In contrast to the conventional notion of SFC, this paper incorporates the novel concept of elastic SFC, which contains VFs that may be optional or recursive, in addition to the mandatory ones. These distinct function chaining requirements are incorporated into the underlying SDT problem formulation and solved effectively using a fast multi-stage heuristic algorithm. From the numerical results, it is concluded that the proposed SDT algorithm is highly beneficial and practical for addressing the network slicing problem in future networks due to its i) fast convergence, ii) improved network resource utilization efficiency, and iii) ability to simultaneously support a significantly higher number of services. Furthermore, the end-to-end traffic simulations reveal that the applied SDT mechanism enhances not only the performance of the foreground service traffic but also that of the underlying background traffic. Overall, the insights gained from the techniques and results presented in this paper motivate further research opportunities for making the best possible use of scarce network resources at both the link and node levels without adversely affecting the traffic performance of the instantiated services with diverse SFC-related requirements.

REFERENCES

[1] S. Sezer, S. Scott-Hayward, P.-K. Chouhan, B. Fraser, D. Lake, J. Finnegan, N. Viljoen, M. Miller, and N. Rao, "Are We Ready for SDN? Implementation Challenges for Software-Defined Networks," IEEE Communications Magazine, vol. 51, no. 7, pp. 36-43, 2013.
[2] H. Hawilo, A. Shami, M. Mirahmadi, and R. Asal, "NFV: State of the Art, Challenges and Implementation in Next Generation Mobile Networks (vEPC)," IEEE Network, vol. 28, no. 6, pp. 18-26, 2014.
[3] F. Hu, Q. Hao, and K. Bao, "A Survey on Software-Defined Network and OpenFlow: From Concept to Implementation," IEEE Communications Surveys & Tutorials, vol. 16, no. 4, pp. 2181-2206, 2014.
[4] Open Platform for NFV, https://www.opnfv.org
[5] H. Zhang, S. Vrzic, G. Senarath, N.-D. Dao, H. Farmanbar, J. Rao, C. Peng, and H. Zhuang, "5G Wireless Network: MyNET and SONAC," IEEE Network, vol. 29, no. 4, pp. 14-23, 2015.
[6] H. Tullberg, et al., "METIS System Concept: The Shape of 5G to Come," IEEE Communications Magazine, 2015.
[7] S. Clayman, E. Maini, A. Galis, A. Manzalini, and N. Mazzocca, "The Dynamic Placement of Virtual Network Functions," Proc. IEEE NOMS, 2014.
[8] S. Mehraghdam, M. Keller, and H. Karl, "Specifying and Placing Chains of Virtual Network Functions," Proc. IEEE CLOUDNET, 2014.
[9] Z. Xiao, W. Song, and Q. Chen, "Dynamic Resource Allocation Using Virtual Machines for Cloud Computing Environment," IEEE Trans. Parallel and Distributed Systems, vol. 24, no. 6, pp. 1107-1117, 2013.
[10] S. Agarwal, M. Kodialam, and T.V. Lakshman, "Traffic Engineering in Software Defined Networks," Proc. IEEE INFOCOM, 2013.
[11] M. Abu Sharkh, M. Jammal, A. Shami, and A. Ouda, "Resource Allocation in a Network-Based Cloud Computing Environment: Design Challenges," IEEE Communications Magazine, vol. 51, no. 11, pp. 46-52, 2013.
[12] J.W. Jiang, T. Lan, S. Ha, M. Chen, and M. Chiang, "Joint VM Placement and Routing for Data Center Traffic Engineering," Proc. IEEE INFOCOM, 2012.
[13] R. Riggio, T. Rasheed, and R. Narayanan, "Virtual Network Functions Orchestration in Enterprise WLANs," Proc. IFIP/IEEE IM, 2015.
[14] M. Xia, M. Shirazipour, Y. Zhang, H. Green, and A. Takacs, "Network Function Placement for NFV Chaining in Packet/Optical Datacenters," Journal of Lightwave Technology, vol. 33, no. 8, 2015.
[15] M.C. Luizelli, L.R. Bays, L.S. Buriol, M.P. Barcellos, and L.P. Gaspary, "Piecing Together the NFV Provisioning Puzzle: Efficient Placement and Chaining of Virtual Network Functions," Proc. IFIP/IEEE IM, 2015.
[16] A. Fischer, J.F. Botero, M.T. Beck, H. de Meer, and X. Hesselbach, "Virtual Network Embedding: A Survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 4, 2013.
[17] J. Halpern and C. Pignataro, "Service Function Chaining (SFC) Architecture," IETF RFC 7665, Oct. 2015. https://tools.ietf.org/html/rfc7665
[18] S. Sen, C. Joe-Wong, S. Ha, and M. Chiang, Smart Data Pricing, Wiley, New Jersey, USA, 2014.
[19] Y. Chen, S. Iyer, X. Liu, D. Milojicic, and A. Sahai, "Translating Service Level Objectives to Lower Level Policies for Multi-Tier Services," Cluster Computing, vol. 11, no. 3, pp. 299-311, 2008.
[20] D. Dolev, D.G. Feitelson, J.Y. Halpern, R. Kupferman, and N. Linial, "No Justified Complaints: On Fair Sharing of Multiple Resources," Proc. ITCS, pp. 68-75, 2012.
[21] M. Stillwell, F. Vivien, and H. Casanova, "Virtual Machine Resource Allocation for Service Hosting on Heterogeneous Distributed Platforms," Proc. IEEE IPDPS, pp. 786-797, 2012.
[22] T. Wood, L. Cherkasova, K. Ozonat, and P. Shenoy, "Profiling and Modeling Resource Usage of Virtualized Applications," Proc. ACM/IFIP/USENIX Middleware, pp. 366-387, 2008.
[23] J.D. Cowan, D. Seymour, and J. Carlington, "Method and System for Determining Computing Resource Usage in Utility Computing," US patent application No. 61/107,557, 6Fusion USA Inc., Oct. 22, 2009.
[24] D.F. Rogers, R.D. Plante, R.T. Wong, and J.R. Evans, "Aggregation and Disaggregation Techniques and Methodology in Optimization," Operations Research, vol. 39, no. 4, pp. 553-582, 1991.
[25] J. Clausen, "Branch and Bound Algorithms - Principles and Examples," Department of Computer Science, University of Copenhagen, Denmark, 1999. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.5.7475
