INTERNATIONAL JOURNAL OF ELECTRONICS, INFORMATION AND SYSTEMS, VOL.12, NO.2, February 2010
Implementation of the Radio Network Controller Signaling Protocol Stack using Cross Layer Design

Dr. George Tselikis
[email protected]
4Plus Technologies S.A., Greece
+30-210-6198583, +30-210-2220084

Abstract—Layering is the dominant design methodology for communication protocol stacks. This architecture is characterized by a stack of protocol layers, where each protocol undertakes specific tasks by using the services made available by the layers below it and providing new services to the upper layers. However, by retaining a strictly modular architecture, where each layer keeps its independence and communicates solely with its adjacent layers, the implementation of the protocol stack may have a negative impact on the overall system performance. Recently, the cross-layer design approach has been introduced. This paper presents implementation details and performance measures of the cross-layer design used for the production of a commercial 3G Radio Network Controller (RNC) simulator, in order to obtain performance gains.

Index Terms—Cross-layer design, 3G, RNC, signaling, protocol stack.
1. Introduction

The layering architecture is the most prevalent methodology for the design of communication protocol stacks. In a strictly layered architecture, like the OSI or TCP/IP model, each layer is implemented as a distinct module which makes use of the services provided by its adjacent lower layer and provides new services to its upper layer. This architecture allows only direct communication between adjacent layers, via a specific set of exchanged primitives; communication between non-adjacent layers is considered a violation of the layering principle and is not allowed. However, a strictly layered design is not flexible enough and may result in an inefficient implementation of a protocol suite. As new, challenging networking environments, such as next-generation wireless networks, attract the interest of researchers and communication system designers, the inefficiencies of layered protocol stacks come to the fore. Thus, the alternative option of cross-layer design has been introduced recently.

In short, cross-layer design refers to protocol design done by actively exploiting the dependence between protocol layers to obtain performance gains [1]. This is unlike layering, where the protocols at the different layers are designed independently. The concept behind cross-layering is rather intuitive. Instead of treating a layer as a completely independent functional entity, information can be shared among layers. Protocols can be designed by violating the reference architecture, for example, by allowing direct communication between protocols at non-adjacent layers or by sharing variables between layers. The ability to share information across layers is the central aspect of cross-layer design. So, instead of a mere replacement, cross-layering can be seen as an enhancement of the layered approach. The ultimate goal is to preserve the key characteristics of a layered architecture and, in addition, to allow the information of one layer to be used to improve the performance of a different layer's protocol.

Most published papers present cross-layer approaches at a theoretical level. The innovation of this paper is that it goes beyond theory by presenting implementation details and gains from the cross-layer design applied for the development of a real commercial product, a 3G-RNC simulator [2]. The cross-layer design was adopted to make our system as fast as possible; one major performance goal of this system was the transmission/reception of signaling messages at very high rates, e.g. 500 messages per second.

This paper is organized as follows. Section 2 presents related work on cross-layer design and section 3 describes the protocol stack of the RNC simulator. Section 4 describes the adopted cross-layer architecture and presents implementation code in C++ that may help any researcher or developer working on similar projects.
2. Related Work

A large number of cross-layer design proposals have appeared in the literature recently. In [1] the authors discuss the basic types of cross-layer design with examples drawn from the literature and categorize the initial proposals on how cross-layer interactions may be implemented. In [3] cross-layer interaction is achieved through the transfer of Internet Control Message Protocol (ICMP) messages. These ICMP messages are generated by some module running on the system and, when a preregistered event occurs, the event-related information is propagated to the upper layers through ICMP messages. In [4] the authors propose a method, named Cross-Layer Signaling Shortcuts (CLASS), that enables direct communication between non-adjacent layers. In [5] the authors propose a framework for further enhancements of the traditional IP-based protocol stack to meet current requirements in all-IP wireless networks. MobileMan [6] presents a core component called Network Status, which functions as a repository for information that can be shared among the protocol layers. The ECLAIR architecture proposed in [7] provides a guideline for designing and implementing cross-layer feedback on a mobile device. In ECLAIR, a tuning layer (TL) for each layer provides an interface to read and update the protocol's data structures, which determine its behavior. TLs are used by protocol optimizers (POs), which register for events with TLs. The
TLs notify the registered POs whenever an event occurs. In [8] the authors attempt to distill a few general principles for cross-layer design in an effort to improve the performance of wireless networks.
3. RNC Simulator

UMTS [9] provides a seamless communication scheme for the integration of a wide range of services with diverse QoS characteristics. The fundamental concept of UMTS is the separation of the radio access functionality from the Core Network (CN) functionality. In order to keep the access network independent from the CN, 3GPP introduced a new interface, the Iu interface [10]. The UMTS Terrestrial Radio Access Network (UTRAN) [11] provides user access to the UMTS CN through the Iu interface, integrates radio resource management functions and supports the necessary control and transport protocol mechanisms for user data transfer. The Radio Network Controller (RNC) is in charge of controlling the use of the radio resources and performing handover functions. The Iu interface specifies all the necessary procedures for the interconnection of RNCs with CN access points and the inter-working with existing networks. It also allows the access network to keep all radio-access-technology-dependent functionality internal and to hide mobility functions from the CN. The Iu interface towards the Packet Switched (PS) domain is
called Iu-PS, and the Iu interface towards the Circuit Switched (CS) domain is called Iu-CS. This paper describes the cross-layer architecture of the RNC simulator towards the Iu-PS interface. This product, developed by 4Plus [2,12], emulates the RNC and UE functionality by supporting the Iu-PS protocol stack of figure 2.
3.1 Non Access Stratum (NAS)

The general description of 3GPP layer 3 defines functional models for the Call Control (CC) of CS connections, Session Management (SM) for GPRS services, and Mobility Management (MM) and Radio Resource Management (RR) for CS and GPRS services. RR functions are used to control, maintain and supervise the physical connections that allow point-to-point communication between the network and a User Equipment (UE). This includes the cell selection/reselection and handover procedures. MM functions are used to establish, maintain and release connections between the UE and the network, over which
user information will be exchanged. The MM entities support the mobility of the user terminal, e.g. by informing the network of its present location and providing user identity confidentiality. CC functions are used to establish, maintain and release CS connections. Finally, SM functions are used to establish, modify and release Packet Data Protocol (PDP) contexts towards the PS domain. The above functional set is realized in the upper layer of the signaling stack, referred to as NAS [13]. The NAS messages exchanged between the CN and the UE are transported transparently by the RNC in the payload of RANAP messages.

3.2 Radio Access Network Application Part (RANAP)

The service that the Access Stratum provides to the Non-Access Stratum (NAS) for data transfer between the UE and the CN is referred to as Radio Access Bearer (RAB). The layer that is mainly responsible for the overall RAB management, in terms of setting up, modifying and releasing RABs, is RANAP [14]. The RANAP layer handles all RAB-related procedures and conveys NAS messages transparently between the CN and the UE without interpreting them.

3.3 SCCP/MTP3b/SAAL
The SCCP [15] connection-oriented services are used for the transfer of the upper-layer signaling messages between the CN and the RNC. An SCCP connection is established each time the UE needs to communicate with the CN and no SCCP connection exists between them. Upper-layer signaling messages are transferred transparently through this SCCP connection. The SCCP uses the routing services of MTP3b [16] and the SAAL [17] transmission, retransmission and reordering mechanisms, which enable the reliable transfer of the signaling messages with protection from loss, mis-insertion, corruption and disordering.
4. Cross-Layer Architecture

The layer-by-layer propagation approach across the protocol stack is not efficient. For example, the intermediate layers have to be involved even if only the link layer and the application layer are actually targeted. This causes unnecessary processing overhead and propagation latency [4]. The proposed approach enables protocols belonging to different layers to cooperate by sharing link-status information, while still maintaining separation between the layers in protocol design. This reference architecture exploits the advantages of a cross-layer design, while still satisfying the layer-separation principle.

In the following, some examples of how this architecture violates the strict OSI layered model are presented.

4.1. Direct communication of non-adjacent layers

a) The application layer (App) informs the SSCOP layer at runtime about its delay requirements, the rate at which it intends to send/receive signaling messages to/from the CN, and the average transmitted/received message size. Thus, the SSCOP layer may estimate at runtime the size of its transmission, retransmission and receive queues, in order to avoid overflow cases and the allocation of unused memory.

b) The application informs the SCCP layer at runtime about the number of simulated users that will perform signaling procedures (e.g. Attach Request, Routing Area Update, PDP Context Activation, …). An SCCP connection should be established for the completion of each procedure, since NAS procedures are connection oriented. In our implementation each SCCP connection is represented by a call object which keeps information regarding the SCCP connection (e.g. the Source Local Reference, SLR). Thus, the SCCP may estimate whether all users can be supported by checking the available resources, and then notify the application whether the scheduled test scenario can be performed or not.
To give implementation details for case (b), the SCCP exports the following function, which is called by the application layer when a testing scenario starts:

#define MAX_SCCP_CALLS 200000

int Check_SCCP_Resources(int scenario_users)
{
    if (scenario_users > MAX_SCCP_CALLS)
        return NO_AVAIL_RESOURCE;
    …
}

c) In the OSI model, when a layer receives a message from an upper layer, it allocates a buffer, inserts its header, copies the message, and forwards the new message to the lower layer.
In our approach, when a layer must send a message to the network, it allocates one large buffer (i.e. a byte array), inserts the message into this buffer, and forwards the buffer to the lower layer, along with an offset that indicates the start of the message within the buffer. Since the message will be encapsulated as it traverses the protocol stack, the first positions of the buffer are intentionally left blank and the message starts exactly after them. When the lower layer gets the buffer, it adds its header bytes in the first blank positions, just before the included message, and forwards the buffer to the next layer, along with the offset that indicates the start of the new, merged message within the buffer. The same happens until the buffer reaches the lowest layer.

The main reason to adopt this method is to improve performance by avoiding memory allocations and copy operations as the message traverses the layers. As said, only one buffer is allocated, in the layer that sends the message, and each lower layer adds its information at the specific position indicated by the upper layer. However, this method works only if the upper layer is aware of the number of header bytes that will be added by all lower layers, in order to allocate the proper buffer size. Alternatively, it may allocate a buffer with size equal to the maximum packet size allowed by the last layer, which is the SSCOP layer. In either case, cross-layer communication between non-adjacent layers is required to exchange this information.
To give implementation details: when the RANAP layer receives a NAS message, it appends its information and forwards the new message to the SCCP layer.

// This function is defined in the RANAP layer and it is called by the NAS
// when it has to send a message to the network.
int Forward_To_RANAP(BYTE* pkt, int offset_1)
{
    // The program finds the number of RANAP information bytes (i.e. hdr_len)
    int offset_2 = offset_1 - hdr_len;
    // insert RANAP info
    pkt[offset_2] = …;
    pkt[offset_2 + 1] = …;
    pkt[offset_2 + 2] = …;
    …
    Forward_To_SCCP(pkt, offset_2);
}

4.1.1 Performance measures

If we follow the strict OSI approach, each layer should allocate a buffer to store the incoming message, like this:

int Forward_To_RANAP(BYTE* pkt, int pkt_len)
{
    // The program finds the number of RANAP information bytes (i.e. hdr_len)
    BYTE* buf = new BYTE[hdr_len + pkt_len];
    // insert RANAP info in the first 'hdr_len' positions
    buf[0] = …;
    buf[1] = …;
    …
    buf[hdr_len - 1] = …;
    memcpy(buf + hdr_len, pkt, pkt_len); // copy the NAS message at the end of the RANAP buffer
    Forward_To_SCCP(buf, hdr_len + pkt_len);
    delete [] buf;
}

Since we have 4 layers between NAS and SSCOP, this piece of code would have to be inserted in all layers down to the SSCOP. We performed an experiment with 200,000 simulated users on an Intel Pentium 2.80 GHz running Windows 2000, all sending the same NAS message (size < 200 bytes) to the network. The result is that the message of the last user leaves the SSCOP layer ~650 ms after the first user's message, which means that the memory allocation and copy operations introduce a delay that shifts the message transmission time. Instead, the described scheme with a single buffer allocation in the NAS layer does not require extra memory allocations and copy operations as the message traverses the layers. Thus, we could claim that the RNC simulator supports the 'parallel' message transmission of all simulated users, which makes our system more competitive.

4.2. Parameter sharing

In our implementation, each user in the application layer is represented by an object (i.e. a C++ class instance), which keeps user-related information (e.g. the IMSI). Each time a user performs a signaling operation, the user identifier is passed to the
SCCP layer. As mentioned, the SCCP layer creates an object to handle the SCCP connection and stores the user identifier in a private member variable. Thus, inside the SCCP layer each user maps to an SCCP connection. If the SCCP connection is established successfully, the SCCP informs the application and passes it the SCCP call identifier together with the user identifier. The application layer stores the SCCP call identifier in a member variable of the object identified by the user identifier. The result of this parameter sharing is that each user object at the application level holds the corresponding SCCP call identifier and each SCCP call object inside the SCCP layer holds the corresponding user identifier.

Let us see the benefit of this parameter sharing. When the SCCP forwards a network message to the application, it also passes the user identifier which corresponds to this SCCP connection. Thus, when the message reaches the application layer, it is directly processed by the right user object, indexed by the user identifier. In a similar way, when the application sends a user message to the network (e.g. Detach Request), it also passes the SCCP call identifier. Thus, when the message reaches the SCCP, it is directly processed by the right SCCP call object, indexed by the call identifier.
Imagine a testing scenario where more than 200,000 users are simulated, which normally requires an equivalent number of established SCCP connections. The objects, in both directions, are directly accessed through the respective identifiers, so we obtain a performance gain. To give implementation details, the indexes that allow direct access to the right SCCP call and user objects are used as follows:

class User // defined in the Application layer (each user is handled by a separate object)
{
    …
    int Process_LL_Msg(BYTE* pkt, int pkt_len);
    int SCCP_Conn_index; // Index of the corresponding SCCP call object, which is defined in the SCCP layer and is associated with a specific user
};

class SCCP_Conn // defined in the SCCP layer (each user requires a different SCCP connection)
{
    …
    int Process_UL_Msg(BYTE* pkt, int pkt_len);
    int user_index; // Index of the corresponding user object, which is defined in the Application layer and controls the SCCP call
};

struct App_Data // API information exchanged between the Application and the SCCP layers
{
    …
    BYTE* pkt;
    int pkt_len;
    int SCCP_Conn_index;
    int user_index;
};

1. When an Upper Layer (UL) message reaches the SCCP layer, the 'SCCP_Conn' object that must handle it is accessed directly via:

SCCP_Conn[App_Data->SCCP_Conn_index]->Process_UL_Msg(App_Data->pkt, App_Data->pkt_len);

2. When a Low Layer (LL) message reaches the Application layer, the 'User' object that must handle it is accessed directly via:

User[App_Data->user_index]->Process_LL_Msg(App_Data->pkt, App_Data->pkt_len);

4.2.1 Performance measures

As said, the issue is how to associate the SCCP connection to the user who controls it in the Application layer, and vice versa. With the usage of the respective indexes in the API information, the right objects are accessed directly, so no time is wasted for retrieval purposes. However, the transfer of indexes from layer to layer is not considered "right" OSI programming style. Normally, these indexes should be declared as "private" members in the respective classes and kept inaccessible from the outside world. Thus, another implementation could choose to pass other information to the SCCP in order to identify the user. For instance, this information could be the IMSI, which is unique for each simulated user.
In this case, the SCCP should keep a map table in order to associate the IMSI to the respective SCCP call object, and the API information becomes:

struct App_Data // API information exchanged between the Application and the SCCP layers
{
    …
    BYTE* pkt;
    int pkt_len;
    BYTE* IMSI;
};

We may use the …

4.3. Merging of the SSCOP and SSCF layers

…
{
    … // process the message
    …
    ReleaseMutex(evt_handle);
}
…

4.3.1 Performance measures

With the merging of the SSCOP and SSCF layers, each received message waits to be processed at a single 'wait' point, which makes things simpler and faster. If we had two layers, and hence two 'wait' points, the probability of finding a mutex locked would increase. Also, if the layers were implemented separately, we should add the delay of the function calls needed to pass a message from the SSCOP to the SSCF and vice versa. The result of the same experiment with 200,000 simulated users is that the total actual gain is about 0.1 ms.