Evaluation of existing scheduling algorithms for Real-Time service flows for WiMAX uplink
Table of Contents
Abstract
Nowadays, the demand for broadband Internet access is growing rapidly, which has given rise to many new broadband access standards. Alongside the continuing development of traditional wired broadband networks, wireless network access is expanding more and more. The wireless broadband access standard IEEE 802.16 came into existence as a result of this trend. The IEEE 802.16 standards are established for Wireless Metropolitan Area Networks (WMAN); however, a key part of 802.16, packet scheduling, still remains unspecified. This thesis therefore reviews analytical methods, used by many researchers, for evaluating the efficiency of real-time system models based on a single-server queue. The service discipline in the queue is EDF (Earliest Deadline First): real-time jobs with exponentially distributed deadlines arrive according to a Poisson process, and all jobs have deadlines until the end of service. The thesis also introduces a non-preemptive EDF scheduling algorithm and admission control for real-time polling services in WiMAX.
Acknowledgements
It has been an honor to have the chance to study in the field I have long been taught and interested in. It was clear that I had to try my best to finish this work, but without the help of many people it could not have been completed.
Firstly, I would like to thank Prof. Dr. Nguyen Dinh Thong from the University of Technology, Sydney, for his helpful and enthusiastic instructions throughout the period of time that I worked on this thesis.
I am also very grateful to Mr. Nguyen Quoc Tuan, M.E. of the Department of Telecommunications, College of Technology, Vietnam National University Hanoi, for his sincere help and for providing the best possible environment during the time I studied and researched in the Faculty of Electronics and Telecommunications.
Next, I am very thankful to Dr. Nguyen Thi Minh Tam, Ms. Nguyen Hai Ha, B.A. and Ms. Tran Thanh Thu, B.A. from the College of Foreign Languages, Vietnam National University Hanoi, who have helped me correct the English of this thesis.
I would also like to thank all my teachers and peers that helped me during the time that I have spent here at the university.
Last but not least, I also want to give my sincere thanks to my family who have constantly supported me during my four years studying far away from home.
Hanoi, date month year 2008
Nguyen Minh Khue
List of figures
Figure 1.1: Simple scheduling architecture
Figure 2.1: TDD frame structure
Figure 2.2: Correspondence between the CID and SFID
Figure 2.3: Illustration of service flows and connections
Figure 2.4: Classification and CID mapping
Figure 2.5: Header suppression at the sending entity
Figure 2.6: Header suppression mechanism at the receiving entity
Figure 2.7: Illustration of PHS operation
Figure 2.8: DSC MAC management message used for the signalling of a PHS rule
Figure 2.9: Quality of Service model in IEEE 802.16
Figure 2.10: Service flow transition diagram
Figure 2.11: Medium Access Control architecture of the Base and Subscriber Station
Figure 3.1: State-transition-rate diagram for Markov chain M
Figure 3.2: The modified view of the non-preemptive EDF queue
Figure 3.3: Hierarchical structure of bandwidth allocation [24]
Figure 3.4: rtPS database structure in the scheduling database module [24]
Figure 3.5: Token Bucket mechanism
Figure 4.1: P-EDF analytic results with µθ = 4
Figure 4.2: P-EDF analytic results with µθ = 8
Figure 4.3: Loss probability (analytic)
Figure 4.4: Bandwidth allocation for an rtPS connection
Figure 4.5: Arrival and service curves for an rtPS connection
List of tables
Table 1: CID ranges as defined in IEEE 802.16-2004
Table 2: Possible values of the PHS support field
Glossary
Base Station (BS): Generalized equipment set providing connectivity, management, and control of the subscriber station (SS).
Broadband: Having instantaneous bandwidths greater than around 1 MHz and supporting data rates greater than about 1.5 Mbps.
Broadband Wireless Access (BWA): Wireless access in which the connection(s) capabilities are broadband.
Burst Profile: set of parameters that describe the uplink or downlink transmission properties associated with an interval usage code. Each profile contains parameters such as modulation type, forward error correction (FEC) type, preamble length, guard times, etc.
Connection: A unidirectional mapping between base station (BS) and subscriber station (SS) medium access control (MAC) peers for the purpose of transporting the traffic flow of a service. Connections are identified by a connection identifier (CID). All traffic is carried on a connection, even for service flows that implement connectionless protocols, such as Internet Protocol (IP).
Connection Identifier (CID): A 16-bit value that identifies a connection to equivalent peers in the MAC of the base station (BS) and subscriber station (SS). It maps to a service flow identifier (SFID), which defines the Quality of Service (QoS) parameters of the service flow associated with that connection.
Downlink: The direction from the base station (BS) to the subscriber station (SS).
Downlink Map (DL-MAP): A MAC message that defines burst start times for both time division multiplexing (TDM) and time division multiple access (TDMA) by a subscriber station (SS) on the downlink.
Frame: A structured data sequence of fixed duration used by some PHY specifications. A frame may contain both an uplink subframe and a downlink subframe.
Frequency Division Duplex (FDD): A duplex scheme in which uplink and downlink transmissions use different frequencies but are typically simultaneous.
Protocol Data Unit (PDU): The data unit exchanged between peer entities of the same protocol layer. In the downward direction, it is the data unit generated for the next lower layer; in the upward direction, it is the data unit received from the next lower layer.
Service Access Point (SAP): The point in a protocol stack where the services of a lower layer are available to its next higher layer.
Service Data Unit (SDU): The data unit exchanged between two adjacent protocol layers. In the downward direction, it is the data unit received from the next higher layer; in the upward direction, it is the data unit sent to the next higher layer.
Service Flow (SF): A unidirectional flow of medium access control (MAC) service data units (SDUs) on a connection that is provided a particular Quality of Service (QoS).
Service Flow Identifier (SFID): A 32-bit quantity that uniquely identifies a service flow to both the subscriber station (SS) and base station (BS).
Subscriber Station (SS): Generalized equipment set providing connectivity between subscriber equipment and a base station.
Time Division Duplex (TDD): A duplex scheme where uplink and downlink transmissions occur at different times but may share the same frequency.
Uplink: The direction from a subscriber station to the base station.
Uplink MAP (UL-MAP): A MAC message that defines the uplink usage in terms of the offset of each burst relative to the allocation start time (AST).
Acronyms
AAS Adaptive Antenna Systems
ACK Acknowledgement
AK Authorization Key
ATM Asynchronous Transfer Mode
ARQ Automatic Repeat Request
BE Best Effort
BPI+ Baseline Privacy Interface Plus
BS Base Station
BWA Broadband Wireless Access
CBR Constant Bit Rate
CID Connection Identifier
CP Complete partitioning
CS Complete sharing
DCD Downlink Channel Descriptor
DHCP Dynamic Host Configuration Protocol
DOCSIS Data Over Cable Service Interface Specification
DSA Dynamic Service Addition
DSC Dynamic Service Change
DSD Dynamic Service Deletion
ertPS Extended real-time Polling Service
FCFS First Come, First Served
FIFO First In, First Out
GPC Grant Per Connection
GPSS Grant Per Subscriber Station
HEC Header Error Check
IEEE Institute of Electrical and Electronic Engineers
IP Internet Protocol
LAN Local Area Network
LoS Line of Sight
MAC Medium Access Control
NLoS Non Line of Sight
nrtPS non-real time Polling Service
PHS Payload Header Suppression
PHSF Payload Header Suppression Field
PHSI Payload Header Suppression Index
PHSM Payload Header Suppression Mask
PHSV Payload Header Suppression Valid
PHSS Payload Header Suppression Size
PKM Privacy Key Management
PHY Physical Layer
PMP Point to Multipoint
QoS Quality of Service
SAP Service Access Point
SFID Service Flow Identifier
SNMP Simple Network Management Protocol
SS Subscriber Station
TDMA Time Division Multiple Access
TDD Time Division Duplexing
TFTP Trivial File Transfer Protocol
VBR Variable Bit Rate
VoIP Voice Over IP
RLC Radio Link Control
RTG Receive/Transmit Transition Gap
RTP Real-time Transport Protocol
rtPS real-time Polling Service
UCD Uplink Channel Descriptor
UDP User Datagram Protocol
WiMAX Worldwide Interoperability for Microwave Access
WLAN Wireless Local Area Network
WPAN Wireless Personal Area Network
Chapter 1.
Introduction to WiMAX Broadband Wireless Access and QoS Scheduling
There has recently been a considerable growth in demand for high-speed wireless Internet access, which has caused the emergence of new wireless technologies for short distances (i.e. IEEE 802.11) and also wireless technologies for long and medium distances (i.e. IEEE 802.16).
Wireless technologies for long and medium distances, in particular IEEE 802.16, offer another option apart from current wired access networks such as cable modem and digital subscriber line (DSL) links. IEEE 802.16 has become a highly applicable option, as it can be deployed rapidly even in areas that wired infrastructures can hardly reach. Moreover, it covers a broad geographical area in a more economical and time-efficient manner than traditional wired systems.
In comparison with the 802.11 standard, 802.16 can serve a much greater number of simultaneous users with approximately 50 times greater (radial) coverage. The IEEE 802.16a standard has a range of up to 30 miles with data transfer speeds of up to 70 Mbps.
At the same time, customer demand for quality of service (QoS) has significantly increased as a result of the growth in broadband wireless access (BWA). BWA providers are critically concerned about the provision of QoS. The IEEE 802.16 standards appear to offer a solution to this problem by establishing a number of unique and guaranteed QoS parameters in terms of delay, jitter and throughput. This enables service providers to offer flexible and enforceable QoS guarantees, a benefit that has not been available with other fixed broadband wireless standards [1].
The IEEE 802.16 has great potential in the broadband market. Recent investment in Australia [2] indicates that IEEE 802.16 may become prevalent in the coming years. The IEEE 802.16 standards are now supported by both the IEEE and the European Telecommunications Standards Institute (ETSI) HiperMAN standard. Changes adopted by either of the two bodies are reflected in the baseline technical requirements. The standard is also supported by parallel interoperability efforts through the industry forum known as Worldwide Interoperability for Microwave Access (WiMAX).
1.1 Quality of Service Fundamentals
Integrated telecommunication networks carry traffic of several different classes, including one or more real-time traffic classes, each with its own set of traffic characteristics and performance requirements. Two different approaches have been developed to deal with this phenomenon. The first approach is circuit-switched, in which sufficient resources are allocated to each connection to handle its peak rate. This guarantees that the connection will obtain the quality of service (QoS) it requires, but at the cost of under-utilizing the network resources. The second popular approach is the packet-switched approach, in which traffic from all sources is broken into packets and statistical multiplexing techniques are used to combine all the network traffic through single switching nodes. This allows higher network utilization, but requires more sophisticated controls to ensure that the appropriate QoS is provided [3].
Packet-switched networks were originally designed to provide best effort services, but later the demand for differentiated services caused the packet-switched networks to be integrated with QoS architecture. QoS architecture introduces tools to treat packets in different ways, for example, real-time packets will be given priority over non-real-time packets allowing them to traverse the network faster and arrive at the destination within their required delay bounds.
QoS in packet-switched networks can be characterized in terms of a specific set of parameters including delay, delay jitter, bandwidth, and loss or error rate. Delay is the time it takes for a packet to traverse from source to destination; it consists of transmission delay, propagation delay, and queuing delay in intermediate routers. Delay jitter is the fluctuation or variation in end-to-end delay from one packet to the next within the same packet flow. Bandwidth is a measure of the amount of data that a network allows one flow to transmit over a period of time. The drop rate of a flow measures the number of packets that are dropped due to buffer overflow, transmission errors, or expiry after exceeding their maximum delay bound.
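As a concrete illustration, the per-flow metrics above can be computed from packet timestamps. The helper below is a hypothetical sketch; the function name and the simple "expired packets count as dropped" rule are our own choices, not part of any standard:

```python
def qos_metrics(send_times, recv_times, max_delay):
    """send_times/recv_times: per-packet timestamps; recv_times[i] is None
    if packet i was lost. max_delay: the flow's delay bound (assumed rule:
    packets exceeding it count as dropped)."""
    delays = []
    dropped = 0
    for s, r in zip(send_times, recv_times):
        if r is None or r - s > max_delay:   # lost, or expired past its bound
            dropped += 1
        else:
            delays.append(r - s)
    avg_delay = sum(delays) / len(delays)
    # jitter: variation in delay between consecutive delivered packets
    jitter = sum(abs(a - b) for a, b in zip(delays, delays[1:])) / max(len(delays) - 1, 1)
    drop_rate = dropped / len(send_times)
    return avg_delay, jitter, drop_rate

# four packets sent 1 s apart; the third is lost
avg, jit, loss = qos_metrics([0.0, 1.0, 2.0, 3.0],
                             [0.10, 1.30, None, 3.20], max_delay=0.5)
```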
The ability to manage congestion and maintain QoS in a packet-switched network requires the collaboration of many components in the QoS architecture. The three main components are as follows:
Admission Control determines whether a new request for resources can be granted or not based on the knowledge of total network capacity and the already accepted flows. It has a critical role of limiting the number of flows admitted into the network so that each individual flow obtains its required QoS.
Packet Scheduling is a critical component in any QoS architecture, as the packets traverse different switches (or routers) along their way through the network. It determines the order in which packets belonging to different flows transmit on the output link, thus ensuring that packets from different application meet their QoS constraints. The basic scheduler architecture is shown in figure 1.1, in which several inputs are buffered and there is a single scheduler (server). An optimal scheduling mechanism will provide the necessary QoS guarantees required by different classes of traffic while efficiently utilizing the network resources.
Buffer Management has the responsibility of discarding one or more incoming packets before the output buffer overflows, in order to improve the performance of the network. One of the most common packet drop strategies is Random Early Detection (RED) [4].
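A minimal sketch of the RED drop decision mentioned above (the threshold and probability values are illustrative defaults of our own, not taken from [4]):

```python
import random

def red_drop_probability(avg_q, min_th, max_th, max_p):
    """Classic RED shape: no drops below min_th, linearly increasing drop
    probability between the thresholds, certain drop at or above max_th."""
    if avg_q < min_th:
        return 0.0
    if avg_q >= max_th:
        return 1.0
    return max_p * (avg_q - min_th) / (max_th - min_th)

def admit_packet(avg_q, min_th=5, max_th=15, max_p=0.1):
    """Randomly decide whether to enqueue a packet given the average
    queue length (illustrative parameter values)."""
    return random.random() >= red_drop_probability(avg_q, min_th, max_th, max_p)
```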
Figure 1.1 Simple scheduling architecture.
1.2 QoS in IEEE 802.16
The IEEE 802.16 is designed with the aim of providing guaranteed QoS. There are a number of features included in the current standard; however, the details of some of these features are still unspecified. In this section, we first describe the current QoS architecture of the IEEE 802.16 and subsequently the unspecified features.
In IEEE 802.16, the QoS architecture is part of the MAC layer. For wireless networks it is natural to integrate the QoS architecture with the MAC protocol, which coordinates communication over the shared wireless medium. The 802.16 MAC provides QoS differentiation for the different types of applications that might operate over 802.16 networks. IEEE 802.16 defines four classes of service to support QoS in 802.16-2004 fixed WiMAX and one extra class in 802.16e-2005 Mobile WiMAX [5].
Unsolicited Grant Service (UGS) is designed to support real-time data streams consisting of fixed-size data packets issued at periodic intervals, such as T1/E1 and Voice over IP without silence suppression. UGS is prohibited from using any contention requests, and there is no explicit bandwidth request issued by subscriber station (SS). The base station (BS) provides fixed size access slots at periodic intervals to the UGS flows. However, the reserved bandwidth may be wasted when a corresponding UGS flow is inactive.
Real-Time Polling Service (rtPS) is designed to support real-time data streams consisting of variable-sized data packets issued at periodic intervals, such as Moving Pictures Expert Group (MPEG) video. The mandatory QoS service flow parameters for this scheduling service are the minimum reserved traffic rate, defined as the minimum amount of data transported on the connection over a period of time, and the maximum latency, the upper bound on the waiting time of a packet in the network. The rtPS flows are polled by the BS through unicast request polling frequently enough to meet the delay requirements of the service flows.
Extended Real-Time Polling Service (ertPS) is designed for real-time traffic with variable data rate (such as VoIP service with silence suppression) over the WiMAX network.
Non Real-time Polling Service (nrtPS) is designed to support delay-tolerant data streams consisting of variable-sized data packets for which a minimum data rate is required. Like rtPS flows, nrtPS flows are polled through unicast request polling, but on a time scale of one second or less. The nrtPS flows may also receive a few request polling opportunities during network congestion, and are also allowed to use contention requests.
Best Effort Service (BE) is designed to support data streams for which no minimum service level is required and which may therefore be handled on a bandwidth-available basis. The BE flows are allowed to use contention request opportunities. The applications in this class receive whatever bandwidth remains after it has been allocated to all other classes.
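The service classes above can be summarized compactly. The table below is an illustrative recap only; the field names and example applications are our own shorthand, not the standard's parameter encodings:

```python
# Summary of the 802.16 scheduling services described above
# (field names are illustrative, not the standard's TLVs).
SCHEDULING_SERVICES = {
    "UGS":   {"grant": "fixed periodic", "polling": None, "contention": False,
              "example": "T1/E1, VoIP without silence suppression"},
    "rtPS":  {"grant": "on request", "polling": "unicast (frequent)",
              "contention": False, "example": "MPEG video"},
    "ertPS": {"grant": "periodic, adjustable", "polling": "unicast",
              "contention": False, "example": "VoIP with silence suppression"},
    "nrtPS": {"grant": "on request", "polling": "unicast (~1 s)",
              "contention": True, "example": "delay-tolerant data"},
    "BE":    {"grant": "leftover bandwidth", "polling": None,
              "contention": True, "example": "web browsing"},
}
```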
1.2.1 Admission Control
In IEEE 802.16 before a subscriber station (SS) can initiate a new connection, it must first make a request to the base station (BS) with the service contracts required. The BS may reject the request based on its ability to uphold the requirements. The admission control mechanism at the BS is not specified in the standard. It is possible that the BS admits a connection based on statistical QoS, where, on average, all connections would not have their QoS satisfied, or that the BS could have a stricter model where QoS is absolutely guaranteed, even in the worst case scenario. The admission control problem will be discussed in more detail in chapter 2.
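As a sketch of the two admission policies just mentioned, one might compare the sum of admitted minimum reserved rates against link capacity, with an overbooking factor modelling the statistical case. This is our own illustration of the idea, not a mechanism from the standard:

```python
def admit_connection(requested_rate, admitted_rates, capacity, overbooking=1.0):
    """Strict admission when overbooking == 1.0: the sum of minimum reserved
    rates may never exceed capacity (worst-case guarantee). Setting
    overbooking > 1.0 models the 'statistical QoS' policy, where the BS
    bets that not all flows peak simultaneously."""
    return sum(admitted_rates) + requested_rate <= capacity * overbooking
```

For example, with 10 Mbps of capacity and flows of 3 and 4 Mbps already admitted, a 4 Mbps request is rejected under the strict policy but admitted with a 1.2 overbooking factor.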
1.2.2 Scheduling
IEEE 802.16 specifies only the outbound traffic scheduling goals, and not the methodology. Essentially, traffic is to be serviced such that service contracts have no or minimal disruption.
Bandwidth can be requested at the initialization of a connection, as for a UGS traffic class connection; for other classes of traffic, whose uplink burst profiles are more dynamic, the SSs indicate their current uplink bandwidth requirements for each connection.
When an SS is granted bandwidth, it may choose how this is allocated among its connections. This means that connections may “borrow” bandwidth from other connections within the same SS. The only exception is UGS traffic, which is granted a fixed amount of bandwidth per frame and may not use more than this. It may of course use less, in which case the unused bandwidth passes to other connections.
1.3 Research Motivation and objectives
This research is motivated by the fact that user expectations of wired and wireless networks have increased with regard to a large variety of services and applications with different QoS requirements. The implementation of QoS guarantees in such networks is a prerequisite for best supporting services and applications over such networks. It is believed that this topic requires extensive research, both qualitative and quantitative, to find out the most suitable model of QoS architecture.
Another motivation of this research is the fact that an ideal QoS architecture needs to seamlessly support the same QoS constraints across heterogeneous types of networks, including those designed for wired and wireless environments. The emphasis of the research is on broadband wireless access networks, in which stringent requirements are imposed on the QoS architecture due to the numerous limitations of the wireless channel.
A further motivating factor is that IEEE 802.16 does not specify the methodology to use for scheduling and admission control. Previous research in this area has only recently gained momentum, with many researchers proposing algorithms for efficient bandwidth allocation; however, limitations are still apparent in their work.
The objective of this thesis is to give an evaluation of existing scheduling algorithms for real-time service flows for WiMAX uplink. A new priority admission control strategy is also introduced, which admits the connections into the system according to their QoS requirements. The new strategy can be used to minimize the connection blocking probability of higher priority connections which is acceptable from the viewpoints of both users and service providers.
1.4 Thesis organization
This thesis is divided into 4 chapters: Chapter 1 introduces Broadband Wireless Access (BWA). Chapter 2 describes the MAC layer functionalities of the IEEE 802.16. It also presents an overview of literature in the area of traffic scheduling and admission control and QoS architecture model. Chapter 3 is the description of the proposed scheduling algorithms and admission control framework for broadband wireless access (BWA). Finally, Chapter 4 explains the simulation models and the simulation methodology used in this study.
Chapter 2. IEEE 802.16 Standards
2.1 Protocol layer in 802.16
2.1.1 Physical layer
The WiMAX physical layer is based on Orthogonal Frequency Division Multiplexing (OFDM). OFDM is the transmission scheme of choice to enable high-speed data, video, and multimedia communications and is used by a variety of commercial broadband systems, including DSL, Wi-Fi, Digital Video Broadcast-Handheld (DVB-H), and MediaFLO, besides WiMAX. OFDM is an elegant and efficient scheme for high data rate transmission in a non-line-of-sight or multipath radio environment. In this section, we cover the basics of OFDM and provide an overview of the WiMAX physical layer.
The IEEE 802.16-2004 standard specifies 5 variants of the PHY layer distinguished by whether the PHY layer is Single Carrier (SC) or uses OFDM technology. The variants with a brief description follow:
Wireless Metropolitan Area Network – Orthogonal Frequency Division Multiplexing (WirelessMAN-OFDM): The WirelessMAN-OFDM PHY is based on OFDM technology and is designed mainly for fixed SSs deployed in residential areas and businesses. The OFDM PHY supports sub-channelization in the uplink with 16 sub-channels. It also supports both TDD and FDD frame structures, with full-duplex and half-duplex (H-FDD) FDD options. The modulation schemes supported are BPSK, QPSK, 16-QAM and 64-QAM.
Wireless Metropolitan Area Network – Orthogonal Frequency Division Multiple Access (WirelessMAN-OFDMA): This variant is based on OFDMA technology and offers sub-channelization in both uplink and downlink. The OFDMA PHY supports both TDD and FDD frame structures, with full-duplex and half-duplex (H-FDD) FDD options. It differs from WirelessMAN-OFDM in that it supports sub-channelization in both the uplink and downlink directions, allowing broadcast messages to be transmitted at the same time as data.
Wireless High Speed Unlicensed Metro Area Network (WirelessHUMAN): This specification of the PHY layer is similar to the OFDM based layer except it is focused on Unlicensed National Information Infrastructure (UNII) devices and other unlicensed bands.
Wireless Metropolitan Area Network – Single Carrier (WirelessMAN-SC): This variant specifies the use of the technology in the frequency range 10-66GHz. The PHY Layer design supports Point to Multi-Point (PMP) architecture whereby the BS acts as the coordinator for all the SSs in its cell. In this design, the BS transmits a Time Division Multiplexing (TDM) signal in which the SSs are allocated time slots serially. This variant provides support for both TDD and FDD frame structures. Both TDD and FDD support adaptive burst profiles whereby the modulation and coding options can be dynamically assigned on a burst by burst basis.
Wireless Metropolitan Area Network - Single Carrier Access (WirelessMAN-SCa): This variant of the PHY layer uses single carrier modulation in the 2-11GHz frequency range and it is intended for Non Line-Of-Sight (NLOS) operations. It supports both FDD and TDD frame structures with TDMA in the uplink and TDM or TDMA in the downlink. The PHY specification includes Forward Error Correction (FEC) coding for both uplink and downlink and framing structures that allow improved channel estimation performance over NLOS operations.
In IEEE 802.16e-2005, the PHY layer has been enhanced to support mobility through the use of Scalable OFDMA (S-OFDMA). S-OFDMA improves the performance of OFDMA256 for NLOS applications by:
Improving NLOS coverage by using advanced antenna diversity schemes, and hybrid ARQ (HARQ),
Improving coverage by achieving higher capacity through the use of Adaptive Antenna Systems (AAS) and MIMO technology,
Increasing system gain by using denser subchannelization, thereby improving indoor penetration,
Allowing tradeoff between coverage and capacity by using DL subchannelization,
Introducing high-performance coding techniques such as Turbo Coding and Low-density Parity Check (LDPC), enhancing security and NLOS performance.
OFDM allows smart antenna operations to be performed on flat-fading sub-carriers, so complex equalizers are not required to compensate for frequency-selective fading. In fact, MIMO-OFDM/OFDMA is envisioned to be the cornerstone of 4G broadband communication systems. The full range of smart antenna technologies supported includes:
Beamforming: multiple antennas transmit weighted signals to improve coverage and directivity.
Space-Time Code (STC): orthogonal coding across multiple transmit antennas for spatial diversity gain and reduced fade margin, e.g. the Alamouti code for the 2-antenna case.
Spatial Multiplexing (SM): multiple data streams are transmitted over multiple antennas. If the receiver also has multiple antennas (the MIMO case), it can separate the streams, so their throughputs add almost linearly. However, MIMO performance depends on the channel and degrades under poor fading conditions. STC, on the other hand, being a transmit diversity technique, provides large coverage regardless of channel conditions, but does not improve the peak data rate. In the uplink, an MS has only one antenna, but two MSs can collaboratively transmit in the same slot as if their two data streams were spatially multiplexed; this is called UL collaborative SM.
2.1.2 MAC Layer
The MAC layer in WiMAX basically provides intelligence to the PHY layer. It contains 3 sublayers: the service-specific convergence sublayer, the MAC common part sublayer and the privacy sublayer.
Service-specific convergence sublayer: the IEEE 802.16-2004 standard specifies 2 types of service convergence sublayer for mapping services to and from the MAC layer: the ATM sublayer for mapping ATM services and the packet sublayer for mapping packet services such as IPv4, IPv6 and Ethernet. The major task of the sublayer is to map Service Data Units (SDUs) to MAC connections and to enable QoS and bandwidth allocation based on the parameters received from the upper layers. The convergence sublayer can also perform more complex tasks such as payload header suppression.
MAC common part sublayer: the MAC protocol, according to the IEEE 802.16-2004 standard, is mainly designed for Point to Multi-Point (PMP) operation. The MAC layer is connection-oriented: even connectionless services are mapped to a connection, which provides a way of requesting bandwidth and associating QoS parameters with the traffic. Each connection is identified by a 16-bit Connection Identifier (CID), which acts as the primary address used for all operations. Each SS also has a 48-bit MAC address, which is mainly used as an equipment identifier. Upon entering the network, an SS is assigned 3 management connections in each of the uplink and downlink directions. These connections are:
Basic connection: this connection is responsible for transferring critical MAC and Radio Link Control (RLC) messages.
Primary Management Connection: this connection is responsible for transferring longer and more delay-tolerant control messages such as those used for authentication and connection setup.
Secondary Management Connection: this connection is used for transfer of standard based management messages such as Dynamic Host Configuration Protocol (DHCP), Trivial File Transfer Protocol (TFTP) and Simple Network Management Protocol (SNMP).
The functionalities supported by the Common Part Sublayer are:
Channel Acquisition: the MAC protocol includes an initialization procedure that allows an SS, upon installation, to scan its frequency list to find an operating channel. Once the SS is synchronized, it will periodically search for Downlink Channel Descriptor (DCD) and Uplink Channel Descriptor (UCD) messages that inform the SS about the modulation scheme used on the channel.
Ranging and Negotiation: after determining the parameters to be used for ranging, the SS searches for ranging opportunities by scanning the UL-MAP messages in every frame. The ranging request transmitted by the SS allows the BS to determine the distance between itself and the SS.
SS authentication and Registration: each SS contains a manufacturer issued digital certificate that establishes a link between the 48-bit MAC address of the SS and its public RSA key. After verifying the identity of the SS, if the SS is admitted into the network, the BS will respond to the request with an Authorization Reply containing an Authorization Key (AK), encrypted with the SS’s public key.
Bandwidth Grant and Request: the MAC layer distinguishes between 2 classes of SS: one accepts bandwidth grants for a connection and the other accepts bandwidth grants for the SS as a whole. For the Grant per Connection (GPC) class, bandwidth is granted explicitly to a connection of the SS. With the Grant per SS (GPSS) class, bandwidth is granted to the SS, and the SS then decides how to distribute it among its connections: it can use the bandwidth for the connection that requested it or for another connection. The IEEE 802.16-2004 standard allows SSs to request bandwidth via a contention mechanism or a piggyback mechanism; in the piggyback mechanism, the SSs attach their bandwidth requests to data packets. A bandwidth request can also be either incremental or aggregate. When the BS receives an incremental bandwidth request, it adds the quantity of bandwidth requested to its current perception of the bandwidth needs of the SS. When the BS receives an aggregate bandwidth request, it replaces its perception of the bandwidth needs of the SS with the quantity of bandwidth requested. Piggyback bandwidth requests are always incremental.
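The BS-side bookkeeping for incremental versus aggregate requests can be sketched as follows; the class and method names are our own, not taken from the standard:

```python
class BandwidthBookkeeper:
    """BS-side view of each SS's outstanding bandwidth need, updated as
    described above: incremental requests add to the current perception,
    aggregate requests replace it."""
    def __init__(self):
        self.need = {}

    def on_request(self, ss_id, amount, aggregate):
        if aggregate:
            self.need[ss_id] = amount                            # replace
        else:
            self.need[ss_id] = self.need.get(ss_id, 0) + amount  # add

bk = BandwidthBookkeeper()
bk.on_request("SS1", 100, aggregate=False)   # piggyback requests are incremental
bk.on_request("SS1", 50, aggregate=False)    # perceived need is now 150
bk.on_request("SS1", 80, aggregate=True)     # BS resets its view to 80
```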
Frame structure and MAP messages: the IEEE 802.16 MAC supports both TDD and FDD frame structures. The MAC starts building the downlink subframe with the DL-MAP (Downlink MAP) and UL-MAP (Uplink MAP) messages. The DL-MAP indicates the PHY transitions on the downlink, while the UL-MAP indicates bandwidth allocation and burst profiles in the uplink. In a Time Division Duplexing (TDD) frame structure, the frame is divided into uplink and downlink subframes along the time axis (see Figure 2.1). The frame starts with the downlink subframe, followed by a short gap called the transmit/receive transition gap (TTG). The downlink subframe contains a preamble followed by a header and one or more downlink bursts. Each uplink burst contains a preamble that allows the BS to synchronize with the corresponding SS. The uplink subframe is followed by another short gap, the receive/transmit transition gap (RTG), before the BS can start transmitting again.
Figure 2.1 TDD frame structure
Packing and Fragmentation of MAC SDUs: this functionality is executed in tandem with the bandwidth allocation process to maximize efficiency, flexibility and effectiveness. Fragmentation is the process by which a MAC SDU is divided into one or more SDU fragments. Packing is the process in which multiple MAC SDUs are packed into a single MAC PDU payload. Either functionality can be initiated by the BS in the downlink or by the SS in the uplink.
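The fragmentation and packing operations described above can be sketched as follows; the payload sizes are illustrative, and real 802.16 PDUs carry fragmentation/packing subheaders that this simplification omits.

```python
# Illustrative sketch of MAC SDU fragmentation and packing; simplified,
# without the actual 802.16 PDU headers and subheaders.

def fragment(sdu: bytes, max_payload: int):
    """Split one SDU into fragments no larger than max_payload."""
    return [sdu[i:i + max_payload] for i in range(0, len(sdu), max_payload)]

def pack(sdus, max_payload: int):
    """Greedily pack whole SDUs into PDU payloads up to max_payload bytes."""
    pdus, current = [], b""
    for sdu in sdus:
        if current and len(current) + len(sdu) > max_payload:
            pdus.append(current)   # close the current PDU payload
            current = b""
        current += sdu
    if current:
        pdus.append(current)
    return pdus

frags = fragment(b"A" * 10, 4)          # three fragments: 4 + 4 + 2 bytes
pdus = pack([b"ab", b"cd", b"efg"], 5)  # two PDUs: b"abcd" and b"efg"
```

Both operations are driven by the per-frame bandwidth allocation: fragmentation fits a large SDU into a small grant, while packing fills a large grant with several small SDUs.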
Scheduling Services: the common part sublayer maps each connection to a scheduling service, where a scheduling service is associated with pre-determined QoS parameters. The IEEE 802.16-2004 standard specifies four scheduling services: Unsolicited Grant Service (UGS), real-time Polling Service (rtPS), non-real-time Polling Service (nrtPS) and Best Effort (BE); the IEEE 802.16e amendment adds a fifth, extended real-time Polling Service (ertPS). The bandwidth allocation mechanism for the UGS service as specified in the IEEE 802.16-2004 standard requires the BS to send fixed-size grants to the SSs periodically.
Privacy sublayer: provides authentication, secure key exchange and encryption. IEEE 802.16's privacy protocol is based on the Privacy Key Management (PKM) protocol of the Data Over Cable Service Interface Specification (DOCSIS) Baseline Privacy Interface (BPI+) but has been enhanced to fit seamlessly into the IEEE 802.16 MAC protocol and to better accommodate stronger cryptographic methods, such as the recently approved Advanced Encryption Standard [6].
2.2 Admission Control
Our approach to supporting the QoS requirements of different classes in IEEE 802.16 BWA runs in two directions: one is an admission control mechanism and the other is packet scheduling. Scheduling mediates the low-level contention for service between packets of different classes, while admission control determines the acceptance or blocking of a new connection. These two levels of control are related: for example, if an admission control policy allows too much traffic to enter the network, then no scheduler can provide the requested QoS for all classes. A functioning admission control is thus a prerequisite for any guarantee of packet-level QoS. Both are important in ensuring that the requested QoS can be provided for a certain number of connections, a number dictated by the amount of available bandwidth in the system.
This chapter describes the admission control policy, i.e. the techniques used to determine the availability of uplink resources for providing the QoS requirements specified by a requesting connection.
2.2.1. Objective
There is no question about the necessity of admission control in any network that aims to provide QoS support for its connections. As IEEE 802.16 aims to do so, it is essential to have some form of admission control embedded in its QoS architecture; however, no admission control procedure is defined in the current standard. In this work we suggest an efficient admission control module that targets network utilization while giving priority to rtPS connection requests, since they carry data of real-time applications.
2.2.2 Overview of Admission Control
We stress the need for admission control in order to control the usage and allocation of bandwidth resources among traffic classes requiring certain QoS guarantees. Admission control is a key component in determining whether a new request for a connection can be granted according to the current traffic load of the system. This assumes great significance when the BS needs to maintain a certain promised level of service for all admitted (served) connections. If the admission control admits too few connections, system resources are wasted. On the other hand, if too many connections are allowed to contend for resources, the performance of the already admitted connections degrades rapidly in the presence of new ones. Therefore, a judicious decision-making mechanism for allocating bandwidth to different classes of service is needed.
In IEEE 802.16, before an SS can initiate a new connection, or change or delete an already admitted connection, it must first make a request to the BS. As mentioned earlier in chapter 1, four types of MAC layer services exist in IEEE 802.16. Service flows can be created, changed, or deleted through the issue of dynamic service addition (DSA), dynamic service change (DSC) and dynamic service deletion (DSD) messages. Each of these actions can be initiated by the SS or the BS and is carried out through a two- or three-way handshake.
2.2.3 Admission Control Policy
The task of the admission controller is to accept or reject arriving connection requests so as to maximize channel utilization, by accepting as many connections as possible, while keeping the QoS of every connection at the level specified in its traffic profile. In other words, it ensures that the QoS of already admitted connections will not be affected by the decision made. Although this may seem a very intuitive and simple procedure, it has great influence on the QoS of the admitted connections.
This issue has been studied extensively in the context of wired and wireless networking. Although the focus of this research is not admission control, a well-rounded introduction seems indispensable whenever IEEE 802.16 is concerned; thus we concisely introduce some of the fundamental work that needs to be done in this area, as no procedure is defined in IEEE 802.16.
Admission control algorithms can be categorized into three classes: complete sharing, complete partitioning, and hybrid policies, which combine the other two.
Complete sharing (CS): allows all users equal access to the available bandwidth at all times. This strategy results in maximum utilization of the available bandwidth, especially in heavily loaded networks, which is what network providers aim at. However, it does not differentiate between connections of different priority, which is a perverse outcome when a connection of one class needs significantly less bandwidth than others. In such situations it might be desirable to reject calls of this type to increase the probability of future acceptance of a larger call. In other words, it is not a fair strategy for the wider-bandwidth users, as all requests are dealt with at the same priority.
Complete partitioning (CP): divides the available bandwidth into non-overlapping pools in accordance with the type of the user's connection. The number of users in each class is therefore limited to a maximum number M, upon which the admission decision is made. This policy allows more control of the relative blocking probabilities at the expense of overall network utilization.
Hybrid policies: provide a compromise between the two policies above by subdividing the available bandwidth into sections. Part of the bandwidth is completely shared and the other part is completely partitioned. Depending on the policy adopted, the partitioned division may be dedicated to some or all classes. This makes it easier to live up to the QoS requirements of the different user types while maintaining higher network utilization.
Of the admission control policies mentioned above, CP and CS involve no complexity: the admission decision is a matter of checking a single condition. The hybrid strategy, however, is a more open-ended problem. As an example, in the following we suggest an algorithm that could be considered a hybrid method for admission control.
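As a hedged illustration of such a hybrid policy (a sketch, not the algorithm suggested by this work), the following Python snippet reserves a dedicated partition per class and falls back to a shared pool; the capacities and class names are assumed values.

```python
# Sketch of a hybrid admission policy: part of the capacity is a shared pool
# (complete sharing) and part is reserved per class (complete partitioning).
# Capacities and class names are illustrative assumptions.

class HybridAdmission:
    def __init__(self, shared_capacity, reserved):
        self.shared_free = shared_capacity
        self.reserved_free = dict(reserved)  # class -> dedicated capacity

    def admit(self, cls, bandwidth):
        """Try the class's dedicated partition first, then the shared pool."""
        if self.reserved_free.get(cls, 0) >= bandwidth:
            self.reserved_free[cls] -= bandwidth
            return True
        if self.shared_free >= bandwidth:
            self.shared_free -= bandwidth
            return True
        return False  # block the connection

ac = HybridAdmission(shared_capacity=10, reserved={"rtPS": 5, "BE": 2})
assert ac.admit("rtPS", 4)      # served from the rtPS partition
assert ac.admit("rtPS", 3)      # partition exhausted, falls to shared pool
assert not ac.admit("BE", 20)   # exceeds all available capacity: blocked
```

The dedicated partitions bound the blocking probability of each class, while the shared pool keeps overall utilization high when one class is idle.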
2.3 Services and service flows
2.3.1 Connections and service flow
The CS provides any transformation or mapping of external network data received through the CS Service Access Point (SAP) into MAC SDUs received by the MAC Common Part Sublayer (CPS) through the MAC SAP (see Figure 1). This includes classifying external network Service Data Units (SDUs) and associating them with the proper MAC Service Flow Identifier (SFID) and Connection Identifier (CID). Classification and mapping are based on two fundamental concepts of the 802.16 MAC layer:
Connection: A connection is a MAC-level connection between a BS and an SS (or MS), or vice versa. It is a unidirectional mapping between BS and SS MAC peers for the purpose of transporting a service flow's traffic. A connection carries only one type of service (e.g. voice and email cannot share the same MAC connection). A connection is identified by a 16-bit Connection IDentifier (CID).
Service flow: A Service Flow (SF) is a MAC transport service that provides unidirectional transport of packets on the uplink or on the downlink. A service flow is identified by a 32-bit SFID (Service Flow IDentifier). The service flow defines the QoS parameters for the packets (PDUs) that are exchanged on the connection.
Figure 2.2 shows the relation between the SFID and the CID: only admitted and active service flows (see the definitions below) are mapped to a 16-bit CID. In other terms:
Figure 2.2: Correspondence between the CID and SFID
An SFID maps to zero CIDs (provisioned service flows) or to one CID (admitted or active service flows).
A CID maps to a service flow identifier (SFID), which defines the QoS parameters of the service flow associated with that connection.
The definitions of connection and service flow in the 802.16 standard allow different classes of QoS to be found easily for a given element (SS or BS), with different levels of activation (see Figure 2.3). More details will now be given about connections (and CIDs) and service flows.
Figure 2.3: Illustration of service flows and connections
2.3.2 Connection Identifiers (CIDs)
A Connection IDentifier (CID) identifies the connection onto which every MAC SDU of a given communication service is mapped. The CID is a 16-bit value that identifies a unidirectional connection between equivalent peers in the MAC layers of a BS and an SS. All 802.16 traffic is carried on a connection. The CID can thus be considered a connection identifier even for nominally connectionless traffic like IP, since it serves as a pointer to destination and context information. The use of a 16-bit CID permits a total of 64K connections within each downlink and uplink channel. Several CID ranges are defined in the standard (see Table 1), and some CIDs have a specific meaning, such as the ranging, basic, primary and secondary management CIDs.
Table 1: CID ranges as defined in Reference IEEE 802.16-2004.
Initial ranging (0x0000): used by the SS and BS during the initial ranging process.
Basic CID (0x0001 – m): each SS has a basic CID, used for short, time-urgent MAC management messages; the same CID value is assigned to both the downlink and uplink connections.
Primary management (m+1 – 2m): the primary management connection is used to exchange longer, more delay-tolerant MAC management messages.
Transport CIDs and secondary management CIDs (2m+1 – 0xFE9F): used for data transfer and for the secondary management connection.
Multicast CIDs (0xFEA0 – 0xFEFE): for the downlink multicast service, the same value is assigned to all SSs on the same channel that participate in this connection.
AAS initial ranging CID (0xFEFF): a BS supporting AAS (Advanced Antenna System) uses this CID when allocating an AAS ranging period (using AAS_Ranging_Allocation_IE).
Multicast polling CIDs (0xFF00 – 0xFFF9): an SS may be included in one or more multicast polling groups for the purposes of obtaining bandwidth via polling; these connections have no associated service flow.
Normal mode multicast CID (0xFFFA): used in the DL-MAP to denote bursts for transmission of downlink broadcast information to normal mode SSs.
Sleep mode multicast CID (0xFFFB): used in the DL-MAP to denote bursts for transmission of downlink broadcast information to sleep mode SSs; may also be used in MOB_TRF-IND messages.
Idle mode multicast CID (0xFFFC): used in the DL-MAP to denote bursts for transmission of downlink broadcast information to idle mode SSs; may also be used in MOB_PAG-ADV messages.
Fragmentable broadcast CID (0xFFFD): used by the BS for transmission of management broadcast information with fragmentation; the fragment subheader uses an 11-bit FSN on this connection.
Padding CID (0xFFFE): used for transmission of padding information by the SS and BS.
Broadcast CID (0xFFFF): used for broadcast information transmitted on the downlink to all SSs.
2.3.3 Service Flows
A Service Flow (SF) is a MAC transport service that provides unidirectional transport of packets on the uplink or on the downlink. It is identified by a 32-bit SFID (Service Flow IDentifier). A service flow is characterized by a set of QoS parameters. The QoS parameters include details of how the SS may request uplink bandwidth allocations and the expected behaviour of the BS uplink scheduler.
2.3.3.1 Service Flow Attributes
A service flow is partially characterised by the following attributes:
Service Flow ID. An SFID is assigned to each existing service flow. The SFID serves as the identifier for the service flow in the network.
CID. Mapping a CID to an SFID exists only when the connection has an admitted or active service flow (see below).
ProvisionedQoSParamSet. This defines a QoS parameter set that is provisioned via means that the standard assumes to be outside of its scope. The standard states that this could be part of the network management system. For example, the service (or QoS) class name is an attribute of the ProvisionedQoSParamSet. There are five QoS classes, the fifth having been added by the 802.16e amendment.
AdmittedQoSParamSet. This defines a set of QoS parameters for which the BS, and possibly the SS, are reserved resources. The principal resource to be reserved is bandwidth, but this also includes any other memory or time-based resource required to subsequently activate the flow.
ActiveQoSParamSet. This defines a set of QoS parameters defining the service actually being provided to the service flow. Only an active service flow may forward packets. The activation state of the service flow is determined by the ActiveQoSParamSet. If the ActiveQoSParamSet is null, then the service flow is inactive.
Authorisation module. This is a logical function within the BS that approves or denies every change to QoS parameters and classifiers associated with a service flow. As such, it defines an ‘envelope’ that limits the possible values of the AdmittedQoSParamSet and ActiveQoSParamSet.
2.3.3.2 Types of Service Flow
The standard has defined three types of service flow:
Provisioned service flows. This type of service flow is known via provisioning by, for example, the network management system. Its AdmittedQoSParamSet and ActiveQoSParamSet are both null.
Admitted service flow. The standard supports a two-phase activation model that is often used in telephony applications. In the two-phase activation model, the resources for a call are first 'admitted' and then, once the end-to-end negotiation is completed, the resources are 'activated'. An admitted service flow has resources (primarily bandwidth) reserved by the BS for its AdmittedQoSParamSet, while its ActiveQoSParamSet is still null.
Active service flow. This type of service flow has resources committed by the BS for its ActiveQoSParamSet. Its ActiveQoSParamSet is non-null.
2.3.3.3 Classification and Mapping
Classification is the process by which a MAC SDU is mapped on to a particular connection for transmission between MAC peers. The mapping process associates a MAC SDU with a connection, which also creates an association with the service flow characteristics of that connection. Classification and mapping mechanisms exist in the uplink and downlink. In the case of a downlink transmission, the classifier will be present in the BS and in the case of an uplink transmission it is present in the SS. A classifier is a set of matching criteria applied to each packet entering the WiMAX/802.16 network. The set of matching criteria consists of some protocol-specific packet matching criteria (a destination IP address, for example), a classifier priority and a reference to a CID. If a packet matches the specified packet matching criteria, it is then delivered to the SAP for delivery on the connection defined by the CID. The service flow characteristics of the connection provide the QoS for that packet. The classification mechanism is shown in figure 2.4.
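The classification step described above can be sketched as follows; the matching fields, priorities and CID values are illustrative assumptions rather than the standard's classifier encodings.

```python
# Sketch of downlink classification: each classifier carries matching
# criteria, a priority and a CID; the highest-priority matching rule wins.
# Field names are illustrative, not the standard's classifier TLVs.

class Classifier:
    def __init__(self, priority, cid, dst_ip=None, dst_port=None):
        self.priority, self.cid = priority, cid
        self.dst_ip, self.dst_port = dst_ip, dst_port

    def matches(self, pkt):
        if self.dst_ip is not None and pkt.get("dst_ip") != self.dst_ip:
            return False
        if self.dst_port is not None and pkt.get("dst_port") != self.dst_port:
            return False
        return True

def classify(pkt, classifiers):
    """Return the CID of the highest-priority matching classifier, or None."""
    for c in sorted(classifiers, key=lambda c: -c.priority):
        if c.matches(pkt):
            return c.cid
    return None  # no match: drop or deliver on a default connection

rules = [Classifier(10, cid=0x2001, dst_ip="10.0.0.5"),
         Classifier(1, cid=0x2002)]  # catch-all rule with lowest priority
assert classify({"dst_ip": "10.0.0.5"}, rules) == 0x2001
assert classify({"dst_ip": "10.0.0.9"}, rules) == 0x2002
```

Once the CID is resolved, the packet inherits the QoS behaviour of the service flow bound to that connection, as the text describes.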
Payload Header Suppression (PHS)
The packets delivered to OSI model layer 2 may have very large headers, sometimes as long as 120 bytes. This is the case for some RTP/UDP/IPv6 packets (RTP, Real-time Transport Protocol; UDP, User Datagram Protocol). This header information is very often repetitive (redundant) and so should not be transmitted on a scarce resource such as a radio channel, which should be reserved for useful information. In 3G cellular systems this process is known as header compression and decompression. In the 802.16 standard, the PHS process suppresses the repetitive (redundant) parts of the payload header in the MAC SDU of the higher layer.
Figure 2.4: Classification and CID mapping
The receiving entity restores the suppressed parts. Implementation of the PHS capability is optional. Figure 2.5 shows the PHS mechanism at the sending entity. Suppression of parts of the header leads to a compressed header. The receiver has to restore the header before properly using the received packet (Figure 2.6).
Figure 2.5: Header suppression at the sending entity.
Figure 2.6: Header suppression mechanism at the receiving entity.
To indicate whether the PHS is present or not, the PHS support field is used. This parameter indicates the level of PHS support. The PHS support field appears in some MAC management messages, namely Registration Request and Registration Response. Table 2 shows the possible values of the PHS support field. The default value is 0 (no PHS).
Table 2: Possible values of the PHS support field
0: no PHS support
1: ATM PHS
2: Packet PHS
3: ATM and Packet PHS
PHS Rules
PHS rules application and signaling are different for the two defined CS specifications: ATM CS and packet CS:
The ATM standard defines two modes: the VP-switched mode (Virtual Path) and the VC-switched mode (Virtual Channel). The same PHS operation is applied for the two modes; the only difference between them is the payload header size after suppression. When the PHS is turned off, no part of any ATM cell header including the Header Error Check (HEC) field is suppressed.
In the Packet CS mode, if the PHS is enabled at the MAC connection, each MAC SDU starts with a Payload Header Suppression Index (PHSI), an 8-bit field that references which Payload Header Suppression Field (PHSF) to suppress. Once a PHSF has been assigned to a PHSI, it cannot be changed. To change the value of a PHSF on a service flow, a new PHS rule must be defined. Figure 2.7 shows the operation of the PHS in a Packet CS.
It is the responsibility of the higher-layer service entity to generate a PHS rule that uniquely identifies the suppressed header within the service flow. It is also the responsibility of the higher-layer service entity to guarantee that the byte strings that are being suppressed are constant from packet to packet for the duration of the active service flow.
Figure 2.7: Illustration of PHS operation.
As already mentioned, the sending entity uses the classifier to map packets into a service flow. The classifier uniquely maps packets to the PHS rule associated with this service flow. The receiving entity uses the CID and the PHSI to restore the PHSF. The PHS has a Payload Header Suppression Valid (PHSV) option to verify the payload header before suppressing it. It also has a Payload Header Suppression Mask (PHSM) option to allow the choice of bytes that cannot be suppressed.
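The mask-based suppression and restoration just described can be sketched as follows, assuming a stored PHSF and a per-byte PHSM in which 1 means "suppress"; real PHS rules are negotiated per service flow and carry more fields than this simplification.

```python
# Sketch of PHS suppression/restoration with a mask (PHSM): bytes whose mask
# bit is 1 are suppressed at the sender and restored from the stored PHSF at
# the receiver. Sizes and rule storage are simplified assumptions.

def phs_suppress(header: bytes, phsm):
    """Keep only the bytes whose mask bit is 0 (the non-redundant bytes)."""
    return bytes(b for b, m in zip(header, phsm) if m == 0)

def phs_restore(compressed: bytes, phsf: bytes, phsm):
    """Rebuild the full header from the PHSF and the transmitted bytes."""
    it = iter(compressed)
    return bytes(f if m == 1 else next(it) for f, m in zip(phsf, phsm))

phsf = b"\x45\x00\x00\x00"   # stored rule: the constant header bytes
phsm = [1, 1, 0, 0]          # first two bytes are redundant (suppressed)
header = b"\x45\x00\xAB\xCD"
wire = phs_suppress(header, phsm)            # only the last two bytes are sent
assert phs_restore(wire, phsf, phsm) == header
```

The PHSV check mentioned above would compare the header bytes against the PHSF before suppressing, so that a packet whose "constant" bytes have changed is sent unsuppressed.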
The PHS rule provides PHSF, PHSI, PHSM, PHSS (Payload Header Suppression Size) and PHSV.
PHS signaling
PHS rules are created with DSA or Dynamic Service Change (DSC) MAC management messages. The BS defines the PHSI when the PHS rule is created. PHS rules are deleted with the DSC or Dynamic Service Deletion (DSD) MAC management messages. When a classifier is deleted, any associated PHS rule is also deleted. Figure 2.8 shows the use of a DSC message to signal the creation of a PHS rule and whether this rule is initiated by the BS or the SS.
In this figure, the use of the following DSC MAC management messages is introduced:
DSC-REQ. A DSC-REQ is sent by an SS or BS to dynamically change the parameters of an existing service flow. It can be used to create PHS rules.
DSC-RSP. A DSC-RSP is generated in response to a received DSC-REQ.
DSC-ACK. A DSC-ACK is generated in response to a received DSC-RSP.
The main DSC message parameters are the SFID (only for DSC-RSP and DSC-ACK), the CID, the service class name and the CS specification [7].
Figure 2.8: DSC MAC management message used for the signaling of a PHS rule.
2.4 QoS architecture model
The process of requesting and granting QoS in a network can generally be divided into two layers: the application layer and the network layer. The application layer, whose function is to provide the end-user with a simplified and standardized view of the quality level that will be granted for a given service, is not aware of the technicalities of service requirements (such as bandwidth, delay, or jitter). Nor does it depend on the technology-dependent issues associated with the actual networks that will be traversed (such as fiber-optic, wireless, or xDSL). The network layer, in contrast, deals with a set of technical QoS parameters: it maps these parameters onto network-specific requirements that have to be fulfilled to provide the end-user with the negotiated quality level. In wired IP networks, this mapping is normally performed at the network layer. However, such an approach is hardly suitable for wireless networks, where a number of factors that do not affect wired networks come into play: the network capacity is highly variable due, for instance, to environmental conditions, and the link quality experienced by different terminals is location-dependent. Consequently, implementing QoS provisioning at the MAC layer, as in IEEE 802.16, is often essential in order to gain better insight into the current technology-dependent network status and to react as soon as possible to changes that might negatively affect QoS.
In IEEE 802.16 the prominent QoS functions of network provisioning and admission control are logically located on the management plane. As already indicated, admission control is outside the scope of IEEE 802.16, which only covers the data and control planes, as illustrated in Figure 2.9. Network provisioning, the process of approving a given type of service, by means of its network-layer set of QoS parameters, that might be activated later, can be either static or dynamic. It is said to be static if the full set of services that the BS supports is decided a priori. This model is intended for a service provider who wants to specify the full set of services that its subscribers can request, by means of manual or semi-automatic configuration of the BS's management information base (MIB). With dynamic network provisioning, on the other hand, each request to establish a new service is forwarded to an external policy server, which decides whether or not to approve it. This model allows a higher degree of flexibility in terms of the types of service that the provider can offer to its subscribers, but it requires a signaling protocol between the BS and the policy server, thus incurring additional communication overhead and increased complexity.
Figure 2.9 Quality of Service model in IEEE 802.16
The admission control function, unlike the network provisioning function (which only deals with services that might be activated later, and which are therefore said to be deferred), is responsible for resource allocation. Thus, it accepts a new service only if it can provide the full set of QoS guarantees that the service has requested while keeping the QoS level of all already admitted services above the negotiated thresholds. Admission control obviously works on a smaller time scale than network provisioning. This is because network provisioning is much more complex than admission control, as shown in a recent study on an integrated end-to-end QoS reservation protocol in a heterogeneous environment with IEEE 802.16 and IEEE 802.11e devices. Test results demonstrated that the network provisioning latency of IEEE 802.16 equipment currently available on the market is in the order of several seconds, whereas the activation latency is in the order of milliseconds.
In IEEE 802.16, the set of network parameters that entirely defines the QoS of a unidirectional flow of packets resides in a service flow (SF) specification. The state of each SF can be provisioned, admitted, or active. Provisioned SFs are not bound to any specific connection, because their intended function is to serve as an indication of what types of service are available at the BS. When an application on the end-user side starts, the state of the provisioned SF becomes admitted, thus booking the resources that will shortly be needed to fulfill the application requirements. When the SF state becomes admitted, it is also assigned a connection identifier (CID) that will be used to classify the SDUs among those belonging to different SFs. However, in this phase, resources are still not completely activated; for instance, the connection is not granted bandwidth yet. This last step is performed during the activation of the SF, which happens just before SDUs from the application start flowing through the network.
Thus a two-phase model is used, where resources are booked before the application is started. At any time it is possible to "put on hold" the application by moving the state of the SF back from active to admitted. When the application stops, the SF is set to either provisioned or deleted; in either case, the one-to-one mapping between the service flow identifier (SFID) and the CID is lost, and the CID can be reassigned for other purposes. The SF transition diagram is illustrated in Figure 2.10.
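The two-phase model and the transitions described above can be sketched as a small state machine; the event names here are illustrative labels, not the DSA/DSC/DSD message names that actually drive the transitions.

```python
# Minimal sketch of the SF states (provisioned/admitted/active) and the
# transitions described in the text. Event names are illustrative.

ALLOWED = {
    ("provisioned", "admit"): "admitted",   # resources booked, CID assigned
    ("admitted", "activate"): "active",     # bandwidth actually granted
    ("active", "hold"): "admitted",         # put the application on hold
    ("active", "stop"): "provisioned",      # SFID-CID mapping is released
    ("admitted", "stop"): "provisioned",
}

def transition(state, event):
    new = ALLOWED.get((state, event))
    if new is None:
        raise ValueError(f"illegal transition: {event} from {state}")
    return new

s = "provisioned"
s = transition(s, "admit")     # booking phase
s = transition(s, "activate")  # activation phase: SDUs may now flow
s = transition(s, "hold")      # back to admitted; CID is kept
```

Only the active state forwards packets; holding an application keeps the booking (and the CID) without consuming granted bandwidth.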
Figure 2.11 shows the blueprint of the functional entities for QoS support, which logically reside within the MAC CPS of the BS and SSs.
Figure 2.10 Service flow transition diagram.
Each DL connection has a packet queue (or queue, for short) at the BS (represented with solid lines). In accordance with the set of QoS parameters and the status of the queues, the BS DL scheduler selects from the DL queues, on a frame basis, the next SDUs to be transmitted to the SSs. UL connection queues, on the other hand, reside at the SSs.
Bandwidth requests are used by the BS for estimating the residual backlog of UL connections. Based on the amount of bandwidth requested (and granted) so far, the BS UL scheduler estimates the residual backlog of each UL connection and allocates future UL grants according to the respective set of QoS parameters and the (virtual) status of the queues. However, as already introduced, although bandwidth requests are per connection, the BS grants UL capacity to each SS as a whole. Thus, when an SS receives an UL grant, it cannot deduce from the grant which of its connections the BS intended it for. Consequently, an SS scheduler must also be implemented within each SS MAC to redistribute the granted capacity to the SS's connections (Figure 2.11) [8].
Figure 2.11 Medium Access Control architecture of the Base and Subscriber Station
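As an illustration of the SS-side redistribution just described, the following sketch allocates a per-SS (GPSS) grant to connections in strict priority order; the CIDs, backlogs and priorities are assumed values, and real SS schedulers may use other disciplines.

```python
# Sketch of an SS scheduler redistributing a per-SS (GPSS) grant among its
# connections by strict priority. Values are illustrative assumptions.

def redistribute(grant, connections):
    """connections: list of (cid, backlog_bytes, priority); higher wins."""
    allocation = {}
    for cid, backlog, _prio in sorted(connections, key=lambda c: -c[2]):
        share = min(grant, backlog)  # serve the backlog as far as grant allows
        allocation[cid] = share
        grant -= share
    return allocation

conns = [(0x11, 300, 2), (0x12, 500, 1)]  # (cid, backlog, priority)
assert redistribute(600, conns) == {0x11: 300, 0x12: 300}
```

Strict priority is the simplest choice; a deficit- or weight-based discipline could be substituted to avoid starving low-priority connections.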
Chapter 3. Scheduling and Admission for Real-Time Traffic
3.1 Scheduling and admission for real-time traffic
The concept of scheduling a time-constrained job is central to both the design and the analysis of real-time systems, where job characteristics are used to assign priorities in scheduling decisions. The need for an accurate estimation of performance measures, such as the fraction of real-time jobs violating their timing constraints, has resulted in growing interest in developing models for evaluating the performance of real-time scheduling policies. A real-time job has a deadline before which it is available for service and after which it must leave the system. We consider the model of deadlines until the end of service, wherein a job is discarded and considered lost if it does not complete execution before its deadline. The loss probability can have a profound effect on the dependability and/or performability of a real-time system. Systems using this model are found in many real-life situations involving multimedia applications, high-speed packet switching networks, avionic systems, industrial process control, and many other critical and non-critical applications. Scheduling policies can be broadly categorized into preemptive scheduling, in which processing of the currently running job can be interrupted by a higher-priority job, and non-preemptive scheduling, in which an arriving job is scheduled only after the completion of the current job's execution. Despite the greater system utilization that preemptive scheduling can offer, preemption may be impossible or prohibitively expensive due to properties of hardware or software devices. For example, in high-speed packet switching networks, preemption requires the retransmission of the preempted packet. Scheduling over a shared medium such as a LAN, a WLAN or a field bus [11] such as the CAN bus [12,13] is inherently non-preemptive, because each node in the network has to ensure that the shared channel is free before it can begin transmission.
Non-preemptive processor scheduling is further exploited in lightweight multi-tasking kernels, especially in multimedia applications [14]. Non-preemptive scheduling for real-time embedded systems also has the benefits of accurate response time analysis, ease of implementation, reduced run-time overhead, and guaranteed exclusive access to shared resources and data, which eliminates both the need for synchronization and its associated overhead.
The analysis of queuing systems handling jobs with deadlines has been addressed in numerous papers [18,19], most of which focus on the FCFS (First Come First Served) scheduling policy. The analysis of scheduling policies that use information on timing constraints is of increasing interest in the literature [18,19]. Among such policies, preemptive and non-preemptive EDF have been shown to be optimal within the classes of non-idling, service-time-independent preemptive and non-preemptive scheduling policies, respectively. Although this optimality makes the analysis of EDF particularly valuable, its complexity explains why there are very few papers on the subject. Hong et al. [20] first introduced upper and lower bounds for the performance of an M/M/m/EDF+M queue (m=1 when deadlines are until the end of service). Their results were later improved in [18] for a single-server system.
This thesis reviews the method presented in [17] for Poisson arrivals in the case of deadlines until the end of service under preemptive scheduling. The method covers a wide range of input rates while remaining simple to use. It gives a relatively good approximation of an important performance measure in soft real-time systems, namely the overall loss probability due to deadline misses and/or transient faults. A key parameter in this method is αn, the rate of missing deadlines when there are n jobs in the system. This important parameter is estimated using an upper bound and a lower bound for the case of preemptive EDF, with a formulation simpler than the corresponding one in [17]. The latter formulation is then generalized to the non-preemptive EDF scheduling policy using a heuristic. To the best of our knowledge, no other analytical or approximation method for this problem exists. Comparison of the analytical and simulation results is used to assess the accuracy of the presented method.
3.1.1 System models [18].
In this thesis, the M/M/1/EDF+M queue as in [18] is used. The M/M/1/EDF+M queue is an infinite-capacity single-server queue with the EDF (Earliest Deadline First) scheduling policy, in which jobs arrive according to a Poisson process with rate λ and each job requires a service time exponentially distributed with rate µ. Moreover, a relative deadline, i.e. the interval of time between the arrival of a job and its deadline, is associated with each job. The last M indicates that the relative deadlines are exponentially distributed random variables with mean θ. A model of deadlines until the end of service is considered: a job is discarded and considered lost if it does not complete execution before its deadline, which can occur while the job is in the queue or while it is in service. A job in the queue when its deadline is reached is simply dropped; a job in service at that moment is aborted and then dropped. As the definition of earliest-deadline-first (EDF) scheduling implies, the job closest to its deadline is served first. A method for performance analysis of the preemptive EDF scheduling policy is discussed in [18], for which a simpler formulation is presented in this subsection. The analysis for preemptive EDF is given in Subsection 3.1.2; NP-EDF is presented in Subsection 3.1.3.
We begin with some notation that is used in [18] and throughout this thesis. Let ℕ denote the set of natural numbers (including 0) and ℝ+ the set of positive real numbers. We also use E(n) to denote an Erlang random variable with parameters n and µ (E(0) = 0). Thus, E(n) has the probability distribution function
F_E(n)(t) = 1 − e^(−µt) Σ_{k=0}^{n−1} (µt)^k / k!, t ≥ 0. (1)
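As a quick numerical sanity check, the Erlang distribution function of E(n) is standard and can be evaluated directly; a minimal sketch (the function name is illustrative):

```python
import math

def erlang_cdf(n, mu, t):
    """CDF of an Erlang random variable E(n): the sum of n independent
    exponential service times, each with rate mu.  E(0) is identically 0."""
    if n == 0:
        return 1.0 if t >= 0 else 0.0
    if t < 0:
        return 0.0
    # F(t) = 1 - exp(-mu*t) * sum_{k=0}^{n-1} (mu*t)^k / k!
    s = sum((mu * t) ** k / math.factorial(k) for k in range(n))
    return 1.0 - math.exp(-mu * t) * s
```

For n = 1 this reduces to the exponential CDF 1 − e^(−µt), as expected.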
For t ∈ ℝ+, Δ > 0, and n ∈ ℕ, let
Pn^miss(t, Δ) = the probability that one of the customers in the system during [t, t + Δ) misses its deadline, given that there are n customers in the system at time t. (2)
Define
αn = lim_{Δ→0+} Pn^miss(t, Δ) / Δ,
the (steady-state) rate of missing deadlines when there are n jobs in the system (including the one being served). The resulting Markov chain model of the system, M, is shown in Figure 3.1.
Figure 3.1. State-transition-rate diagram for Markov chain M.
When the population of the system is n, it can decrease to n − 1 either because the service requirement of a job is completed (with rate µ) or because a job's deadline is missed (with rate αn).
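The birth-death chain of Figure 3.1 (birth rate λ in every state; death rate µ + αn in state n, where αn is the state-dependent deadline-miss rate) can be solved numerically by truncating it at a large population. A minimal sketch follows; the truncation level n_max and the function names are assumptions of the sketch:

```python
def steady_state(lam, mu, alpha, n_max):
    """Steady-state probabilities of the birth-death chain M:
    birth rate lam in every state, death rate mu + alpha[n] in state n
    (service completion plus deadline miss).  The chain is truncated at
    n_max as an approximation of the infinite-capacity queue."""
    p = [1.0]
    for n in range(1, n_max + 1):
        p.append(p[-1] * lam / (mu + alpha[n]))
    total = sum(p)
    return [x / total for x in p]

def loss_probability(lam, mu, alpha, n_max):
    """Average rate of deadline misses divided by the arrival rate."""
    p = steady_state(lam, mu, alpha, n_max)
    return sum(alpha[n] * p[n] for n in range(1, n_max + 1)) / lam
```

With all αn = 0 the chain reduces to the ordinary M/M/1 queue, which is a useful correctness check.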
Barrer [21] first derived an expression for αn, in terms of µ and θ, for a model with deterministic customer impatience, unlimited capacity, and a Poisson arrival process. Kargahi [18] extends Barrer's result to a larger class of models, namely those with arbitrary capacity, general customer impatience, and a state-dependent Poisson arrival process. These latter results assume the FCFS policy and show that αn is independent of λ and depends only on the number of jobs in the system. We now describe how αn is calculated for deadlines until the end of service, as given in [18]. First, let ξn denote the relative deadline of the nth customer that has to wait in the queue; ξn is a random variable with probability distribution function G(.). We assume that G(0) = 0; thus, the probability distribution function of the relative deadlines is given by
Define G̃n to be the Laplace transform of Gn, i.e.,
Let αn be defined as in (4). Then, we have
We first assume that the model has an unlimited capacity. Let
Pn(t) = the probability that there are n customers in the system at time t.
We can write the forward (Chapman–Kolmogorov) equations of the chain:
dPn(t)/dt = λPn−1(t) + (µ + αn+1)Pn+1(t) − (λ + µ + αn)Pn(t), n ≥ 1,
dP0(t)/dt = (µ + α1)P1(t) − λP0(t).
Let:
σn = ∏_{k=1}^{n} λ/(µ + αk), σ0 = 1.
Then, in equilibrium, (8) simplifies to the balance equations λPn−1 = (µ + αn)Pn, n ≥ 1. Using (7), we get:
Pn = σn P0, n ≥ 0.
The normalizing condition is:
Σ_{n≥0} Pn = 1. (14)
Substituting for Pn in (14), we find:
P0 = [ Σ_{n≥0} ∏_{k=1}^{n} λ/(µ + αk) ]^(−1),
where the empty product (n = 0) equals 1.
The probability of missing a deadline in the system (loss probability) may then be obtained as:
Ploss = (1/λ) Σ_{n≥1} αn Pn,
which is the average rate of missing deadlines divided by the average rate of job arrivals. To analyze the system under the EDF policy, we need a formulation for αn of EDF as defined in (4). Next, we review a method for estimating αn for preemptive EDF in an infinite-capacity system; this estimation will then be generalized to non-preemptive EDF in the same system.
An important feature of the queuing system considered in this thesis is that each incoming customer to the system has a deadline until the end of its service. The difference between the deadline of a customer and its arrival time, referred to as a relative deadline, is assumed to be a random variable with a general probability distribution function G(.).
3.1.2 Loss rate for preemptive EDF
In this subsection, we present a method for estimating αn for preemptive EDF, denoted αn^P, in the case of deadlines until the end of service. To do so, some bounds are defined for αn^P; combining these bounds yields an estimate of the required values. In [18], an ordering between the loss rates in each state of the Markov chain M is conjectured, obtained from properties of the EDF and FCFS policies under deterministic and exponential deadline distributions. Accordingly, we take αn^D, the FCFS loss rate under deterministic deadlines, as the lower bound and αn^M, the FCFS loss rate under exponential deadlines, as the upper bound of αn^P.
These bounds satisfy αn^D ≤ αn^P ≤ αn^M. Since the exact values of αn under FCFS are known for general deadline distributions, Kargahi [18] exploits this information to estimate αn^P: the two bounds are combined linearly using a multiplier to obtain an appropriate estimate of αn^P (if the exact values of αn^P were known, solving M would yield an exact analysis of preemptive EDF). More explanation of this approach for an infinite-capacity queue is given below.
Determination of the bounds. As indicated in [23], for a fixed value of the mean relative deadline θ in an FCFS system, deterministically distributed deadlines generate the minimum loss probability among all deadline distributions. This property is also assumed to hold for the loss rate under both FCFS and EDF, which simulation results strongly confirm. On the other hand, since EDF coincides with FCFS for deterministic deadlines, we have αn^D ≤ αn^P. An upper bound for αn^P may be proposed as follows. Owing to the optimality of the preemptive EDF policy for deadlines until the end of service, which minimizes the loss probability, the conjecture is that this minimization property also holds for the loss rate, which simulation results confirm; i.e., αn^P ≤ αn^M. Therefore, we find αn^D ≤ αn^P ≤ αn^M.
Determination of the multiplier. Contrary to the case of the FCFS policy, simulation results in [18] strongly indicate that the state-dependent loss rate depends on λ under the EDF policy. Therefore, [18] uses properties of EDF together with simulation results to find a multiplier that linearly combines the bounds defined above; the multiplier must be adjusted as a function of λ. The bounds determined above may thus be combined linearly to get an appropriate estimate of αn^P as follows:
αn^P ≈ αn^D + mn (αn^M − αn^D), (19)
where the multiplier mn must be determined. As discussed previously, for the FCFS policy the loss rate is independent of λ and depends only on the number of jobs in the system.
The bounds themselves are αn^M = n/θ, since under exponential deadlines each of the n jobs in the system has memoryless remaining patience with rate 1/θ, and αn^D, which follows from Barrer's result and is expressed in terms of the Erlang distribution E(n) given by (1).
Assume a queue with the preemptive EDF scheduling policy and arrival rate λ, for which ρ = λ/µ is the arrival rate normalized with respect to µ. Moreover, let αn^P and mn be the loss rate and the corresponding multiplier for this queue, respectively. The bounds of αn^P must therefore be combined linearly using a multiplier mn that is a function of ρ. Replacing αn^P in (19) with its equivalent values obtained from simulation, denoted αn^sim, and using some simple algebraic manipulation, we find
mn = (αn^sim − αn^D) / (αn^M − αn^D). (23)
The functions used in (23), describing the two bounds of αn^P for the case of deadlines until the end of service, were given above. Next, we need to find the multiplier mn such that the resulting estimate is close enough to αn^sim. From the simulations, mn is a function of three parameters: n, µθ, and ρ.
To find a function describing the behavior of with respect to the above three parameters, the following steps are needed to be considered [18]:
Assuming n and ρ to be fixed for exponential relative deadlines, and checking the simulation results of mn as obtained in (23) for different values of the normalized mean relative deadline, the resulting values of mn vary inversely with the square root of µθ. Therefore, mn should be proportional to 1/√(µθ), as indicated in (24).
Assuming ρ and µθ to be fixed for exponential relative deadlines, and reviewing the simulation results of mn for different values of n, mn and the number of waiting jobs in the queue, n, should be inversely related, as indicated in (24).
Assuming n and µθ to be fixed for exponential relative deadlines, and checking the simulation results of mn for different values of ρ, mn and ρ should also be inversely related. Reviewing the simulation results of mn from almost no traffic to very heavy traffic intensities, 6.7/ρ^1.25 is a good estimate of the input-rate-dependent part of mn, as indicated in (24). This behavior can also be explained by properties of EDF. Owing to the dynamics of the EDF policy with respect to ρ, for very small values of ρ, αn^P converges to the upper bound αn^M; therefore, mn tends to be very large as ρ → 0. The reason is that for very light traffic intensities, EDF behaves very similarly to FCFS, and the improvement of EDF over FCFS is quite limited. On the other hand, for large values of ρ, the behavior of EDF becomes more similar to that of FCFS with a deterministic relative deadline, where αn^P converges to the lower bound; therefore, mn tends to be very small as ρ → ∞.
Comparison of the analytical and simulation results shows that for exponential relative deadlines, the formulation for mn derived in steps 1, 2, and 3 above gives a good estimate of αn^P. For other deadline distributions, however, the same formulation does not give a good enough estimate and needs to be improved.
Based on the above explanation, it is proposed in [18] that
mn = 6.7 / (ρ^1.25 · n · √(µθ)). (24)
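Putting the bounds and the multiplier together, the estimate can be sketched as follows. The closed form of the deterministic-deadline lower bound is not reproduced here, so it is passed in as an input; clipping the multiplier at 1 is an assumption of this sketch, as is the function name:

```python
import math

def edf_loss_rate(n, rho, mu, theta, alpha_lower):
    """Estimate of the state-dependent preemptive-EDF loss rate alpha_n
    (deadlines until the end of service) as a linear combination of the
    two FCFS bounds described in the text.

    alpha_lower : the lower bound alpha_n^D, the FCFS loss rate under
                  deterministic deadlines (obtained from Barrer-type
                  results; no simple closed form, so supplied here).
    The upper bound for exponential deadlines is n/theta: each of the n
    jobs has memoryless remaining patience with rate 1/theta.
    The multiplier follows steps 1-3 in the text (Eq. 24)."""
    alpha_upper = n / theta
    m = 6.7 / (rho ** 1.25 * n * math.sqrt(mu * theta))
    m = min(m, 1.0)                      # assumed clipping for light traffic
    return alpha_lower + m * (alpha_upper - alpha_lower)
```

For light traffic the clipped multiplier drives the estimate to the upper (FCFS-exponential) bound, matching the qualitative behavior described in step 3.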
3.1.3 Non-preemptive EDF
The previous subsection analyzed the preemptive EDF policy; we now turn to non-preemptive EDF. Under non-preemptive EDF, the job being served is not preempted and continues to receive service even if the deadline of an arriving job is earlier than that of the job in the server. Below we estimate αn for non-preemptive EDF, denoted αn^NP, which yields an approximation of the performance of the non-preemptive EDF scheduling policy. This view, proposed by Kargahi in [18] (assuming n > 0), is as follows:
Since service is non-preemptive, once a job starts service, the behavior of the system with respect to this job is like that of a system with the FCFS scheduling policy.
As the remaining (n−1) jobs in the system follow the EDF policy, the system can be divided into two subsystems: the first contains the non-preemptive server with rate µ and can be considered an FCFS system with capacity 1 (no waiting room); the other can be viewed virtually as a preemptive EDF system with a virtual server of rate µ' (Figure 3.2), where the job with the highest priority in the virtual system enters the FCFS subsystem whenever the latter has no job to process.
Since the job in the FCFS subsystem leaves the system either through service completion with rate µ or through a deadline miss with rate 1/θ, the rate at which this job leaves the system is µ + 1/θ.
In [18], this leaving rate is taken as the service rate of the virtual server in the preemptive EDF subsystem, at which waiting jobs are passed to the actual server; therefore, µ' = µ + 1/θ.
This view of the system leads to a loss rate of αn−1^P in the preemptive EDF subsystem, which can be calculated from (20), (21), (22) and (24) by replacing µ with µ' and ρ with ρ' = λ/µ'.
Finally, we obtain (assuming n > 0)
αn^NP = 1/θ + αn−1^P. (25)
Figure 3.2 The modified view to the non-preemptive EDF queue
Replacing αn with αn^NP in M and solving the resulting Markov chain with the method presented above, we find the loss probability for the system with the non-preemptive EDF scheduling policy, in which the probability of state n is
Pn = P0 ∏_{k=1}^{n} λ/(µ + αk^NP),
and the loss probability is
Ploss = (1/λ) Σ_{n≥1} αn^NP Pn.
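The modified view can be sketched as follows. Since deadlines are exponential, the served job misses at rate 1/θ; reading (25) as "the served job's miss rate plus the virtual EDF subsystem's loss rate" is this sketch's assumption, and the callback `alpha_p` stands in for the preemptive estimate of Subsection 3.1.2:

```python
def np_edf_loss_rate(n, lam, mu, theta, alpha_p):
    """Sketch of the non-preemptive EDF loss-rate estimate from the
    modified view of Figure 3.2.  The job in the FCFS (capacity-1)
    subsystem leaves at rate mu + 1/theta (service completion or
    deadline miss); this is taken as the virtual service rate mu' of
    the preemptive-EDF subsystem holding the other n-1 jobs.
    alpha_p(k, rho, mu) is the preemptive estimate evaluated with mu
    replaced by mu' and rho by lam/mu'."""
    if n == 0:
        return 0.0
    mu_virtual = mu + 1.0 / theta            # leaving rate of the served job
    rho_virtual = lam / mu_virtual
    queue_part = alpha_p(n - 1, rho_virtual, mu_virtual) if n > 1 else 0.0
    return 1.0 / theta + queue_part          # served job's miss rate + queue losses
```

Feeding the resulting rates into the birth-death solver of Subsection 3.1.1 then yields the NP-EDF loss probability.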
3.2 Some current scheduling algorithms for real-time polling services
Several packet scheduling algorithms for broadband wireless networks have been published recently, but in this thesis, we present a packet scheduling algorithm similar to that proposed in [24] which provides QoS support for a variety of real time applications as defined in IEEE 802.16.
IEEE 802.16 is capable of supporting multiple communication services (data, voice, video) with different QoS requirements. The medium access control (MAC) layer defines QoS signaling mechanisms and functions that control BS and SS data transmissions. Scheduling is relatively simple on the downlink (from BS to SS) because the BS is the only station transmitting during the downlink subframe: data packets are broadcast to all SSs, and an SS picks up only the packets destined to it. One of the modes of uplink arbitration (from SS to BS) uses a TDMA MAC. The BS decides the number of time slots in which each SS is allowed to transmit in an uplink subframe. This information is broadcast by the BS through the uplink map message (UL-MAP) at the beginning of each frame. The UL-MAP contains information elements (IEs) that specify the transmission opportunities, i.e. the time slots in which each SS may transmit data. The BS uplink scheduling module determines the IEs using the bandwidth request PDUs (BW-Requests) sent from the SSs to the BS. The IEEE 802.16 standard defines two modes of BW-Request transmission: contention mode, in which SSs send BW-Requests during the contention period and contention is resolved with a back-off algorithm, and contention-free mode (polling), in which the BS polls each SS and the SS replies by sending a BW-Request. Because of the predictable signaling delay of the polling scheme, the contention-free mode is suitable for real-time applications. IEEE 802.16 defines the required QoS signaling mechanisms described above, such as the BW-Request and the UL-MAP, but it does not define the uplink scheduler.
IEEE 802.16 defines five types of service flows for 802.16-2004 fixed WiMAX and an extra one, the extended rtPS, for 802.16e-2005 Mobile WiMAX, each with different QoS requirements and corresponding uplink scheduler policy:
Unsolicited grant service (UGS): this service supports constant bit-rate (CBR) or CBR-like flows such as Voice over IP. These applications require constant bandwidth allocation.
BW-Request: not required.
Uplink Scheduler: the BS determines the IEs for the UL-MAP; it allocates a fixed number of time slots in each time frame.
Real-time polling service (rtPS): this service is for real-time VBR-like flows such as MPEG video. These applications have specific bandwidth requirements as well as a deadline (maximum delay). Late packets that miss the deadline will be useless.
BW-Request: used only in the contention-free mode. The current queue size, which represents the current bandwidth demand, is included in the BW-Request.
Uplink Scheduler: not defined in the current IEEE 802.16.
Extended real-time polling service (ertPS): this service is for real-time traffic with variable data rate (such as VoIP with silence suppression) over the WiMAX network.
BW-Request: used only in the contention-free mode. The current queue size, which represents the current bandwidth demand, is included in the BW-Request.
Uplink Scheduler: not defined in the current IEEE 802.16.
Non-real-time polling service (nrtPS): this service is for non-real-time flows which require better than best-effort service, e.g. bandwidth-intensive file transfer. These applications are time-insensitive and require a minimum bandwidth allocation.
BW-Request: uses either contention-free mode or contention mode. Current queue size is included in BW-Request.
Uplink Scheduler: not defined in current IEEE 802.16.
Best effort service (BE): this service is for best-effort traffic such as HTTP. There is no QoS guarantee. The applications in this service flow receive whatever bandwidth remains after allocation to the higher-priority service flows.
BW-Request: uses only contention mode. Current queue size is included in BW-Request.
Uplink scheduler: not defined in current IEEE 802.16.
3.2.1 EDF Broadband Wireless Access Scheduling Algorithm
In some BWA systems (e.g. WiMAX), the BS can obtain the queue-size information of the SSs from their bandwidth request messages; from the current queue sizes it can estimate the number of arriving bits, their arrival times, and hence the relative deadlines. We first define some quantities that are used throughout this subsection.
f = duration of a time frame (ms) which includes uplink and downlink subframe.
di = maximum delay requirement of connection i (ms).
qi(t) = queue size (bits) of connection i at time t.
si[t, t+f] = number of bits of connection i transmitted during the time frame interval [t, t+f].
ai[t, t+f] = number of bits of connection i that arrive during the time frame interval [t, t+f].
Ndi[t, t+f] = number of bits waiting in the queue of connection i with deadline in the time interval [t, t+f].
Cuplink = total capacity (bps) allocated for uplink transmission.
Cdownlink = total capacity (bps) allocated for downlink transmission.
Ctotal = total capacity (bps) of the wireless network, Ctotal = Cuplink + Cdownlink.
CrtPS = average capacity (bps) allocated for current rtPS connection.
CBE = average capacity (bps) available for current BE connection = Cuplink-CUGS-CrtPS-CnrtPS.
CNRT= CnrtPS + CBE = Cuplink – CUGS – CrtPS.
Nuplink = total number of bits that SS are allowed to transmit in an uplink subframe, Nuplink=f*Cuplink.
ri = token bucket rate (average data rate) of connection i.
bi = token bucket size of connection i.
To support all types of service flows (UGS, rtPS, ertPS, nrtPS, and BE), the proposed uplink packet scheduling uses a combination of a strict-priority service discipline, earliest deadline first (EDF), and weighted fair queueing (WFQ). The hierarchical structure of the bandwidth allocation is shown in Figure 3.3. The proposed UPS consists of three modules: the information module, the scheduling database module, and the service assignment module. In [24], the earliest-deadline-first (EDF) service discipline is applied to rtPS connections: packets with the earliest deadline are scheduled first. The information module determines each packet's deadline.
Information module: the information module performs the following tasks: (1) retrieves the queue-size information of each connection from the BW-Request messages, (2) determines the number of bits that arrived on each rtPS connection in the previous time frame using the arrival-service curve concept, and (3) determines each rtPS packet's arrival time and deadline and updates this information in the scheduling database module.
Figure 3.3 Hierarchical structure of bandwidth allocation [24]
Step 1: determines the number of arrivals during the previous time frame, ai[t-f,t]. This is accomplished by using the arrival-service curve concept. At time t = nf, (n = 1,2,3,…)
Input: queue size = qi(nf), Service = si[(n-1)f, nf]
Output: ai[(n-1)f, nf] = qi(nf) + si[(n-1)f, nf] – qi((n-1)f) (4.1)
Step 2: determines the packet deadlines given the arrival information from the previous frame, ai[t−f, t], as determined in step 1. The deadline is the packet's arrival time plus its maximum delay requirement. Note that ai[t−f, t] is the number of bits that have already waited in the queue for time f; therefore, to avoid a delay violation, these bits must receive service in frame [(t−f) + (di−f), (t−f) + di]. Therefore, Ndi[(t−f) + (di−f), (t−f) + di] = ai[t−f, t]. In general form, step 2 is: at time t = nf, (n = 1, 2, 3, …)
Input: ai[(n-1)f, nf]
Output: Ndi[(nf-f) + (di –f), (nf-f) + di] = ai[(n-1)f, nf]
The output of the information module, Ndi[a, b], is used to update the scheduling database module.
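Steps 1 and 2 of the information module can be sketched directly from (4.1) and the deadline rule above; the function names are illustrative:

```python
def arrivals_last_frame(q_now, q_prev, served):
    """Step 1 (Eq. 4.1): bits that arrived on connection i during the
    previous frame, recovered from two successive queue-size reports
    and the service granted in between."""
    return q_now + served - q_prev

def deadline_frame(n, f, d_i):
    """Step 2: the frame [start, start+f] in which the bits that
    arrived during [(n-1)f, nf] must be served to meet the maximum
    delay d_i.  Arrival is pessimistically placed at the frame start
    (n-1)f, matching the text."""
    start = (n - 1) * f + (d_i - f)
    return (start, start + f)
```

For example, with f = 5 ms and di = 60 ms, bits arriving in the first frame must be served in [55, 60], exactly as the Ndi update above prescribes.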
Scheduling database module: the scheduling database module serves as the information database of all connections in the network. Figure 3.4 shows its database structure.
This is a two-dimensional database, indexed by connection and by deadline (frame). Item (i, [t, t+f]) contains Ndi[t, t+f] (received from the information module), the number of bits to be transmitted in frame [t, t+f]. Figure 3.4 also shows how the bits in the database table correspond to the actual packets waiting in the queue.
Service assignment module: the service assignment module determines the UL-MAP, using the database tables created in the scheduling database module. We use strict priority service discipline for allocating bandwidth among service flows (i.e. UGS, rtPS, nrtPS, and BE). Within each service flow we employ the following service discipline: EDF for rtPS, fixed bandwidth for UGS, WFQ for nrtPS and equal bandwidth allocation for BE.
Figure 3.4 rtPS database structure in scheduling database module [24]
Schedule the rtPS packets in the rtPS database until either no rtPS packets are left or no bandwidth is left. After scheduling the packets, update Nuplink (Nuplink ← Nuplink − Σi Ndi). If the total number of bits in the column is greater than Nuplink, Nuplink is distributed to each connection based on its weight (wi = ri / Σj rj, where i and j range over the rtPS connections); for example, connection i is scheduled with wi·Nuplink bits. If packets are still left in the current time frame interval and Nuplink has reached zero, these packets will miss their deadline. Two actions can be taken for packets that miss their deadline: (1) drop the packets, or (2) reduce their priority by moving them to the BE database. After scheduling the packets, update the UL-MAP and the rtPS database.
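The per-column rule just described (schedule the whole deadline column if it fits, otherwise split Nuplink in proportion to the weights wi = ri/Σrj) can be sketched as follows; the function name and the integer rounding of the weighted shares are assumptions of this sketch:

```python
def schedule_rtps(deadline_column, n_uplink, rates):
    """One deadline column of the rtPS database: deadline_column maps
    connection id -> bits due this frame, rates maps id -> token-bucket
    rate r_i.  If the column fits in the remaining uplink budget it is
    scheduled whole; otherwise the budget is split in proportion to the
    weights w_i = r_i / sum(r)."""
    total = sum(deadline_column.values())
    if total <= n_uplink:
        grants = dict(deadline_column)              # everything fits
    else:
        rate_sum = sum(rates[i] for i in deadline_column)
        grants = {i: int(n_uplink * rates[i] / rate_sum)
                  for i in deadline_column}         # weighted split of Nuplink
    remaining = n_uplink - sum(grants.values())     # budget left for later columns
    return grants, remaining
```

Bits left unscheduled when the budget reaches zero are the ones that miss their deadline and are dropped or demoted to the BE database, as described above.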
3.2.2 Admission Control
Any wireless network is resource-constrained and can serve a limited number of users with a given QoS level. Hence, a CAC algorithm that decides whether new users should be admitted to the network is required. The goals of the CAC are satisfying QoS requirements for the admitted users, maximizing operator’s revenue by maximizing the network capacity, and supporting fairness and priorities among the users [21,22].
A CAC algorithm admits a new user to the network based on a given criterion. The admission criteria may be different. The known CAC schemes are based on SINR, interference, bandwidth, load, or system capacity [21,22]. Several publications are available that consider the CAC in the OFDM network [23,24]. The algorithms of these publications admit new users based on the analysis of the current status of the queues of the active users.
In the Mobile WiMAX network the most suitable scheme is the one maximizing network capacity while satisfying QoS requirements for all the admitted users. Such scheme maximizes operator’s revenue and guarantees user’s satisfaction. We propose such CAC algorithms based on our system load model.
In this subsection, we also introduce another network traffic control mechanism called the token bucket, which works well for bursty traffic. Two parameters define the token bucket: the bucket size B and the token rate r. Figure 3.5 shows how it works. Each token represents a unit of bytes or a packet data unit; a packet is not allowed to be transmitted until it obtains a token. Therefore, over a period of time t, the maximum data volume that can be sent is B + rt.
Figure 3.5 Token Bucket mechanism
We adopt the token bucket mechanism to schedule the packets in 802.16 network environments.
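A minimal token-bucket policer matching this description (parameters r and B; output over any interval of length t bounded by B + r·t) might look as follows; the class and method names are illustrative:

```python
class TokenBucket:
    """Token-bucket policer: tokens accumulate at rate `rate` up to
    bucket size `size`; a packet may be sent only if enough tokens are
    available, so output over any interval t is bounded by B + r*t."""
    def __init__(self, rate, size):
        self.rate, self.size = rate, size
        self.tokens = size              # start with a full bucket
        self.last = 0.0

    def allow(self, now, nbytes):
        # Refill tokens for the elapsed time, capped at the bucket size.
        self.tokens = min(self.size, self.tokens + self.rate * (now - self.last))
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True                 # conforming: consume tokens and send
        return False                    # non-conforming: hold (or demote) the packet
```

A burst up to B bytes passes immediately, after which the sustained output is limited to the token rate r, which is exactly the behavior the admission control relies on.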
A new rtPS connection makes a request with token-bucket parameters (ri, bi); the current network parameters are Cuplink, CUGS, CrtPS, CnrtPS, and f, with CNRT = Cuplink − CUGS − CrtPS, CNRT,new = CNRT − ri, and CrtPS,new = CrtPS + ri. The admission control procedure is as follows:
Check for available bandwidth: if ri ≤ CNRT (i.e., CNRT,new ≥ 0), there is available bandwidth for the new rtPS connection; otherwise reject the new connection.
Check for delay guarantees: if the delay requirement of the new rtPS connection can be met with CrtPS,new, it can be admitted with delay guarantees; otherwise reject the new connection.
Check for delay violations of existing rtPS connections: for each existing rtPS connection i, verify that its delay guarantee is still maintained; if this condition fails for any connection, reject the new connection.
Update CrtPS: CrtPS ← CrtPS + ri.
Pass token bucket parameters ri, bi to the traffic policing module.
Traffic policing: each SS will have a traffic policing module which monitors violations of QoS contracts by admitted connections, using the token bucket mechanism. The token bucket parameters for each connection are received from the admission control module.
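The admission procedure above can be sketched as below. The concrete admission predicates were not legible in the source, so the tests used here (the new rate must fit in the non-real-time residue; a full token bucket bi must be drainable within the delay bound di at the reserved rate ri) are assumptions of this sketch:

```python
def admit_rtps(r_new, b_new, d_new, c_uplink, c_ugs, c_rtps, existing):
    """Sketch of the rtPS admission test.  existing is a list of
    (r_i, b_i, d_i) tuples for the already-admitted rtPS connections.
    The predicates below are assumptions, not the thesis's exact ones."""
    c_nrt = c_uplink - c_ugs - c_rtps
    if r_new > c_nrt:                  # step 1: available bandwidth
        return False
    if b_new / r_new > d_new:          # step 2: delay bound for the new flow (assumed test)
        return False
    for r_i, b_i, d_i in existing:     # step 3: no violation of admitted flows (assumed test)
        if b_i / r_i > d_i:
            return False
    return True                        # step 4 (CrtPS update) is left to the caller
```

On success the caller updates CrtPS and hands (ri, bi) to the per-SS traffic policing module, as in steps 4 and 5.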
Chapter 4. Simulation Results
We have developed two simulation models in Matlab: one for the theoretical performance of the single-queue preemptive and non-preemptive EDF of rtPS, and the other for the simulation of non-preemptive EDF scheduling of rtPS services in WiMAX.
4.1 Theoretical Performance of single queue EDF scheduling algorithms
As previously mentioned, there is no way to identify the exact values of λ, µ, and θ for a particular application scenario; therefore, to simulate the above theoretical results, it is necessary to use normalized values: ρ = λ/µ as the normalized input rate, αn·θ as the normalized loss rate, and µθ as the normalized mean relative deadline. Hence, formulas (20), (21), (22) become:
We choose ρ = λ/µ in (0, 3] and take 30 values within this range, with µθ = 4 and 8. The achieved results are as follows:
Figure 4.1 P-EDF analytic result with µθ=4
Figure 4.2 P-EDF analytic result with µθ=8
As above, formulas (26), (27), (28) become:
The results to be achieved are as follows:
Figure 4.3 Analytical loss probability
4.2 Simulation of NP-EDF scheduling algorithm for rtPS services in WiMAX
Next, we describe the Matlab model for calculating the loss rate according to the UPS method described above.
The bits that arrived during the previous frame (of duration f) are given by the output of the information module:
ai[(n-1)f, nf] = qi(nf) + si[(n-1)f, nf] – qi((n-1)f) (4.1)
Note: arrival rate = arrival bits / f and service rate = service bits / f. A packet is a fixed number of bits that travel together (with the same delay d), so we let one packet = 1 bit.
We start the simulation of an uplink NP-EDF queue at SSi, at t = f (i.e., n = 1), using equation (4.1), as follows:
generate a random value for ai(0, f) from a Poisson arrival distribution with mean λ bits/frame (equal to the token bucket rate converted to bits/frame; the frame time is 5 ms for WiMAX). We keep the delay θ and si(.) fixed for the simulation, e.g. 60 ms (12f) and 25 Kbits/frame.
assume the initial condition qi(0) = 0 and si(0, f) = 0, so that from (4.1) the queue is qi(f) = ai(0, f).
In WiMAX, SSi feeds back this queue size to the BS, which then uses equation (4.1) to calculate the bandwidth allocation for SSi in the next frame, i.e. si(f, 2f) bits/frame; here, however, we assume si(.) is fixed.
If si(f, 2f) < qi(f), then [qi(f) − si(f, 2f)] bits remain in the queue waiting to be served in the next frame.
generate the next Poisson arrival ai(f, 2f),
calculate qi(2f) from (4.1),
qi(2f) = ai(f, 2f) − si(f, 2f) + qi(f) = new arrivals + what is left in the queue
repeat the previous steps to get ai(2f, 3f) and qi(3f), and so on.
The arrival time of the bits ai((n−1)f, nf) is assumed to be (n−1)f, i.e. the start of the frame (worst case). Therefore, the worst-case departure time has to fall within the frame determined below:
Deadline transmitting frame is = [(n-1)f+(mi-1)f, nf+(mi-1)f] (2)
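The per-frame recursion in the steps above can be sketched as follows. The Poisson sampler uses Knuth's method (stdlib only) and is suitable only for the small per-frame means used in this illustration; the fixed grant per frame mirrors the fixed si(.) assumption, and the function name is illustrative:

```python
import random

def simulate_queue(lam_bits, service_bits, n_frames, seed=1):
    """Per-frame uplink queue recursion (4.1) at one SS: Poisson
    arrivals with mean lam_bits per frame, a fixed grant of
    service_bits per frame, q(0) = 0.  Returns the queue-size trace."""
    rng = random.Random(seed)

    def poisson(mean):                  # Knuth's method; fine for small means
        L, k, p = pow(2.718281828459045, -mean), 0, 1.0
        while True:
            p *= rng.random()
            if p <= L:
                return k
            k += 1

    q = [0]
    for _ in range(n_frames):
        a = poisson(lam_bits)                    # arrivals during this frame
        served = min(service_bits, q[-1] + a)    # cannot serve more than is queued
        q.append(q[-1] + a - served)             # leftover waits for the next frame
    return q
```

Comparing the resulting queue trace against the fixed grants reproduces the arrival and service curves plotted in Figures 4.4 and 4.5.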
The achieved results are as follows:
Figure 4.4 Bandwidth allocation for rtPS connection.
Figure 4.5 Arrival and service curve for rtPS connection.
4.3 Conclusion
This thesis has evaluated several scheduling algorithms for real-time service flows on the WiMAX uplink, namely preemptive EDF and non-preemptive EDF. It has also presented a method for uplink packet scheduling and for deadline estimation when computing the probability of missing deadlines in the M/M/1/EDF+M queue model. In addition, we have introduced the token bucket, a network traffic control mechanism used in determining deadline misses.
References
[1] Deepak Pareek. WiMAX, Taking wireless to WiMAX. Auerbach Publications, Taylor & Francis Group.
[2] T. Gaden. Intel pumps $37 million into Unwired. Whirlpool Broadband Multimedia, August 2005.
[3] W. Stallings. Data and Computer Communications. Prentice Hall, New Jersey, 7th edition, 2004.
[4] Deepankar Medhi, Karthikeyan Ramasamy. Network routing algorithms, protocols, and architectures. Morgan Kaufmann Publishers.
[5] IEEE 802.16 Working Group on Broadband Wireless Access. IEEE Standard for Local and Metropolitan Area Networks. Part 16: Air Interface for fixed broadband wireless access systems. IEEE 2004.
[6,7] Loutfi Nuaymi. WiMAX: Technology for Broadband Wireless Access. John Wiley
[8] Yang Xiao. WiMAX /MobileFi Advanced Research and Technology. Auerbach Publications
[9,10] G. S. V. Radha Krishna Rao, G. Radhamani. WiMAX: A Wireless Technology Revolution. Taylor & Francis Group.
[11] EN 50170. General purpose field communication system. In European Standard, CENELEC, 1996.
[12] CAN-CIA. Can specification 2.0 part B. 1992.
[13] M. A. Livani and J. Kaiser. EDF consensus on CAN bus access for dynamic real-time applications. Proceedings of the IEEE Workshop on Parallel and Distributed Computing Systems, in conjunction with the 12th International Parallel Processing Symposium / 9th Symposium on Parallel and Distributed Processing.
[14] S.Dolev and A. Keizelman. Non-preemptive real-time scheduling of multimedia tasks. Real-time Systems.
[15] X. Castillo, S. R. McConnel, and D. P. Siewiorek. Derivation and calibration of a transient error reliability model. IEEE Trans. on Computers, 31(7):658-671, 1982.
[16] R. K. Iyer, D. J. Rossetti, and M. C. Hsueh. Measurement and modeling of computer reliability as affected by system activity. ACM Trans. on Computer Systems, 4(3):214-237, 1986.
[17] D. P. Siewiorek, V. Kini, H. Mashburn, S. McConnel, and M. Tsao. A case study of C.mmp, Cm* and C.vmp: Part I: experiences with fault tolerance in multiprocessor systems. Proceedings of the IEEE, 66(10):1178-1199, 1974.
[18] M. Kargahi and A. Movaghar. A method for performance analysis of the earliest-deadline-first scheduling policy. Proceedings of the IEEE International Conference on Dependable Systems and Networks.
[19] A. Movaghar. On queueing with customer impatience until the beginning of service. Queueing Systems.
[20] J. Hong, X. Tan, and D. Towsley. A performance analysis of minimum laxity and earliest deadline scheduling in a real-time system. IEEE Transactions on Computers, 38(12):1736-1744, 1989.
[21] D. Y. Barrer. Queueing with impatient customers and ordered service. Oper. Res., 5:650-656, 1957.
[22] C. J. Ancker and A. V. Gafarian. Queueing with impatient customers who leave at random. J. Industrial Eng., 3:86-87, 1962.
[23] A. Movaghar. On queueing with customer impatience until the end of service. Proceedings of the IEEE International Computer Performance and Dependability Symposium.
[24] K. Wongthavarawat and A. Ganz, “Packet scheduling for QoS support in IEEE 802.16 broadband wireless access systems,” Int. J. Commun. Syst., vol. 16, pp.81-96, 2003.