
Novel user centric, game theory based bandwidth allocation mechanism in WiMAX

Abstract

Bandwidth allocation plays a crucial role in ensuring overall quality of service in WiMAX. WiMAX supports a non-contention-based bandwidth allocation mechanism, where the responsibility for bandwidth allocation lies with the base station. In this paper, a novel user-centric, Game Theory-based bandwidth allocation algorithm is proposed. The users/mobile stations are grouped in pairs and bandwidth allocation alternates between the two users of a group. In any given frame only one user from a pair is allocated bandwidth, and the bandwidth thus allocated satisfies the requirements of that user/mobile station for two consecutive frames. Since only one user/MS per pair is allocated bandwidth in a given frame, the proposed algorithm reduces the frame overhead, thereby saving precious bandwidth that can be utilized to improve throughput. Simulation results show a 50% decrease in the uplink frame overhead and about an 8% improvement in the uplink throughput per user/mobile station.

Introduction

IEEE 802.16e [1–3] (also called WiMAX) is a combination of layer-2 and layer-1 protocols that provide fixed and mobile broadband wireless solutions. WiMAX provides quality of service (QoS) by segregating user data into five different service classes. The decision on the appropriate service class is made based on the quality-of-service requirements of the traffic, such as delay, jitter and throughput. The first service class, called Unsolicited Grant Services (UGS), is designed to support real-time data streams that generate fixed-size packets at periodic intervals; Voice over IP without silence suppression is an example of traffic categorized as UGS. The second service class, called Real Time Polling Services (RTPS), supports real-time data streams that generate variable-sized packets on a periodic basis, for example MPEG video. The third service class, called Extended Real Time Polling Services (eRTPS), supports real-time service flows that generate variable-sized data packets at periodic intervals, for example VoIP with silence suppression. The fourth service class, called Non Real Time Polling Services (nRTPS), supports delay-tolerant data streams that generate variable-size data packets; file transfer protocol (FTP) data is one example of such traffic. The last service class, Best Effort (BE), supports data streams that do not require any service level, for example Web browsing and Email.

The service flow representing each of these service classes is mapped to a unique connection between the Base Station (BS) and the Mobile Station (MS). The BS employs a call admission control algorithm to decide whether it can admit a connection; the decision is made based on the QoS needs of the connection and the current network load. Upon admission of the connection, the MS requests bandwidth based on the amount of data accumulated in each of its service flows. On receiving bandwidth requests from different MS for the active connections, the BS schedules bandwidth for these connections.

Bandwidth allocation has been studied extensively by researchers in academia and in the wireless communication industry. In [4–6] the authors propose priority-based inter-class scheduling algorithms; one order of prioritizing and allocating bandwidth is UGS followed by eRTPS, RTPS, nRTPS and BE respectively, with the QoS parameters determining the priority of the connections. Queue-length-based inter-class bandwidth allocation is proposed in [7]. A counter-based inter-class bandwidth allocation mechanism is proposed in [8]. A Weighted Round Robin (WRR)-based intra-class bandwidth allocation algorithm is studied in [9]. Delay-based bandwidth allocation algorithms are proposed in [10, 11], where the delay requirements of the connections are considered for bandwidth allocation. A channel-condition-based bandwidth allocation algorithm is proposed in [12], where a connection facing bad channel conditions is deferred from transmission till conditions improve and, in return, is provided with credit that can be redeemed later. In [13, 14], the carrier-to-interference ratio is used to allocate bandwidth to the connections. In [15], the authors propose a bandwidth allocation strategy that uses multiple bandwidth allocation algorithms, such as EDF, for allocating bandwidth.

In this paper a novel user-centric, Game Theory-based bandwidth allocation algorithm is proposed. The users/MS in the network are grouped in pairs and allocated bandwidth alternately; in each frame only one user/MS from a pair is allocated bandwidth, and the bandwidth thus allocated is equal to its need for two consecutive frames. This reduces the overhead in the frame and frees up additional bandwidth that can be used to improve the throughput of users. The paper is organized as follows: the section titled “Proposed User-centric, Game Theory-Based Algorithm” describes the proposed algorithm; the section titled “Theoretical Analysis” provides a theoretical analysis of the proposed algorithm; the section titled “Results and Discussion” describes the simulations performed in detail along with a brief discussion of the results; and finally the section titled “Conclusion” summarizes the paper.

Proposed User-centric, Game Theory-Based Algorithm

A bird’s-eye view of the proposed algorithm is given below:

Each user/MS generates data that is categorized into one of the five service classes based on the type of data. For example, video traffic shall be categorized as RTPS and browsing traffic as BE. The MS shall request bandwidth for each of its service flows (represented as connections), and the BS shall allocate bandwidth to these connections. “Method 1” and “Method 2” are the two proposed methods of bandwidth allocation. In this paper the terms “user” and “MS” are used interchangeably.

Method 1: Game Theory based bandwidth allocation mechanism applied to both RTPS and nRTPS connections

On receiving the bandwidth request from an MS for a connection, the BS shall calculate the new packet arrivals for that connection as given in Eqs. (1) and (2).

If,

$q_{t-f}$ = queue length at time t-f

$BW_{t-f}$ = uplink bandwidth allocated to the connection during the frame duration {t-f, t}

$q_t$ = queue length at time t

$N_t$ = new packet arrivals during the interval {t-f, t}

The number of leftover packets from the interval {t-f, t} is given as:

$$N_{\text{left}} = q_{t-f} - BW_{t-f}$$
(1)

New packet arrivals during the timeframe {t-f, t} are given as:

$$N_t = q_t - N_{\text{left}} = q_t - (q_{t-f} - BW_{t-f}) = q_t - q_{t-f} + BW_{t-f}$$
(2)

If,

d = maximum delay that the packets $N_t$ can tolerate. The value of “d” is negotiated between BS and MS during the connection admission process.

$Fr_{\{t+d-f,\,t+d\}}$ = frame before which the $N_t$ packets need to be transmitted to avoid delay violation

$BW_{\{t+d-f,\,t+d\}}$ = bandwidth that has to be allocated to the connection in the frame $Fr_{\{t+d-f,\,t+d\}}$ to avoid delay violation

then,

$$BW_{\{t+d-f,\,t+d\}} = N_t$$
(3)

BS shall maintain a table to keep track of the deadline bandwidth requirements for each connection of a service class. Table 1 and Table 2 describe the deadline bandwidth requirements of RTPS and nRTPS connections for each MS.

Table 1 Deadline bandwidth requirements for RTPS connections
Table 2 Deadline bandwidth requirements for nRTPS connections
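As a minimal sketch of the bookkeeping behind Tables 1 and 2, the BS could track, per connection, the bandwidth due by each deadline frame using Eqs. (1)–(3). The class and parameter names below (DeadlineTable, d_frames, etc.) are illustrative assumptions, not from the paper.

```python
# Sketch: per-connection deadline bookkeeping based on Eqs. (1)-(3).
# All names are illustrative; the paper does not prescribe data structures.

class DeadlineTable:
    """Maps a deadline frame index to the bandwidth (bits) due by that frame."""
    def __init__(self):
        self.due = {}  # frame_index -> required bits

    def add(self, deadline_frame, bits):
        self.due[deadline_frame] = self.due.get(deadline_frame, 0) + bits


def update_connection(table, q_prev, bw_prev, q_now, frame_now, d_frames):
    """Update the deadline table for one connection after a bandwidth request.

    q_prev   : queue length reported at time t-f (bits)
    bw_prev  : uplink bandwidth granted for the interval {t-f, t} (bits)
    q_now    : queue length reported at time t (bits)
    frame_now: index of the current frame
    d_frames : tolerable delay d expressed in frames (d / f)
    """
    n_left = q_prev - bw_prev              # Eq. (1): leftover packets
    n_new = q_now - n_left                 # Eq. (2): new arrivals in {t-f, t}
    deadline_frame = frame_now + d_frames  # frame Fr{t+d-f, t+d}
    table.add(deadline_frame, n_new)       # Eq. (3): bandwidth due by that frame
    return n_new
```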

A game is played between the MS in the network. The BS groups the MS in pairs. Let MS1 and MS2 be one such pair. In each frame, bandwidth equal to the deadline bandwidth requirement of its connections needs to be allocated to each MS to avoid delay and jitter. The proposed game-theory-based bandwidth allocation alternates between the two MS of a pair, as explained below. Let $BW^{MS1,RTPS}_{fr\{t+d-f,\,t+d\}}$ be the deadline bandwidth requirement of the RTPS connection of MS1 to be satisfied by frame X, $BW^{MS1,RTPS}_{fr\{t+d,\,t+d+f\}}$ be the deadline bandwidth requirement of the RTPS connection of MS1 to be satisfied by frame X+1, $BW^{MS1,nRTPS}_{fr\{t+d-f,\,t+d\}}$ be the deadline bandwidth requirement of the nRTPS connection of MS1 to be satisfied by frame X, and $BW^{MS1,nRTPS}_{fr\{t+d,\,t+d+f\}}$ be the deadline bandwidth requirement of the nRTPS connection of MS1 to be satisfied by frame X+1. Then, in frame X, MS1 shall be allocated bandwidth as shown in Eq. (4).

$$BWallot^{MS1}_{fr\{t+d-f,\,t+d\}} = BW^{MS1,RTPS}_{fr\{t+d-f,\,t+d\}} + BW^{MS1,RTPS}_{fr\{t+d,\,t+d+f\}} + BW^{MS1,nRTPS}_{fr\{t+d-f,\,t+d\}} + BW^{MS1,nRTPS}_{fr\{t+d,\,t+d+f\}}$$
(4)

i.e. MS1 receives bandwidth equal to its RTPS and nRTPS deadline requirements for frame X. Additionally, MS1 receives bandwidth equal to its RTPS and nRTPS requirements for frame X+1 as well. This additional bandwidth is obtained from MS2. Thus, in frame X, MS1 receives bandwidth equal to its need for frame X and X+1. Hence, during the bandwidth allocation for frame X+1, MS1 will not be allocated any bandwidth as its requirements for X+1 are satisfied in the frame X.

In frame X+1, MS2 shall be allocated bandwidth as given below. Let $BW^{MS2,RTPS}_{fr\{t+d,\,t+d+f\}}$ be the deadline bandwidth requirement of the RTPS connection of MS2 to be satisfied by frame X+1, $BW^{MS2,RTPS}_{fr\{t+d+f,\,t+d+2f\}}$ be the deadline bandwidth requirement of the RTPS connection of MS2 to be satisfied by frame X+2, $BW^{MS2,nRTPS}_{fr\{t+d,\,t+d+f\}}$ be the deadline bandwidth requirement of the nRTPS connection of MS2 to be satisfied by frame X+1, and $BW^{MS2,nRTPS}_{fr\{t+d+f,\,t+d+2f\}}$ be the deadline bandwidth requirement of the nRTPS connection of MS2 to be satisfied by frame X+2. Then, in frame X+1, MS2 shall be allocated bandwidth as in Eq. (5).

$$BWallot^{MS2}_{fr\{t+d,\,t+d+f\}} = BW^{MS2,RTPS}_{fr\{t+d,\,t+d+f\}} + BW^{MS2,RTPS}_{fr\{t+d+f,\,t+d+2f\}} + BW^{MS2,nRTPS}_{fr\{t+d,\,t+d+f\}} + BW^{MS2,nRTPS}_{fr\{t+d+f,\,t+d+2f\}}$$
(5)

i.e. Bandwidth allocated to MS2, in frame X+1, is equal to its requirements for frame X+1 and X+2. Hence, during the bandwidth allocation for frame X+2, MS2 shall not be considered as its X+2 requirements have been satisfied in frame X+1. In frame X+2, MS1 shall be allocated bandwidth equal to its need for X+2 and X+3. This game shall continue as long as MS1 and MS2 remain as a pair. Eq. (4) and (5) shall apply to all MS that have been paired together.
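As a rough illustration (not the authors' implementation), the alternation within a pair can be driven by a per-pair turn flag, as sketched below; the helper deadline_req() is an assumed callback that returns the deadline bandwidth requirements appearing in Eqs. (4) and (5).

```python
# Sketch: alternating allocation within a pair (Eqs. (4) and (5)).

def allocate_pair(ms1, ms2, frame_x, turn_of_ms1, deadline_req):
    """Return {ms: bits} for frame X; only one MS of the pair is served.

    deadline_req(ms, service, frame) is an assumed callback returning the
    deadline bandwidth requirement of that MS/service class for the frame.
    """
    ms = ms1 if turn_of_ms1 else ms2
    grant = 0
    for service in ("RTPS", "nRTPS"):
        grant += deadline_req(ms, service, frame_x)      # need for frame X
        grant += deadline_req(ms, service, frame_x + 1)  # need for frame X+1
    return {ms: grant}

# Usage: flip turn_of_ms1 every frame so the two partners alternate, e.g.
# allocate_pair("MS1", "MS2", x, x % 2 == 0, deadline_req)
```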

Therefore, in any given frame, only one MS from a pair shall be allocated bandwidth. The bandwidth thus allocated to an MS shall be equal to its deadline need for the current frame and the next frame. The other MS shall be allocated bandwidth in the subsequent frame, which, again, shall be equal to that MS’s current-frame and next-frame requirements. Hence the number of UL Bursts in any frame is reduced by half, which in turn reduces the number of ULMAP IEs by half (in a WiMAX frame, the bandwidth allocated to an MS is called a UL Burst; the start and end symbol/sub-channel of a UL Burst are stored in a unique structure called the ULMAP IE). ULMAP IEs are an overhead in a WiMAX frame, so reducing their number frees up precious bandwidth. The bandwidth thus saved can be used in one of the following ways:

(a) The saved bandwidth can be allocated to delay-tolerant connections like BE.

(b) The saved bandwidth can be used to accept new connection requests if the saved bandwidth meets the QoS needs of the new connection.

(c) The saved bandwidth can be used to meet the current as well as future requirements of existing RTPS and nRTPS connections.

If either (a) or (c) is adopted then the bandwidth allocated to an MS can be written as:

$$BWallot^{MSi}_{fr\{t+d-f,\,t+d\}} = BW^{MSi,RTPS}_{fr\{t+d-f,\,t+d\}} + BW^{MSi,RTPS}_{fr\{t+d,\,t+d+f\}} + BW^{MSi,nRTPS}_{fr\{t+d-f,\,t+d\}} + BW^{MSi,nRTPS}_{fr\{t+d,\,t+d+f\}} + \text{size of ULMAP IE}$$
(6)

Method 2: Game Theory-based bandwidth allocation used for RTPS and Weighted Round Robin (WRR) used for nRTPS connections

In this method, RTPS connections are scheduled using the proposed algorithm as described in Method 1, while nRTPS connections are allocated bandwidth using Weighted Round Robin (WRR) instead of the proposed algorithm.

Bandwidth Allocation (Refinement)

This section further refines the bandwidth allocated to an MS as per Eq. (6). In Eq. (6) an MS is allocated bandwidth equal to its deadline requirements; however, the deadline requirement is checked neither against the minimum reserve traffic rate for the connection nor against the available bandwidth. When these two conditions are factored in, the actual bandwidth that the MS deserves for its RTPS connection in frame X is calculated as in Eq. (7).

$$BWdes^{MSi,RTPS}_{frX} = \begin{cases} BW^{MSi,RTPS}_{frX}, & \text{if } BW^{MSi,RTPS}_{frX} < availBW \text{ and } BW^{MSi,RTPS}_{frX} < \frac{MRTR}{FrameRate} \\[4pt] \frac{MRTR}{FrameRate}, & \text{if } \frac{MRTR}{FrameRate} < BW^{MSi,RTPS}_{frX} < availBW \\[4pt] availBW, & \text{if } availBW < BW^{MSi,RTPS}_{frX} < \frac{MRTR}{FrameRate} \\[4pt] \frac{MRTR}{FrameRate}, & \text{otherwise} \end{cases}$$
(7)

In the above equation, MRTR = Minimum Reserve Traffic Rate and Frame Rate = Frames Per Second. The RTPS connection is allotted bandwidth which is equal to its deadline requirement, if the requirement is less than available bandwidth and less than per-frame MRTR. If the required bandwidth is more than per-frame MRTR but less than the available bandwidth in the frame then the connection is allotted bandwidth equal to its per-frame MRTR. If the bandwidth requirement is more than the available bandwidth but less than per-frame MRTR then the available bandwidth is allotted to the connection. In all other scenarios, the connection is allotted bandwidth equal to its per-frame MRTR.
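A minimal sketch of this refinement rule, with all quantities expressed in bits per frame, is given below (Eqs. (8) and (9) later in this section follow the same pattern); the function and parameter names are illustrative.

```python
# Sketch of the refinement rule in Eq. (7).

def refine_grant(deadline_req, avail_bw, mrtr, frame_rate):
    """Return the bandwidth a connection 'deserves' this frame (Eq. (7))."""
    per_frame_mrtr = mrtr / frame_rate
    if deadline_req < avail_bw and deadline_req < per_frame_mrtr:
        return deadline_req        # requirement fits both limits
    if per_frame_mrtr < deadline_req < avail_bw:
        return per_frame_mrtr      # cap at the per-frame MRTR
    if avail_bw < deadline_req < per_frame_mrtr:
        return avail_bw            # cap at what is left in the frame
    return per_frame_mrtr          # all other cases
```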

If EDF is used for bandwidth allocation for nRTPS connections then the actual bandwidth deserved by the nRTPS connection is calculated as in Eq. (8).

$$BWdes^{MSi,nRTPS}_{frX} = \begin{cases} BW^{MSi,nRTPS}_{frX}, & \text{if } BW^{MSi,nRTPS}_{frX} < availBW \text{ and } BW^{MSi,nRTPS}_{frX} < \frac{MRTR}{FrameRate} \\[4pt] \frac{MRTR}{FrameRate}, & \text{if } \frac{MRTR}{FrameRate} < BW^{MSi,nRTPS}_{frX} < availBW \\[4pt] availBW, & \text{if } availBW < BW^{MSi,nRTPS}_{frX} < \frac{MRTR}{FrameRate} \\[4pt] \frac{MRTR}{FrameRate}, & \text{otherwise} \end{cases}$$
(8)

If WRR is used for bandwidth allocation for nRTPS connections, then the actual bandwidth that the MS deserves for its nRTPS connection in frame X is calculated as in Eq. (9).

$$BWdes^{MSi,nRTPS}_{frX} = \begin{cases} BW^{MSi,nRTPS}_{frX}, & \text{if } BW^{MSi,nRTPS}_{frX} < availBW \text{ and } BW^{MSi,nRTPS}_{frX} < \frac{MRTR}{FrameRate} \\[4pt] \frac{MRTR}{FrameRate}, & \text{if } \frac{MRTR}{FrameRate} < BW^{MSi,nRTPS}_{frX} < availBW \\[4pt] \frac{availBW}{NoOfMSLeft}, & \text{if } availBW < BW^{MSi,nRTPS}_{frX} < \frac{MRTR}{FrameRate} \\[4pt] \frac{MRTR}{FrameRate}, & \text{otherwise} \end{cases}$$
(9)

Each nRTPS connection is assigned equal weight. Based on (7) and (8)/(9), the bandwidth allocated to an MS in a frame (i.e. Eq. (6)) can be re-written as in Eq. (10).

$$BWallot^{MSi}_{frX} = BWdes^{MSi,RTPS}_{frX} + BWdes^{MSi,RTPS}_{frX+1} + BWdes^{MSi,nRTPS}_{frX} + BWdes^{MSi,nRTPS}_{frX+1} + \text{size of ULMAP IE}$$
(10)

Pairing of MS

Pairing of MS plays a crucial role in the proposed algorithm. Since in each frame one MS sacrifices its share of bandwidth for its partner, appropriate pairing is essential. Improper pairing can result in packets being served late, leading to delays and packet drops. An ideal scenario would be one where, for each MSa, there exists an MSb whose cumulative bandwidth needs match those of MSa, as shown in Eq. (11).

$$\forall\, MS_a \in \{MS_1, MS_2, \ldots, MS_n\}\ \exists\, MS_b \in \{MS_1, MS_2, \ldots, MS_n\} :\ BW^{MSa,RTPS}_{frX} + BW^{MSa,RTPS}_{frX+1} + BW^{MSa,nRTPS}_{frX} + BW^{MSa,nRTPS}_{frX+1} = BW^{MSb,RTPS}_{frX} + BW^{MSb,RTPS}_{frX+1} + BW^{MSb,nRTPS}_{frX} + BW^{MSb,nRTPS}_{frX+1}$$
(11)

However, this may not be true all the time, as the packets arrive at random intervals. Hence BS needs a pairing algorithm to pair the MS in an efficient manner. There can be two ways of pairing the MS.

  •  Static Pairing

     When an MS (say MSa) requests connection admission, it specifies, among other things, the quality-of-service requirements of the connection, which include the minimum reserve traffic rate (equal to the average bandwidth need of the connection). On receiving the connection admission request, the BS decides to admit the connection if it can satisfy the QoS of the connection. If the BS decides to admit the connection, it checks whether it can pair the MS with another MS (say MSb) of similar bandwidth need. If such an MS is found, the BS pairs MSa with MSb. Thus, in the case of static pairing, the pairing decision is made at the time of connection admission and the pairing is retained as long as the connections are active. Static pairing spares the BS from periodically executing the pairing algorithm.

  •  Dynamic Pairing

     The dynamic pairing algorithm pairs MS at the frame level. Dynamic pairing is a two-step process.

Step 1 Initial Pairing

Initially, only the bandwidth requirement for one frame is known. Hence the pairing decision is made by the BS based on the bandwidth needs of the first frame. Each MS sends the bandwidth requirement for its RTPS and nRTPS connections, and the BS pairs MS with similar requirements.

Complexity Analysis

The bandwidth calculation step takes Θ(n) time, sorting using merge sort or quicksort takes Θ(n log n), and the final pairing takes Θ(n) time. Hence the time complexity of initial pairing is ≈ Θ(n log n).
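A minimal sketch of the initial pairing is given below, assuming the bandwidth requests are available as a dictionary and reusing the threshold test of Step 2.3 as the acceptance criterion (the paper does not state this criterion explicitly for initial pairing); all names are illustrative.

```python
# Sketch: initial pairing (sort by requested bandwidth, then pair neighbours).

def initial_pairing(requests, threshold):
    """Pair MS with similar first-frame bandwidth requests.

    requests : dict {ms_id: requested bits (RTPS + nRTPS) for the first frame}
    threshold: maximum allowed difference (bits) between paired MS
    Returns (pairs, unpaired).
    """
    ordered = sorted(requests, key=requests.get, reverse=True)  # Theta(n log n)
    pairs, unpaired = [], []
    i = 0
    while i < len(ordered):                                     # Theta(n)
        if (i + 1 < len(ordered) and
                abs(requests[ordered[i]] - requests[ordered[i + 1]]) <= threshold):
            pairs.append((ordered[i], ordered[i + 1]))
            i += 2
        else:
            unpaired.append(ordered[i])
            i += 1
    return pairs, unpaired
```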

Post Pairing

Once an MS (say MSa) is paired with another MS (MSb), both MSa and MSb can have data to be transmitted as shown in Table 3 and Table 4:

Table 3 RTPS data at MSa
Table 4 RTPS data at MSb

Thus, in the first frame, one MS of the pair (say MSb) shall be allocated bandwidth equal to its one-frame requirement and the other MS (say MSa) shall be allocated bandwidth equal to its two-frame requirement. From the subsequent frame onwards, the MS shall be allocated bandwidth in an alternating fashion as per the proposed algorithm.

Step 2 Re-pairing once every “p” frames

Once the MS are paired using the “initial pairing” algorithm, BS shall perform bandwidth allocation as per Method 1 or Method 2 for the next “p” frames (The value of “p” is determined experimentally).

During these “p” frames, each MS keeps sending its bandwidth requirement for each of its RTPS, nRTPS and BE connections. The BS stores these values and calculates the average bandwidth requirement for each MS over the “p” frames as in Eq. (12):

$$AvgBWReq[a] = \frac{\sum_{i=1}^{p}\left(BW^{MSa,RTPS}_{fr_i} + BW^{MSa,nRTPS}_{fr_i}\right)}{p}$$
(12)

AvgBWReq[a] is the average requirement of MSa over the past “p” frames. After “p” frames, the MS shall be re-paired. A combination of AvgBWReq for the past “p” frames and the current bandwidth need of the (p+1)th frame is used for re-pairing. The bandwidth request value is calculated based on Eq. (13):

$$BWReq[a] = \alpha\left(BW^{MSa,RTPS}_{fr_{p+1}} + BW^{MSa,nRTPS}_{fr_{p+1}}\right) + (1-\alpha)\, AvgBWReq[a]$$
(13)

Here α is the smoothing factor that decides the weightage given to the current bandwidth requirement compared to the average bandwidth requirement. The value of α shall be determined experimentally.

Step 2.2 Sort the MS in non-increasing order of BWReq.

Step 2.3 Check whether pairs can be formed between neighbours in the sorted list: two MS are paired if the difference between their bandwidth requirements is within the threshold value ξ (see the sketch below).
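The re-pairing step could be sketched as below (not the authors' code): the history and current dictionaries, the function name and the tie-handling are assumptions; the smoothing follows Eqs. (12)–(13) and the pairing rule follows Steps 2.2–2.3.

```python
# Sketch: re-pairing after every "p" frames.

def repair_pairs(history, current, alpha, threshold):
    """Smooth per-MS requests (Eqs. (12)-(13)), then sort and pair neighbours.

    history : dict {ms_id: list of RTPS+nRTPS requests (bits) for the last p frames}
    current : dict {ms_id: request (bits) for frame p+1}
    """
    smoothed = {}
    for ms, frames in history.items():
        avg = sum(frames) / len(frames)                            # Eq. (12)
        smoothed[ms] = alpha * current[ms] + (1 - alpha) * avg     # Eq. (13)

    ordered = sorted(smoothed, key=smoothed.get, reverse=True)     # Step 2.2
    pairs, unpaired = [], []
    i = 0
    while i < len(ordered):                                        # Step 2.3
        if (i + 1 < len(ordered) and
                abs(smoothed[ordered[i]] - smoothed[ordered[i + 1]]) <= threshold):
            pairs.append((ordered[i], ordered[i + 1]))
            i += 2
        else:
            unpaired.append(ordered[i])
            i += 1
    return pairs, unpaired
```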

Nash Equilibrium

Nash equilibrium is a condition where a player (say Player A) chooses its best option taking into account the other player’s (say Player B’s) decision, and the other player (Player B) likewise makes its best decision taking into account the first player’s (Player A’s) decision.

Table 5 shows the options for the two players. If Player A (i.e. MS1) chooses to receive bandwidth in every frame, then it shall be given bandwidth equal to its deserved bandwidth. However, if Player A chooses to receive bandwidth in every alternate frame, then it shall receive the deserved bandwidth and, additionally, the bandwidth saved from the ULMAP IE. Hence, it is in the interest of Player A to choose the second option. Knowing that Player A has chosen the second option, it is in the interest of Player B to choose the second option as well, since by doing so Player B also stands to gain the additional bandwidth saved from the ULMAP IE. Hence, the Nash equilibrium in this case is the fourth quadrant of Table 5.

Table 5 Decision table for both players in the game
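The reasoning above can be checked with a tiny enumeration. The sketch below is purely illustrative: the numeric payoffs (a "deserved" grant of 100 units and a ULMAP IE bonus of 9 units) are assumptions, not values from the paper.

```python
# Sketch: the payoff comparison behind Table 5 (illustrative values).

def payoff(strategy_a, strategy_b, deserved=100, bonus=9):
    """Per-player payoff in illustrative bandwidth units per two-frame window.

    A player that agrees to be served in alternate frames still receives its
    deserved bandwidth and, in addition, the ULMAP IE bandwidth saved by
    skipping its map entry in the other frame.
    """
    pay_a = deserved + (bonus if strategy_a == "alternate" else 0)
    pay_b = deserved + (bonus if strategy_b == "alternate" else 0)
    return pay_a, pay_b

# Enumerating all four quadrants shows that "alternate" is each player's best
# response whatever the partner chooses, so (alternate, alternate) -- the
# fourth quadrant of Table 5 -- is the Nash equilibrium.
for a in ("every_frame", "alternate"):
    for b in ("every_frame", "alternate"):
        print(a, b, payoff(a, b))
```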

Packet Scheduling at each MS

BS allocates bandwidth to each MS on a Grant Per Subscriber Station (GPSS) basis. It is the responsibility of each MS to distribute this bandwidth among its connections. Each MS (say MSi) shall maintain the deadline table for its RTPS connection as in Table 6.

Table 6 RTPS deadline table at each MS for RTPS connection

Packet scheduling shall follow the below algorithm:

Step 1: The RTPS data scheduled is equal to $BWdes^{MSi,RTPS}_{frX} + BWdes^{MSi,RTPS}_{frX+1}$, as shown in Eq. (14).

$$BWSched_{RTPS} = BWdes^{MSi,RTPS}_{frX} + BWdes^{MSi,RTPS}_{frX+1}$$
(14)

Step 2: Calculate leftover bandwidth as given in Eq (15):

$$BW_{total} = BW_{total} - BWSched_{RTPS}$$
(15)

Step 3: Schedule the nRTPS packets for the MS from BWtotal. If the proposed algorithm (i.e. Method 1) was used to allocate bandwidth for the nRTPS connection, then the amount of data scheduled for nRTPS connections is as given in Eq. (16), and the leftover bandwidth is calculated as in Eq. (17).

$$BWSched_{nRTPS} = BWdes^{MSi,nRTPS}_{frX} + BWdes^{MSi,nRTPS}_{frX+1}$$
(16)

$$BW_{total} = BW_{total} - BWSched_{nRTPS}$$
(17)

Step 4: If bandwidth is still available, then schedule the BE packets.
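A minimal sketch of this MS-side scheduler is given below, assuming the per-class "deserved" bandwidths of Eqs. (7)–(9) are already known at the MS; the function and parameter names are illustrative.

```python
# Sketch: GPSS packet scheduling at an MS (Steps 1-4).

def schedule_grant(bw_total, bwdes_rtps_x, bwdes_rtps_x1,
                   bwdes_nrtps_x, bwdes_nrtps_x1, be_backlog):
    """Split a GPSS grant (bits) among RTPS, nRTPS and BE connections."""
    sched = {}
    sched["RTPS"] = min(bw_total, bwdes_rtps_x + bwdes_rtps_x1)    # Step 1, Eq. (14)
    bw_total -= sched["RTPS"]                                       # Step 2, Eq. (15)
    sched["nRTPS"] = min(bw_total, bwdes_nrtps_x + bwdes_nrtps_x1)  # Step 3, Eqs. (16)-(17)
    bw_total -= sched["nRTPS"]
    sched["BE"] = min(bw_total, be_backlog)                         # Step 4
    return sched
```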

Theoretical Analysis

Let the total bandwidth be 20 Mbps, the downlink-to-uplink ratio be 1:1 and the frame duration be 5 ms. Let the RTPS traffic arrival rate be 100 kbps (including headers such as the TCP, IP and MAC headers) and the maximum delay tolerable by RTPS traffic be 100 ms. Let the nRTPS arrival rate be 80 kbps, and the minimum reserve traffic rate for RTPS and nRTPS be 100 kbps. Each ULMAP IE is composed of the fields CID, start time, sub-channel index, UIUC, duration and mid-amble repetition index; hence the size of a ULMAP IE is nine bytes.

Case 1: Network contains MS with RTPS traffic.

If the network contains MS with only RTPS connections, then the maximum number of MS that the network can support is given by:

$$\text{Number of MS Supported} = \frac{\text{Uplink Bandwidth}}{\text{Minimum Reserve Traffic Rate}} = 100$$

Total Bandwidth = 20 Mbps

Downlink:Uplink ratio = 1:1

Hence, Uplink Bandwidth = 10 Mbps.

Minimum Reserve Traffic Rate = 100 kbps

Suppose all the hundred MS are grouped in pairs; then the amount of overhead saved per frame is as below:

Total Groups Formed = 50

Number of ULMAP IE saved per Frame = 50

Overhead bandwidth saved per Frame = 50 * 9 = 450 Bytes per frame

= 3600 bits per frame

Now, Frame Duration = 5 ms.

Hence, Number of Frames per Second = 200

Total Bandwidth Saved = 720 kbps.

The BS can admit seven more RTPS connections using the saved bandwidth. Assuming a voice call with a data rate of 64 kbps (including headers), the BS can accept 11 additional UGS connections from the saved bandwidth. With an average data arrival rate of 32 kbps (including headers), the BS can accept 22 new BE connections.
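For reference, the overhead-saving arithmetic above (and the corresponding computations in Cases 2 and 3 below) can be reproduced in a few lines; the function name and defaults are illustrative and mirror the assumptions stated at the start of this section.

```python
# Sketch: bandwidth freed by dropping one ULMAP IE per pair per frame.

def saved_bandwidth_kbps(num_paired_ms, ulmap_ie_bytes=9, frame_duration_ms=5):
    groups = num_paired_ms // 2                  # one ULMAP IE saved per group
    bits_per_frame = groups * ulmap_ie_bytes * 8
    frames_per_second = 1000 // frame_duration_ms
    return bits_per_frame * frames_per_second / 1000.0

print(saved_bandwidth_kbps(100))  # Case 1: 720.0 kbps
print(saved_bandwidth_kbps(54))   # Case 2: 388.8 kbps (~388 kbps)
print(saved_bandwidth_kbps(32))   # Case 3: 230.4 kbps (~230 kbps)
```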

Case 2: Network contains MS with both RTPS and nRTPS data.

$$\text{Number of MS Supported} = \frac{\text{Uplink Bandwidth}}{\text{RTPS MRTR} + \text{nRTPS Arrival Rate}} \approx 54$$

Uplink Bandwidth = 10 Mbps.

RTPS Minimum Reserve Traffic Rate = 100 kbps.

nRTPS arrival rate = 80 kbps.

Suppose all the 54 MS are grouped in pairs; then the amount of overhead saved per frame is as below:

Total Groups Formed = 27

Number of ULMAP IE saved per Frame = 27

Overhead bandwidth saved per Frame = 27 * 9 = 243 Bytes per frame

= 1944 bits per frame

Now, Frame Duration = 5 ms

Hence, Number of Frames per Second = 200

Total Bandwidth Saved = 388 kbps.

BS can admit three additional RTPS connections or 6 new UGS connections at a data arrival rate of 64kbps (including headers) or about 12 new BE connections at an arrival rate of 32kbps.

Case 3: Network contains MS with RTPS connections where not all connections are paired.

Since RTPS connections generate variable-bit-rate data, it may not be possible to pair all the MS. From Figure 1 it can be seen that, on average, about 80% of the MS get paired. If the network has 40 MS, then 32 MS get paired.

Hence, Total Groups Formed = 16

Number of ULMAP IE saved per Frame = 16

Overhead bandwidth saved per Frame = 16 * 9 = 144 Bytes per frame

= 1152 bits per frame

Now, Frame Duration = 5 ms

Hence, Number of Frames per Second = 200

Total Bandwidth Saved = 230 kbps.

Figure 1: Number of paired MS v/s number of MS.

BS can admit 2 additional RTPS connections or 3 new UGS connections or about 7 new BE connections.

Results and Discussion

Simulations were carried out on Matlab [16]. Simulation parameters are given in Table 7.

Table 7 Simulation parameters

Simulation was carried out to find the average frame overhead. Average overhead is calculated as in Eq. (18):

$$\text{FrameOverhead} = (1 - \text{FrameUtilization}) \times 100\% = \left(1 - \frac{\text{FrameSize} - \text{AvgUlMapSizePerFrame}}{\text{FrameSize}}\right) \times 100\%$$
(18)

Where,

AvgUlMapSizePerFrame = n * AvgUlmapIESize
(19)

Here n = number of UlmapIE in the frame.
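A minimal sketch of this overhead metric, assuming sizes in bytes and the nine-byte ULMAP IE from the Theoretical Analysis section, is given below; the function name is illustrative.

```python
# Sketch: frame-overhead metric of Eqs. (18)-(19).

def frame_overhead_percent(frame_size_bytes, num_ulmap_ie, avg_ulmap_ie_bytes=9):
    """Frame overhead (%) for one frame."""
    avg_ulmap_size = num_ulmap_ie * avg_ulmap_ie_bytes                    # Eq. (19)
    utilization = (frame_size_bytes - avg_ulmap_size) / frame_size_bytes
    return (1 - utilization) * 100                                        # Eq. (18)

# Halving the number of ULMAP IEs halves this overhead, which is the effect
# GTBA exploits by serving only one MS of each pair per frame.
```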

Figure 2 shows the simulation results for EDF and the proposed GTBA (Game Theory-based Bandwidth allocation Algorithm). It can be observed that the average frame overhead for GTBA is about 40–50% less compared to EDF. Since, in GTBA, each MS pairs with another MS, at any given time roughly half the MS are allocated bandwidth (a UL Burst) in the frame. For every UL Burst there is a ULMAP IE entry in the ULMAP; since the UL Bursts are reduced roughly by half, the ULMAP IEs are also reduced by half. This results in less frame overhead.

Figure 2: Average frame overhead (%) for GTBA and EDF v/s number of MS.

Simulation was carried out to find the uplink overhead for every bit of uplink data. Figure 3 shows the simulation results. From the figure it is clear that the uplink overhead is about 50% less for GTBA compared to EDF. The bandwidth saved by reducing the ULMAP IE overhead is redistributed among the MS.

Figure 3: Uplink overhead per bit of UL data v/s number of MS.

Figure 4 shows the improvement in throughput obtained by utilizing the saved bandwidth. From the figure it can be observed that each MS achieves a throughput improvement of up to 8 kbps for its nRTPS connections using GTBA. As shown in Eq. (10), the bandwidth saved by reducing overhead is allocated to the MS, which uses it to send additional data, thereby improving its throughput. In Figure 4 it can be seen that initially EDF and GTBA achieve similar throughput, because there is sufficient bandwidth to support all users/MS. However, when the number of MS goes beyond 20, there isn’t sufficient bandwidth to support the users; in such a scenario, the bandwidth saved by reducing the frame overhead results in higher throughput for MS under GTBA.

Figure 4: Average throughput for nRTPS connections (kbps) v/s number of MS.

Since there is an improvement in throughput, correspondingly there is a reduction in the data drop. Simulation was carried out to find the amount of data drop for nRTPS connections. Figure 5 shows the simulation results. From Figure 5 it can be observed that the average data drop for nRTPS connection is higher for EDF compared to the proposed GTBA algorithm.

Figure 5: Data drop rate (kbps) v/s number of MS.

Simulation of Threshold Value (ξ)

Since the arrival rate and arrival time of packets vary, the bandwidth requests of each MS also vary, and it may not be possible to pair all the MS in the network. Simulations were carried out to determine the possibility of pairing MS in a network. Two MS can be paired only if the difference in their bandwidth requirements does not cross the threshold value, as explained in Step 2.3 of dynamic pairing. Simulations were carried out to calculate the threshold value. As per [17], for viewers to have an acceptable viewing experience, the RTPS traffic (MPEG video) packet loss should be less than 1 packet per minute. Assuming a maximum UDP packet size of 512 bytes, simulations were carried out to find the threshold value. Simulation results show that, at an average arrival rate of 100 kbps with 40 MS carrying RTPS traffic, the threshold value (ξ) should be less than 60 bits. Thus, two MS can be paired if their bandwidth requirements differ by 0–60 bits per frame.

Pairing Probability

With the difference in bandwidth requirement ranging from 0–60 bits, simulations were carried out to find the pairing probability between the MS. Figure 1 shows the pairing probability versus the number of MS. From Figure 1 it can be observed that initially the number of MS forming pairs is low, since the chance of finding a suitable MS matching the threshold criterion is also low. However, as the number of MS increases, the probability of pairing increases, since an MS can find another MS whose bandwidth requirements are within the threshold value ξ.

Simulation of P Value

Once an MS is paired with another MS, they continue to generate data. Mobile stations remain a pair till they go out of synchronization, i.e. the state in which the difference in the amount of data generated equals the size of one UDP packet. Once they go out of synchronization, the mobile stations need to be re-paired, as they may no longer be an appropriate pair. “P” represents the number of frames for which two MS remain a pair. With an average data rate of 100 kbps each for RTPS and nRTPS connections, 20 rounds of simulations were carried out, with each MS in a pair generating data at a rate of 0 to 200 kbps. Simulation results show that the average number of frames for which two MS remain a pair is 75; after 75 frames, the mobile stations need to be re-paired.

Re-pairing of MS

With the value of P set to 75 and the number of MS set to 40, simulations were carried out to find the number of MS that can be re-paired after P (i.e. 75) frames. Simulations were carried out with different values of the smoothing factor α (Eq. (13)). Figure 6 shows the simulation results. From Figure 6 it is clear that when the value of α is small, pairs are formed based on historic data arrival, which results in the formation of fewer pairs. As the value of the smoothing factor α increases, prominence is given to instantaneous bandwidth requirements, which leads to more MS being paired; however, it may not result in appropriate pairs.

Figure 6: Total re-pairs (%) v/s smoothing factor α.

Since data arrival at the RTPS and nRTPS queues can be infrequent, simulations were carried out with scenarios where MS do not have packets to transmit in certain frames. A comparison between EDF and the proposed GTBA is performed when each MS generates packets of random sizes and each packet arrives at a random time. Figure 7 shows the simulation results for the average number of ULMAP IEs per frame (rounded to the next integer). From Figure 7 it is clear that GTBA has fewer ULMAP IEs per frame compared to EDF.

Figure 7: Average ULMAP IE per frame v/s number of MS.

With random data arrivals and each packet having a random size, simulations were carried out to find the uplink overhead for every bit of uplink data transmitted. Figure 8 shows the uplink overhead for every bit of uplink data transmitted. It can be seen that GTBA has lower overhead compared to EDF.

Figure 8: Uplink overhead for every bit of uplink data v/s number of MS.

Figure 9 shows the nRTPS throughput for EDF and GTBA. Again, the inter-arrival time between packets is random and the size of packets is also random. Initially the performance of EDF is marginally better compared to GTBA; however, as the number of MS increases, the saved bandwidth contributes to improved throughput for GTBA. EDF performs better when the number of MS is small, as each MS does not generate packets all the time and hence far fewer MS request bandwidth, so the number of ULMAP IEs is small for EDF when there are few MS in the network. However, as the number of MS increases, even if data generation is infrequent, the overall bandwidth requirement per frame increases. This results in more MS being allocated bandwidth per frame, which in turn adds frame overhead and hence decreases throughput.

Figure 9: Average throughput (kbps) v/s number of MS.

Figure 10 shows the average data drop rate for nRTPS connections. Initially, EDF performs marginally better when there is sufficient bandwidth. As the number of MS increases, GTBA records an improved performance compared to EDF.

Figure 10: Data drop rate (kbps) v/s number of MS.

Conclusion

In this paper a unique, user-centric, Game Theory-based bandwidth allocation algorithm is proposed. The paper aims to improve the overall quality of service by allocating bandwidth in an efficient manner and thereby reducing the frame overhead. The MS within the network under a BS are paired together. Let {MSi, MSj} be one such pair. In a frame, say frame X, one MS from the pair (say MSi) is allocated bandwidth equal to its two-frame requirement (i.e. its bandwidth need for frame X and frame X+1). In frame X+1, MSi does not participate in the bandwidth allocation process; instead, the other MS of the pair, MSj, is allocated bandwidth equal to its need for frame X+1 and frame X+2. This method of alternate bandwidth allocation continues as long as MSi and MSj remain a pair. Hence, in any given frame, only one MS from a pair participates in the bandwidth allocation process. This reduces the number of ULMAP entries in the frame, and the bandwidth thus saved is ploughed back to the MS to improve their overall QoS.

The paper also proposes a packet scheduling algorithm that can be used at each MS to schedule the bandwidth among the active connections. The paper also describes a re-pairing algorithm that lets BS re-pair MS at regular intervals of time within the network. The frequency of re-pairing has also been described and analyzed in detail.

Simulation results show that by employing the proposed algorithm the frame overhead is roughly reduced by 50%. An 8-10% improvement in throughput is observed for each MS by using the proposed algorithm compared to the existing algorithms like EDF. Assuming that the improved throughput results in 100% Goodput, an 8-10% reduction in data drop was observed. Simulations were carried out to analyze the impact of re-pairing and the re-pairing rate for the MS. Simulation results reveal a healthy re-pairing rate and higher throughput compared to EDF even after regular re-pairing.

Method

This research work does not involve testing on humans or animals. Since the experimental results were obtained using network simulators, which in no way relate to humans or animals, explicit approval from any specific body has not been sought.

Authors’ Contributions

SPA carried out the conceptualization, jointly drafted the manuscript and reviewed the manuscript. NPK carried out the conceptualization, background study, simulated the concept, jointly drafted the manuscript and reviewed the manuscript. All authors read and approved the final manuscript.

Abbreviations

BE:

Best Effort

BS:

Base Station

EDF:

Earliest Deadline First

GPSS:

Grant Per Subscriber Station

GTBA:

Game Theory based Bandwidth allocation Algorithm

MRTR:

Minimum Reserve Traffic Rate

MS:

Mobile Station

nRTPS:

Non-real Time Polling Service

QoS:

Quality of Service

RTPS:

Real Time Polling Service

UGS:

Unsolicited Grant Service

UL:

Uplink

WRR:

Weighted Round Robin.

References

1. IEEE 802.16 WG: IEEE Standard for Information Technology – Telecommunications and Information Exchange between Systems – LAN/MAN Specific Requirements, Part 16: Air Interface for Fixed Broadband Wireless Access Systems. 2004.

2. IEEE 802.16e WG: IEEE Standard for Information Technology – Telecommunications and Information Exchange between Systems – LAN/MAN Specific Requirements, Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems. 2005.

3. IEEE 802.16 WG: IEEE Standard for Information Technology – Telecommunications and Information Exchange between Systems – LAN/MAN Specific Requirements, Part 16: Air Interface for Fixed and Mobile Broadband Wireless Access Systems. 2009.

4. Wang Y, Chan S, Zukerman M, Harris RJ: Priority-based fair scheduling for multimedia WiMAX uplink traffic. Proc. IEEE International Conference on Communications 2008, 301–305.

5. Moraes D, Maciel PD: Analysis and evaluation of a new MAC protocol for broadband wireless access. Proc. International Conference on Wireless Networks, Communications and Mobile Computing 2005, 188–193.

6. Lilei W, Huimin X: A new management strategy of service flow in IEEE 802.16 systems. Proc. IEEE Conference on Industrial Electronics and Applications 2008, 1716–1719.

7. Niyato D, Hossain E: Queue-aware uplink bandwidth allocation for polling services in 802.16 broadband wireless networks. Proc. IEEE Global Telecommunications Conference 2005, 5–9.

8. Chen J, Jiao W, Wang H: A service flow management strategy for IEEE 802.16 broadband wireless access systems in TDD mode. Proc. IEEE International Conference on Communications 2005, 3422–3426.

9. Cicconetti C, Lenzini L, Mingozzi E, Eklund C: Quality of service support in IEEE 802.16 networks. IEEE Netw 2006, 20(2):50–55. doi:10.1109/MNET.2006.1607896

10. Stolyar AL, Ramanan K: Largest weighted delay first scheduling: large deviations and optimality. Ann Appl Probab 2001, 11(1):1–48.

11. Kim DH, Kang CG: Adaptive delay threshold-based priority queueing packet scheduling for integrated services in mobile broadband wireless access system. IEEE Comm Lett 2008, 12(4):241–243.

12. Wong WK, Tang H, Guo S, Leung VCM: Scheduling algorithm in a point-to-multipoint broadband wireless access network. Proc. IEEE 58th Vehicular Technology Conference 2003, 1593–1597.

13. Viswanath P, Tse DNC, Laroia R: Opportunistic beamforming using dumb antennas. IEEE Trans Inf Theory 2002, 48(6):1277–1294. doi:10.1109/TIT.2002.1003822

14. Singh V, Sharma V: Efficient and fair scheduling of uplink and downlink in IEEE 802.16 OFDMA networks. Proc. IEEE Wireless Communications and Networking Conference 2006, 984–990.

15. Esmailpour A, Nasser N: Dynamic QoS-based bandwidth allocation framework for broadband wireless networks. IEEE Trans Veh Technol 2011, 60(6):2690–2700. doi:10.1109/TVT.2011.2158674

16. Matlab. http://www.mathworks.com/products/matlab/index.html. Accessed 23 Mar 2013.

17. Packet loss and jitter effects on IPTV. 2012. http://www.netrounds.com/whats-new/netrounds-news/entries/entry/packet-loss-and-jitter-effects-on-iptv


Author information


Corresponding author

Correspondence to Niharika P Kumar.

Additional information

Competing Interests

The authors declare that they have no competing interests.


Rights and permissions

Open Access This article is distributed under the terms of the Creative Commons Attribution 2.0 International License (https://creativecommons.org/licenses/by/2.0), which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
