Wireless MAC Using Non-Cooperative Analytics Methods

This chapter illustrates the use of non-cooperative games to optimize the performance of protocols for contention-prone shared medium access in wireless networks. Specifically, it proposes and analyses a set of new utility functions conforming to a recently proposed generic game-theoretic model of contention control in wireless networks [5]. The functions ensure that the game has a unique non-trivial Nash equilibrium. Simulations carried out for IEEE 802.11 style networks indicate that the aggregate throughput of the network at the Medium Access Control (MAC) layer improves while the collision overhead is reduced.

Dr Francesco Dergano
10 min read · Dec 27, 2020

Introduction

In most wireless networks, including ad-hoc and sensor networks, access to the shared wireless medium is random-access based. In the absence of a central regulating authority, it is only natural that multiple wireless nodes will attempt to access the medium simultaneously, resulting in packet collisions. IEEE 802.11 (hereafter briefly called 802.11) is the most commonly followed standard in wireless networks. It defines a distributed mechanism called the distributed coordination function (DCF) to resolve contention among the contending nodes. It involves channel sensing to assess the state of the shared medium and adjusting the channel access probability accordingly to minimize the chances of collision. In 802.11, the channel access probability is determined by a contention window (CW) maintained by each node. DCF uses a binary feedback signal (collision or no collision) and sharply updates the CW using binary exponential backoff to adapt to the contention.
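As a concrete illustration, the binary exponential backoff rule just described can be sketched in a few lines of Python. This is our own minimal sketch, not code from the standard; the `CW_MIN`/`CW_MAX` values assume the 802.11b DSSS defaults (32 and 1,024) used later in this chapter:

```python
import random

CW_MIN, CW_MAX = 32, 1024  # 802.11b DSSS PHY defaults assumed here

def next_backoff(cw: int, collided: bool) -> tuple[int, int]:
    """One step of DCF-style binary exponential backoff.

    On a collision the contention window doubles (capped at CW_MAX);
    on a success it resets to CW_MIN. The backoff counter (slots to
    wait before the next attempt) is drawn uniformly from [0, cw - 1].
    """
    cw = min(2 * cw, CW_MAX) if collided else CW_MIN
    slots = random.randrange(cw)
    return cw, slots

cw = CW_MIN
for collided in (True, True, False):   # two collisions, then a success
    cw, slots = next_backoff(cw, collided)
print(cw)  # reset to CW_MIN (32) after the successful transmission
```

Note how the success branch discards all contention history, which is exactly the criticism developed in the next paragraph.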

However, various studies [1-4] report that this contention control scheme results in a drastic reduction in throughput as the number of contending nodes increases. Every new transmission begins with the same channel access probability, and thus the history of the contention level in the network is not used. Moreover, doubling the contention window after every failed transmission only reduces the chances of transmission by the nodes that were already unsuccessful.

These inferences indicate that instead of using drastic update strategies, it is better to steadily guide the system to a state that is optimal with respect to the current contention level in the network. Since this state is optimal, it is sustainable and has high throughput, greater fairness and sparing collisions. This requires an analytical framework where these desirable features can be mathematically characterized. Towards this end, we follow [5-8] and model the nodes as selfish players in a non-cooperative game [9]. Unlike common studies [10, 11], we do not reverse-engineer DCF as a game but use game theory as a tool for optimization. We define utility functions reflecting the gain from channel access and the loss from collision. The strategy set of each node is the set of allowable channel access probabilities of the node. We next use gentle update strategies that drive the network to its Nash equilibrium, from which no player has an incentive to unilaterally deviate. This characterizes the desired stable operating point or the optimal state of the network. A continuous feedback signal is used to assess the contention level.

The main contribution of this work is to propose new utility functions that allow a large range of channel access probabilities (thus ensuring good performance in a wide spectrum of network sizes) and result in a unique non-trivial Nash equilibrium. Non-triviality and uniqueness ensure that the equilibrium is efficient and leads to high short-term fairness. As we show through extensive simulations, the resulting design is able to provide far better results than the DCF used in 802.11. Throughput gets higher, and collision overhead and time slots wasted in collisions are drastically reduced in the new design. The nodes do not need to know the total number of nodes in the network, nor is any message passing necessary. A completely distributed mechanism that allows each node to behave selfishly to maximize its own payoff is used to ensure a socially beneficial state.

The chapter is organized as follows: Section 18.2 provides a background of this study, journeying through the recent past of game-theoretic models of medium access control in wireless networks. We also point out the differentiation and novelty of our work as regards related studies in the literature. Section 18.3 presents the proposed game model and its many properties. Section 18.4 investigates distributed mechanisms to achieve the equilibrium of the game in real networks. The model is evaluated via simulations in Section 18.5. After a brief discussion in Section 18.6, conclusions follow in Section 18.7.

Background

Game theory [9] provides a tool to study situations in which a set of rational decision makers take specific actions that have mutual, possibly conflicting, consequences. A game models an interaction among parties (called the players) who are rational decision makers with possibly conflicting objectives. Each player has a set of actions called its strategies, and with each strategy is associated a payoff function. Rationality demands that each player maximize its own payoff function. Non-cooperative games are commonly used to model problems in medium access control in telecommunication networks. In such a game, the solution concept is a notion called a stable set or Nash equilibrium, which identifies a set of strategies for all the participating players from which no player has an incentive to unilaterally deviate, since no unilateral deviation increases the deviating player's payoff.

In [12] the authors model the nodes in an Aloha network as selfish players in a non-cooperative Aloha game. They assume that the number of backlogged users is always globally known. This limitation is addressed in [13] where only the total number of users is known. More recent research includes modeling the DCF in 802.11 in a game-theoretic framework [11], and reverse engineering exponential backoff as a strategy [10] of non-cooperative games.

In contradistinction to these works, we do not explicitly reverse-engineer 802.11 or DCF into a game model but use ideas from non-cooperative game theory to optimize the performance of 802.11. We consider each node in a wireless network as having a set of strategies, which are its channel access probabilities. Unlike common practices in the optimization of 802.11, such as [4], we do not assume that each node knows the number of users in the network. This is more reflective of practical situations where such knowledge is difficult to acquire. Non-game-theoretic approaches that do not depend on the nodes' knowing the network size include the ones presented in [2] and [3]. We use selfishness to achieve optimized system-wide performance. In [5, 6, 8] the authors use game theory to study and optimize 802.11. Although they remain our motivation, their illustrative utility functions are different from ours. Their utility functions are well-behaved in a much more restricted domain than ours. Moreover, as we show, our utility functions give far better results for large network sizes. Supermodular games for contention control are presented in [7].

Game-Theoretic Model of Medium Access Control

The system we consider is a network of N nodes that are all able to hear one another. To facilitate analysis we will adopt a description of our access mechanism in terms of channel access probabilities. It has been shown in [1] that in a saturated regime the constant channel access probability p relates to the corresponding contention window.
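The equation itself is not reproduced in this excerpt, but the probability range quoted in Section 18.5 ([0.00195, 0.06061] for contention windows between 32 and 1,024) is consistent with the relation p = 2/(W + 1) for a constant window W, the form commonly derived for p-persistent 802.11. A quick sketch under that assumption:

```python
def access_probability(w: int) -> float:
    """Saturation channel access probability for a constant contention
    window w, assuming the relation p = 2 / (w + 1) (the p-persistent
    approximation; the chapter's own equation is not shown in this
    excerpt)."""
    return 2.0 / (w + 1)

print(round(access_probability(32), 5))    # 0.06061
print(round(access_probability(1024), 5))  # 0.00195
```

These two endpoints match the domain of channel access probabilities reported for GDCF and DCF in the performance evaluation below.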

Distributed Mechanisms to Achieve Nash Equilibrium

The non-cooperative game G is played repeatedly by the nodes, that is, it is a multi-stage game. A stage may be a single transmission or a sequence of K transmissions for a fixed K > 1.

Suppose that each node, after playing a round, observes the cumulative effect (in the sense of conditional collision probability) of the actions of all players in the previous round. This knowledge may now be used by the node to select its strategy for the next round. Based on how this knowledge is used, there are many common techniques to push the system to its Nash equilibrium. Two of the more popular ones are:

  • Best response: This is the most obvious mechanism: at each stage, every player chooses the strategy that maximizes its payoff given the actions of all other players in the previous round.
  • Gradient play: In this case, each player adjusts its persistence probability gradually in the gradient direction suggested by its observations of the effect of the other players' actions.
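A gradient-play update can be sketched as follows. Everything specific here is illustrative rather than taken from the chapter: the payoff U(p) = a·log(p) − q·p, the step size, and the clipping range (which assumes the probability interval induced by contention windows between 32 and 1,024) are our assumptions, with q the conditional collision probability observed in the previous round:

```python
P_MIN, P_MAX = 0.00195, 0.06061  # assumed feasible range (CW 1024 .. 32)

def gradient_play(p: float, q: float, step: float = 1e-4,
                  a: float = 1e-3) -> float:
    """One gradient-play update of a node's access probability.

    Assumes the illustrative payoff U(p) = a*log(p) - q*p (NOT the
    chapter's actual utility function), whose gradient is a/p - q.
    The node nudges p in the gradient direction and clips it to the
    feasible range.
    """
    grad = a / p - q               # dU/dp for the assumed payoff
    p = p + step * grad
    return min(max(p, P_MIN), P_MAX)
```

Iterating this update, p drifts toward the interior fixed point a/q when that point is feasible: the node backs off when collisions are frequent (large q) and probes more aggressively when they are rare, with no knowledge of the network size required.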

Performance Evaluation

In this section, we report results from numerical simulations performed to analyze the model with our utility functions. The simulation results are obtained in a saturation regime. The performance is compared with the standard DCF in basic access 802.11b. We measure the channel access probabilities, the aggregate throughput of all nodes, the collision overhead incurred and the average number of time slots wasted in collisions per successful transmission.

Channel Access Probabilities

Figure 18.1 shows the variation of the probability with which a node accesses the wireless channel in the saturation regime in GDCF and DCF as the network size increases. In both protocols, the probability lies in the interval [0.00195, 0.06061], since the contention window varies from 32 to 1,024. Recollect from Section 18.3 that the probability with which any given node in GDCF accesses the channel at the NE is identical for all nodes. The channel access probabilities are always observed to be higher in DCF than in GDCF. This has an important consequence, as will be evident in the ensuing discussion.

Aggregate Throughput

The aggregate throughput (see Fig. 18.2), which measures the total throughput of all nodes under saturation conditions, is higher in DCF when the network size is small but falls far below that of GDCF as the network size grows. This is because the channel access probabilities are higher in DCF. In a small network, collisions are few and so higher channel access probabilities translate to higher throughput. But as the network size grows, collisions increase if all nodes aggressively access the channel, triggering a drastic reduction in throughput. Since channel access probabilities are lower in GDCF, it supersedes DCF as the network size grows. Consequently, large networks of wireless nodes achieve much greater throughput if they run GDCF instead of DCF.
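The crossover described above already appears in a much simpler slotted model than the chapter's actual throughput analysis. If n saturated nodes each access a slot independently with probability p, the slot carries a successful transmission with probability n·p·(1 − p)^(n−1), which rewards an aggressive p only while n is small:

```python
def success_probability(n: int, p: float) -> float:
    """Probability that a slot carries exactly one transmission when n
    saturated nodes each access with probability p (a simplified
    slotted model, not the chapter's exact throughput expression)."""
    return n * p * (1 - p) ** (n - 1)

# An aggressive access probability helps a small network...
print(success_probability(5, 0.06) > success_probability(5, 0.002))    # True
# ...but hurts a large one, where a conservative p wins:
print(success_probability(200, 0.002) > success_probability(200, 0.06))  # True
```

The expression is maximized near p ≈ 1/n, which is why a protocol that steers p down as contention rises outperforms one that keeps access probabilities high.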

Collision Overhead

We observe from Fig. 18.3 that the conditional collision probability is higher for DCF than for GDCF. This is expected, as the channel access probabilities are lower in GDCF, resulting in reduced medium contention even as the number of nodes increases. Indeed, GDCF is designed with the aim of keeping the price of contention low.
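In the same simplified slotted model as before (our illustration, not the chapter's analysis), the conditional collision probability seen by a transmitting node is 1 − (1 − p)^(n−1): the chance that at least one of the other n − 1 nodes transmits in the same slot. Lower access probabilities keep this quantity small even for large n:

```python
def collision_probability(n: int, p: float) -> float:
    """Conditional collision probability for a transmitting node: at
    least one of the other n - 1 nodes accesses the same slot
    (simplified slotted model)."""
    return 1 - (1 - p) ** (n - 1)

q_aggressive   = collision_probability(100, 0.06)   # DCF-like access rate
q_conservative = collision_probability(100, 0.002)  # GDCF-like access rate
assert q_conservative < q_aggressive
```

With 100 nodes, the aggressive rate makes a collision nearly certain while the conservative rate keeps it below one in five, mirroring the gap observed in Fig. 18.3.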

Slots Wasted in Collisions

In 802.11, a large number of slots are wasted due to collisions. This seriously affects the performance and lifetime of power-constrained devices in ad-hoc and sensor networks. Figure 18.4 makes clear that the number of slots wasted in collisions for every successful transmission is much higher in DCF than in GDCF, making the latter attractive in these networks.

Discussion

The motivation for this work is [5], so we make a brief comparison with it in this section. Our utility function produces a unique non-trivial NE for a wide range of vi and wi, while the one in [5] places stricter constraints on vi once wi is specified. Our utility functions thus admit a much larger game space, which makes them superior for widely varying network sizes, since the channel access probability should be very small for very large networks and high for small ones. Table 18.1 presents a performance comparison as regards throughput and collisions for large networks. Here, as in our experiments, we vary contention windows from 32 to 1,024, the default for the DSSS PHY in the 802.11 standard, while in [5] (call it U Chen) the window ceiling is limited to 256 if the base is 32 and only powers of 2 are chosen. As expected, with these settings, our utility function exhibits particularly better performance for large network sizes.

Conclusion

This chapter describes an elegant game-theoretic model aimed at optimizing the aggregate throughput and distributing it fairly among the contending nodes in an 802.11 network, thus inducing a desirable state of the entire network. It also drastically brings down the collision overhead, reducing energy loss due to failed transmissions. We believe it contends as a serious and superior alternative to the traditional distributed coordination function in 802.11 MAC. The game-theoretic model uses the selfish behavior of the participating players to steer the network to desirable system-wide characteristics. Our utility functions assure the existence of a unique, non-trivial Nash equilibrium at which all nodes possess identical channel access probabilities. This results in fairly distributed high throughput with low collision overhead. Rigorous analysis and simulations are used to support the ideas. In the future we intend to simulate the algorithm in settings with bit errors to gather more performance measures. Further, we intend to investigate other utility functions with similar characteristics and explore whether this design can be used in real networks. We also intend to analyse the more generic features of the model.
