
1 Introduction

Among all the issues a wireless network may face, interference and its impact on overall network performance are major ones, as interference occurs at a very low system layer and is unpredictable. To tackle these issues, three strategies are generally adopted: centralized decision-making intelligence, coordinated intelligence processing, and dynamic radio resource management.

A centralized decision-making intelligence processor, whether an appliance or distributed, software or hardware, is necessary to ensure, on the one hand, that the network-wide radio strategy is applied coherently among all participating nodes and, on the other hand, that the raw data used for decision-making is itself reliable. The need for dynamic radio resource management arises from the fact that the network configuration should react quickly to unpredictable changes in the radio environment and trigger the actions necessary to overcome any issue. These techniques may also help optimize overall network performance, for example by processing transmission opportunities.

Dynamic Radio Resource Management (RRM) is the focus of this work and considers mainly two classes of inputs: lower-layer physical inputs (RSSI, SNR, EIRP, noise, etc.) and upper-layer service inputs (MAC layer, TCP/IP services and applications, etc.). Complementary inputs may be provided by passive and active on-field site surveys. In addition to survey inputs, techniques such as Transmit Power Control, Dynamic Channel Assignment, Direction of Arrival estimation and Transmit Opportunity processing provide RRM with the data and tools necessary to overcome interference and adapt efficiently to changes.

Processing this huge amount of raw data and estimating the transmit opportunity at every point of the coverage area in a timely manner is very resource consuming. The cost depends on the number of points of interest in the coverage area and on the external or internal events that may require a recalculation of the whole topology. In this work, we discuss a Bézier surface technique for optimizing coverage area transmit opportunity map calculations in frequently changing, dense environments.

In the upcoming section, we present background on unified WIFI architectures, opportunity map calculations and NURBS surfaces. In Sect. 3, we state the problem more formally and in Sect. 4, we present our solution. Section 5 is dedicated to the evaluation and interpretation of our results. Finally, we conclude and outline future work.

2 Theoretical Background

This section gives an overview of the methods used to process transmit opportunity maps as they relate to unified WIFI architectures. It also introduces NURBS surfaces, a generalization of the Bézier surface technique, for the same purpose of establishing a transmit opportunity map.

2.1 Unified WIFI Architecture

Autonomous or standalone access point architectures do not scale well to networks with a high number of access points and mobile devices that require high-grade quality of service and security. Controller-based architectures are gradually replacing them. “Unified” architectures are managed centrally by a decision-making entity and integrate well, from a QoS and security point of view, with the other parts of an end-to-end network: LAN, WAN, etc. Industry implements such central decision-making processors mainly in three ways: physical controller-based, virtual controller-based, or access-point distributed. In the latter implementation, access points take over the controller role. The first two implementations require a controller, a virtual or physical appliance reachable by all network access points.

A good example of a unified WIFI architecture is Cisco's physical WLC 5500 series controller-based one. It defines a communication protocol, CAPWAP, used by access points to build protocol associations to the main controller. It defines another, over-the-air, communication protocol, OTA, for access points to exchange proprietary and standard patterns for management or control purposes. In addition, it integrates, at access point level, a set of on-chip proprietary RRM features, such as ClientLink and CleanAir, that monitor and measure radio environment characteristics and report them back to the controller via the already established CAPWAP tunnels. Based on this gathered information, the controller decides on channel assignments and corresponding power-level tunings network-wide. Its decisions conform to a pre-configured set of policies that define many configurable variables, such as acceptable signal strength levels, tolerable noise levels, the range of usable power levels, the range of usable channel frequencies, etc. The raw data gathered by controllers is then forwarded to both the Cisco Prime Infrastructure (PI) and Mobility Services Engine (MSE) platforms for different purposes, mainly interference spotting and analytics. Generated heat maps represent estimated interference occurrences and analytics on customers' presence at a covered location.

2.2 Transmit Opportunity Map

In this work, we focus mainly on transmit opportunity map calculations that represent co-channel interference and upper-layer SLA inputs. Co-channel interference is considered to have more impact on network performance than cross-channel interference, and is inversely proportional to transmit opportunity. SLA inputs, corresponding to upper-layer service QoS measurements, help minimize processing hysteresis and errors.

Methods to calculate radio coverage characteristics (interference, signal strength, etc.) can be categorized as:

  1. predictive: distance-based [1,2,3], barycenter-based [4], direction-based [5] or variants,

  2. experimental: based on on-field site surveys,

  3. or hybrid: a mix of both approaches.

In distance-based models, an estimate of the distance between two interfering nodes is necessary to evaluate the amount of interference. Barycenter-based methods reflect the weighted impact of each transmitting point on the others. They partition the overall coverage area into zones, each under the unique control of one node and dependent on the weight each transmitting point may carry over time, which is not the case for distance-based methods. Direction-based methods add more granularity and scalability to the previous methods by reconsidering transmission in areas that other models treat as no-talk zones. In addition, they hint more precisely at hidden interferers and maximize transmission opportunities by qualifying and multiplying transmit directions.

Experiment-based methods have the advantage of reflecting real measurements. They rely on specialized equipment and products such as AirMagnet or Ekahau. However, they lack the ability to adapt to dynamically changing radio environments over time and require significant human, financial and system resources.

To evaluate one method against another, three criteria may be used: accuracy of the transmit opportunity calculation at any point, calculation time, and recalculation time in case of a network change.

Hybrid methods seem to offer the highest accuracy level. Barycenter-based models seem to have an advantage over distance-based ones in terms of measurement accuracy. However, neither of them scales to frequently changing networks. In the upcoming sections, we explore a new, NURBS surface based method and evaluate how it scales to frequently changing, dense network deployments.

2.3 NURBS Surface

This subsection introduces NURBS surfaces, a technique widely used in computer-aided graphical design and a generalization of the well-known Bézier-Bernstein and B-Spline curves. A NURBS surface of degrees p and q is defined as:

$$\begin{aligned}&S(u,v) = \frac{\sum \limits _{i=0}^m \sum \limits _{j=0}^n N_{i,p}(u) N_{j,q}(v) w_{i,j} P_{i,j}}{\sum \limits _{i=0}^m \sum \limits _{j=0}^n N_{i,p}(u) N_{j,q}(v) w_{i,j}} \end{aligned}$$
(1)

where,

  • \(u, v\) — are variables in [0, 1]

  • \(P_{i,j}, w_{i,j}\) — are control points and corresponding weights

  • \(m, n\) — correspond to the numbers of control polygons and of control points

  • \(\{t_0, t_1, ..., t_{m+p+1}\}, \{t_0, t_1, ..., t_{n+q+1}\}\) — are knots that correspond to control polygons and points respectively

  • \(p, q\) — are the function degrees; each equals the number of knots of the corresponding vector (polygon knots, respectively point knots) minus the number of control polygons, respectively points, minus 1

\(N_{i,p}\) and \(N_{j,q}\) are the B-Spline basis functions that describe the surface control polygons, matched by the u variable, and the curve control points, matched by the v variable. They are given by the following formulas:

$$\begin{aligned}&N_{i,0}(t) = \begin{cases} 1 & \text{if } t_{i}\le t < t_{i+1} \\ 0 & \text{otherwise} \end{cases} \end{aligned}$$
(2)
$$\begin{aligned}&N_{i,j}(t) = \frac{t-t_i}{t_{i+j}-t_i} N_{i,j-1}(t) + \frac{t_{i+j+1}-t}{t_{i+j+1}-t_{i+1}} N_{i+1,j-1}(t) \end{aligned}$$
(3)

where t is a variable in [0, 1] and j is a strictly positive integer.
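For concreteness, the following Python sketch implements the recursion of Eqs. (2) and (3); the uniform knot vector and the evaluation point are arbitrary, illustrative choices (our simulations use Matlab, so this sketch is only illustrative).

    # Cox-de Boor recursion of Eqs. (2)-(3); the 0/0 cases that arise
    # with repeated knots are treated as 0, a standard convention.
    def basis(i, p, t, knots):
        """N_{i,p}(t) over the knot vector `knots`."""
        if p == 0:
            return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
        out = 0.0
        d = knots[i + p] - knots[i]
        if d > 0:
            out += (t - knots[i]) / d * basis(i, p - 1, t, knots)
        d = knots[i + p + 1] - knots[i + 1]
        if d > 0:
            out += (knots[i + p + 1] - t) / d * basis(i + 1, p - 1, t, knots)
        return out

    knots = [i / 7 for i in range(8)]   # uniform knot vector {t_0, ..., t_7}
    print(basis(2, 2, 0.5, knots))      # N_{2,2}(0.5)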

Our solution takes advantage of three core concepts introduced here:

  1. control points: a special set of coverage area points that influence the radio characteristics,

  2. weighting of control points: it allows classifying the impact of each control point on a specific measurement or, at upper layer, on the transmit opportunity,

  3. knots: their number and distribution can be tied to the accuracy of our calculations; the more knots we work with, the more accurate our processing of the environment attributes.

To ease our preliminary work, we adopt the following simplifications: the numbers of control polygons and control points are equal, the corresponding numbers of knots are equal, and the knot vector is uniform.
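Under these simplifications, Eq. (1) reduces to the following Python sketch; the 4x4 grid of control values (for instance, per-control-point transmit opportunity levels) and the unit weights are illustrative assumptions.

    # Rational surface of Eq. (1) with m = n, identical uniform knot
    # vectors for u and v, and degrees p = q = 2.
    def basis(i, p, t, knots):
        # Cox-de Boor recursion, Eqs. (2)-(3)
        if p == 0:
            return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
        out = 0.0
        d = knots[i + p] - knots[i]
        if d > 0:
            out += (t - knots[i]) / d * basis(i, p - 1, t, knots)
        d = knots[i + p + 1] - knots[i + 1]
        if d > 0:
            out += (knots[i + p + 1] - t) / d * basis(i + 1, p - 1, t, knots)
        return out

    def S(u, v, P, W, p, q, knots_u, knots_v):
        """Weighted rational sum of basis products, Eq. (1)."""
        num = den = 0.0
        for i in range(len(P)):
            for j in range(len(P[0])):
                b = basis(i, p, u, knots_u) * basis(j, q, v, knots_v) * W[i][j]
                num += b * P[i][j]
                den += b
        return num / den if den else 0.0

    P = [[0.2, 0.4, 0.4, 0.1],          # illustrative control values
         [0.3, 0.9, 0.8, 0.2],
         [0.3, 0.8, 0.7, 0.2],
         [0.1, 0.3, 0.3, 0.1]]
    W = [[1.0] * 4 for _ in range(4)]   # unit weights
    knots = [i / 6 for i in range(7)]   # uniform: 4 points, degree 2
    print(S(0.4, 0.6, P, W, 2, 2, knots, knots))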

3 Problem Statement

In this section, we formally state the problem. For the rest of this paper let us define,

  • \(A_s, A_d, A_c\) — the sets of network mobility devices (mobile stations and access points), of measurement points (referred to as distribution points in this paper) and of the points of the whole coverage area,

  • \(AP_i, STA_j\)\( i^{th}\) access point and \(j^{th}\) mobile station respectively,

  • \(P_{s,i}, P_{d,i}, P_{c,i}\) — the \( i^{th}\) points of the corresponding A sets on a 2-D plan grid,

  • O() — the transmit opportunity function, defined at every coverage area point.

Each \(AP_i\) has a communication path to a WLAN controller for radio measurement and reporting purposes. In addition, we consider that each \(STA_j\) has a virtual communication path to the controller, via its \(AP_i\) of attachment, for the same purpose. The \(P_{d,i}\) are defined as virtual grid points chosen to confine each \(STA_j\) or \(AP_i\) to the smallest possible square obtained by dividing the grid uniformly, horizontally and vertically. Note that the number of \(P_{d,i}\) depends on the distribution of the \(AP_i\) and \(STA_j\) on the grid and not on their number. \(A_s\) is a subset of \(A_d\), which is itself a subset of \(A_c\). To ease this preliminary work, we consider that the \(A_d\) set is sufficient for accurate O() calculations.
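The construction of the \(P_{d,i}\) is not spelled out above; the following Python sketch implements one plausible reading: subdivide the grid uniformly until every cell confines at most one mobility device, then keep the corners of the occupied cells.

    def distribution_points(devices, grid=1.0, max_div=64):
        """One plausible construction of the P_d points (our reading)."""
        for k in range(1, max_div + 1):
            cell = grid / k
            occupied = {}
            for x, y in devices:
                key = (min(int(x / cell), k - 1), min(int(y / cell), k - 1))
                occupied.setdefault(key, []).append((x, y))
            if all(len(pts) == 1 for pts in occupied.values()):
                # corners of occupied cells become distribution points
                return sorted({((cx + dx) * cell, (cy + dy) * cell)
                               for cx, cy in occupied
                               for dx in (0, 1) for dy in (0, 1)})
        raise ValueError("no uniform subdivision confines every device alone")

    # |P_d| tracks how the devices are spread, not how many there are.
    print(len(distribution_points([(0.10, 0.10), (0.80, 0.20), (0.70, 0.90)])))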

Let us define,

  • \(T_m, T_{ch}, T_{m,intf}, T_{m,cpu}\) — coverage map processing time, map change processing time, time to report a measure, and time to process a transmit opportunity,

  • \(T, T_n\) — whole cycle and n-cycle processing times,

  • \(N, M\) — the number of control points and of mobility devices respectively.

\(T_m\) and \(T_{ch}\) are required for complete map calculation and for recalculation in case of changes in the environment. A requirement for map calculation is that all control points report data in a timely manner. In addition, changes to the radio environment should be paced enough to allow a stable transition from an old map condition to a new stable one. \(T_m\) can be further divided into two components: \(T_{m,intf}\) and \(T_{m,cpu}\). \(T_{m,intf}\) corresponds to the situation where a control point is an AP or STA and includes the time necessary for the radio measurement at point level and for reporting it back to the controller. \(T_{m,cpu}\) corresponds to the estimated time for the transmit opportunity calculation by the algorithms at controller level. \(T_{ch}\) may correspond to a periodic interval at which a new map calculation is triggered, plus the time necessary for it. \(T_{m,intf}\) is a real-time measure and depends on the vendor's hardware and on the control-plane network condition. This work therefore focuses on the \(T_{m,cpu}\) and \(T_{ch}\) times. \(T_{m,cpu}\) depends on the algorithm used at controller level.

With distance-based or beam-direction-based algorithms, the transmit opportunity at any given control point other than an AP or STA can be approximated by a weighted function of the intersections with the radio patterns of all other transmitting sources. With barycenter-based algorithms, the transmit opportunity is instead related to the localization of the point within a discovered coverage zone.

Then,

$$\begin{aligned}&T_{m,cpu}(DISTANCE) = (N - M) * M * T_{intersection}&\end{aligned}$$
(4)
$$\begin{aligned}&T_{m,cpu}(BARYCENTER) = (M - 2) * T_{iteration} + N * T_{zone} \end{aligned}$$
(5)

where \(T_{intersection}\) is the time required to process an intersection between two transmit patterns in the coverage area, \(T_{iteration}\) is the time required for one algorithm iteration, and \(T_{zone}\) is the time required to locate a point in a defined zone and to deduce its corresponding transmit opportunity value. In the case of Delaunay-triangulation-based calculations, an iteration may correspond to a circumcircle center calculation.

The total time for a whole calculation cycle that includes \(k-1\) changes is equal to:

$$\begin{aligned}&T(DISTANCE) = T_{m,intf} + k * (N - M) * M * T_{intersection}&\end{aligned}$$
(6)
$$\begin{aligned}&T(BARYCENTER)= T_{m,intf}+ k * ((M - 2) * T_{iteration} + N * T_{zone}) \end{aligned}$$
(7)

The barycenter algorithm's calculation time is negligible compared with the distance-based one when N and M are low. For high N and M values, distance-based algorithms scale better. We also notice that barycenter algorithms perform better when M is roughly half of N. The upcoming section describes our solution, aimed at scaling barycenter-like processing times using generalized Bézier (NURBS) surface calculations. It also aims at reducing the processing time for the higher equivalent N and M numbers that are more relevant to distance-based processing.
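As a back-of-the-envelope illustration, Eqs. (6) and (7) can be evaluated in Python with \(T_{m,intf}\) and the unit costs \(T_{intersection}\), \(T_{iteration}\) and \(T_{zone}\) normalized to 1; the real crossover points depend on the relative magnitudes of these constants, so only growth trends are meaningful here.

    def t_distance(N, M, k, t_intf=0, t_inter=1):
        return t_intf + k * (N - M) * M * t_inter              # Eq. (6)

    def t_barycenter(N, M, k, t_intf=0, t_iter=1, t_zone=1):
        return t_intf + k * ((M - 2) * t_iter + N * t_zone)    # Eq. (7)

    for N, M in [(100, 10), (100, 50), (10_000, 100), (10_000, 5_000)]:
        print(f"N={N:>6} M={M:>5}: distance={t_distance(N, M, 3):,} "
              f"barycenter={t_barycenter(N, M, 3):,}")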

4 Our NURBS-Based Solution

Our transmit opportunity calculations are based on NURBS surfaces, the generalization of Bézier surfaces introduced above. Two algorithms serve this purpose. The first, NTO-CP, processes the transmit opportunity at every coverage area point. The second, NTO-CH, processes changes to the current transmit opportunity map.

4.1 NTO-CP Algorithm

The aim of this algorithm is to calculate the transmit opportunity at every coverage area point while:

  1. reducing the number of control points while still obtaining the same results,

  2. optimizing the number of knots corresponding to the variables u and v.

Let A be the set of \(P_{i,j}\) control points. This set is initialized to the mobility devices, APs and STAs, as they have the ability to report raw radio measurements and are, at the same time, the main sources of interference. To ease this work, let the \(P_{i,j}\) values correspond to the nodes' transmission power levels. We then reorder the A set by increasing power level, weight the first node at maximum, calculate S() at all the other nodes of the A set and compare the results to the reported measures. If the reported measures after and before the weight change are the same, and if S() at these points is unchanged, then we move this node from the A set to a new set, \(A_{ineff}\), of “ineffective” control points. \(A_{ineff}\) defines a set of monitoring points: coverage area points that have no “effective” control over the transmit opportunity map but still monitor the radio interface and report raw radio data. If S() does not match the reported measures, we define a new hysteresis value, ERR, or variance, which can be seen as a calibration of the actual transmit opportunity function calculations. \(A_i\) corresponds to the set of \(P_j\) nodes that are affected by weighting \(P_i\) at maximum.

For this preliminary work, we consider that the knot vector is the same for the u and v variables. We divide the coverage area into a maximum of three zones: one central and two suburban. In each zone, we elect a zone control point that matches these two criteria:

  1. it covers the whole corresponding zone,

  2. and it is the farthest point from the central zone or from the central zone control point.

Then we set the transmit power level of these three zone control points at maximum and turn the other control points to a monitoring state, which corresponds to the lowest transmit power level. Next, we initialize the number of knots to match the number of control points. If the reported measures at these points are the same as the calculated ones, we keep the current number; otherwise, we double it until the acceptable hysteresis is satisfied. Further work may consider knot distributions that differ per zone and per direction u or v. The following algorithm details this procedure:

(Algorithm a: NTO-CP initialization — effective control points, zone election and knot setup; listing not reproduced)
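Since the listing is not reproduced here, the following Python sketch shows the effective-control-point pass as we read it from the description above; the helpers report() and S_at(), the tx_power attribute and the ERR tolerance are hypothetical.

    ERR = 0.05  # assumed hysteresis / calibration tolerance

    def split_effective(A, weights, report, S_at, w_max):
        """Partition A into effective and 'ineffective' (monitor-only) points."""
        A = sorted(A, key=lambda p: p.tx_power)    # increasing power level
        ineffective = []
        for p in A:
            others = [q for q in A if q is not p]
            before = {q: report(q) for q in others}
            saved, weights[p] = weights[p], w_max  # weight p at maximum
            after = {q: report(q) for q in others}
            surface = {q: S_at(A, q) for q in others}
            weights[p] = saved                     # restore for the next node
            if all(abs(after[q] - before[q]) <= ERR and
                   abs(surface[q] - after[q]) <= ERR for q in others):
                ineffective.append(p)              # monitor-only point
        effective = [p for p in A if p not in ineffective]
        return effective, ineffective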

This procedure is executed only at system initialization and, subsequently, for any newly added control point, or at a sufficiently large periodic interval, to guarantee that the A set is refreshed with accurate information. The second part of the calculations focuses on the processing of control point zones, which is meant to optimize the number of knots for calculation accuracy. For the rest of this algorithm, let us define:

  • \(w_{avg, reported}\) — the average measure of \(P_i\) as reported by all other nodes,

  • \(P_{0, pseudo}\) — the nearest point of \(A-A_{ineff}\) to the processed pseudo-node,

  • \(Z_{0, pseudo}\) — the set of central zone control points covered by the pseudo control point at maximum weight,

(Algorithm b: zone and pseudo control point processing; listing not reproduced)
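In the same spirit, the following Python sketch illustrates the zone control-point election described earlier; the coverage predicate covers() is hypothetical, and we read “farthest from the central zone” as farthest from the central zone's centroid.

    import math

    def centroid(points):
        xs, ys = zip(*points)
        return (sum(xs) / len(xs), sum(ys) / len(ys))

    def elect_zone_control_points(zones, candidates, covers):
        """zones: {'central': [...], 'suburban1': [...], 'suburban2': [...]};
        covers(p, zone_pts) is a hypothetical radio coverage predicate."""
        ref = centroid(zones["central"])
        elected = {}
        for name, zone_pts in zones.items():
            covering = [p for p in candidates if covers(p, zone_pts)]
            # criterion 2: the farthest covering point from the central zone
            elected[name] = max(covering, key=lambda p: math.dist(p, ref))
        return elected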

Processing the effective control points requires weighting the nodes at maximum, one by one, and measuring the effect on the other control points. This weighting may be an increase of the transmit power level, a high QoS classification, etc., a mix of them, or any other variable that is under system management control and can maximize the opportunity function. Zone processing and the determination of pseudo control points help optimize the number of knots of S(u, v) for calculation accuracy purposes.

(Algorithm c: knot number optimization; listing not reproduced)

We double the initial number of knots until the S() calculation hysteresis is satisfied in each zone. We then update the current optimized number of knots to match the maximum among all defined zones.
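A minimal Python sketch of this knot-count search follows; evaluate(zone, n) is a hypothetical callback returning (calculated, reported) pairs at the zone's control points, and ERR is the assumed tolerance.

    ERR = 0.05  # assumed tolerance

    def knots_for_zone(zone, n_control_points, evaluate, max_knots=1 << 12):
        n = n_control_points               # initial knot count
        while n <= max_knots:
            if all(abs(c - r) <= ERR for c, r in evaluate(zone, n)):
                return n
            n *= 2                         # double until hysteresis satisfied
        raise RuntimeError("hysteresis not met within the knot budget")

    def optimized_knots(zones, n_control_points, evaluate):
        # zones may be processed in parallel; keep the per-zone maximum
        return max(knots_for_zone(z, n_control_points, evaluate)
                   for z in zones)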

4.2 NTO-CH Algorithm

A change may affect one zone, multiple zones or the entire network. The task of this algorithm is to scope the impact of a change so that only the control points of the pertinent sets are processed to reflect it. Note that the zones used in NTO-CH differ from the ones used in the NTO-CP algorithm, as their purpose is different. The idea here is to find an optimized number of zones that hints at the impact of a given change.

(Algorithm d: NTO-CH change scoping; listing not reproduced)

Not all changes are relevant; they may be classified into one of these categories: minor, medium, or high. To ease this preliminary work, only high-effect changes are considered and all other changes are treated as insignificant. A change may correspond to a newly reported RSSI or to any other relevant variable.
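A Python sketch of this scoping step is given below; the RSSI threshold, the change representation and the zone_of() and recompute() helpers are assumptions, not values from the paper.

    HIGH_DELTA = 10.0   # e.g. an RSSI swing, in dB (assumed threshold)

    def classify(delta):
        return "high" if abs(delta) >= HIGH_DELTA else "insignificant"

    def process_change(change_points, delta, zone_of, recompute):
        """change_points: control points reporting the change; zone_of(p):
        the NTO-CH zone containing p; recompute(z): re-run the opportunity
        calculation restricted to zone z."""
        if classify(delta) != "high":
            return                      # only high-effect changes processed
        for zone in {zone_of(p) for p in change_points}:
            recompute(zone)             # scope processing to impacted zones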

4.3 Algorithm Time

The calculation time corresponds to one initialization and \(k-1\) changes. It is composed of the effective control point processing time, the optimum knot number processing time, and the change processing time. Effective control point processing is unique to this method and requires running S() \(M*(M-1)\) times. The optimum knot number processing time corresponds to S() calculations at every zone control point and to iterations until the required accuracy is achieved. If \(\alpha \), \(\mu \) and \(\beta \) are the number of iterations, the number of zones and the number of ineffective control points respectively, this time is given by \(\frac{\alpha }{\mu } (M - \beta )\). Note that the first purpose of zoning is to allow parallel processing in every zone.

$$\begin{aligned}&T(NURBS) = M^2 - (1 +\frac{k- \alpha }{\mu })M + k \eta N + (k \frac{\mu + \beta }{\mu }- \alpha \frac{\beta }{\mu }) \end{aligned}$$
(8)

where \(\eta \) is a value that represents the scope of the change.

We apply these numerical simplifications: \(\alpha =1\), \(\mu =3\), \(\eta =0.25\). We set \(\alpha =1\) because one iteration is sufficient for an acceptable accuracy with regard to the other algorithms, and because the number of knots can be set to a higher level at initialization; this only influences the S() calculation time when the number of knots is higher, which is considered insignificant in this paper but may be revisited in further work if necessary. \(\mu =3\) mainly allows parallel processing when computing zones and may correspond to the number of non-overlapping channels. \(\eta =0.25\) assumes that most changes affect only specific zones and do not span multiple zones. These choices are sufficient for this preliminary work, which shows the advantages of our solution's design.
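With these simplifications, Eq. (8) can be evaluated directly; in the Python sketch below, \(\beta = 0\) (no ineffective control points) is an extra assumption and unit costs are normalized to 1, so only growth trends are meaningful.

    def t_nurbs(N, M, k, alpha=1, mu=3, eta=0.25, beta=0):
        return (M**2 - (1 + (k - alpha) / mu) * M
                + k * eta * N
                + (k * (mu + beta) / mu - alpha * beta / mu))  # Eq. (8)

    for N, M, k in [(1_000, 50, 5), (1_000, 200, 5), (10_000, 500, 5)]:
        print(f"N={N:>6} M={M:>4} k={k}: T_NURBS ~ {t_nurbs(N, M, k):,.0f}")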

5 Results Evaluation and Conclusion

At this stage, our evaluation is based on Matlab simulations of our solution and of the distance- and barycenter-based ones. A next step would be to implement this solution on Linux-based APs and STAs in environments that include working vendor solutions. We consider that, at that implementation level, we could obtain conclusive results with regard to competing and other related work.

First, in Fig. 1, we vary N and M and observe the variation of the processing times of the three methods. For higher N values, our solution starts performing better than both other solutions when M exceeds roughly \(10\%\) of N. At higher M values, the barycenter-based solution no longer scales. For lower M and N values, the barycenter-based solution is better than our solution and the distance-based one; it is thus more suitable for lower-density, sparse deployments. For high-density deployments, which correspond to higher N values with lower M values, our solution starts performing better than the barycenter-based one when N is roughly four times M.

Fig. 1. Opportunity processing time comparison between our solution and the distance- and barycenter-based algorithms (figure not reproduced)

Next, we evaluate the impact of the number of changes on our solution and on the two other solutions. Figure 2 shows the results for the three algorithms. The impact of changes on the barycenter-based algorithm is very noticeable for higher numbers of distribution points and an increasing number of mobility devices. The distance-based algorithm seems to depend solely on the number of affecting changes. Our algorithm seems to depend slightly on both the number of changes and the number of mobility devices. We clearly see that, for a lower number of mobility devices, mainly access points, and a significant number of distribution points, our solution performs better in case of frequent changes.

Fig. 2. Impact of the number of changes on the algorithms: (a) the barycenter-based algorithm; (b) the distance-based one; (c) our solution (figure not reproduced)

Based on the evaluation results, we conclude that:

  1. our solution performs better than both other solutions when the number of mobility devices exceeds roughly \(10\%\) of the number of distribution points,

  2. our solution performs better than the barycenter-based one when N is roughly four times M.

Further work may consider the following elements to evaluate our solution more deeply against the other ones:

  1. accuracy of the calculations,

  2. feasibility of parallel processing of multiple opportunity functions using our solution's model,

  3. zoning based on different point distributions, knot numbers...

  4. handling of obstacles in the radio environment and its impact on processing accuracy and time,

  5. the impact of dynamic or moving obstacles on calculation accuracy and time.