
1 Introduction

Cooperative communications can provide promising solutions to satisfy the ever-increasing demand for wireless data transmission [21] and has therefore been investigated from several perspectives. For instance, the communication throughput of broadcast channels has been explored with information-theoretic tools [2, 3, 6, 9, 13, 14, 22]. In particular, considering one transmitter and two receivers, Cover obtained the achievable rate regions [3], and this scheme was later generalized to broadcast channels with many receivers [2]. Furthermore, the authors in [14] defined the ergodic capacity regions of fading broadcast channels under different spectrum-sharing techniques and derived the optimal resource allocation policies that maximize these regions. The authors in [9] examined parallel Gaussian broadcast channels and obtained the optimal power allocation policies that achieve any point on the boundary of the capacity region subject to a sum-power constraint.

In the aforementioned studies, the authors considered Gaussian input signaling. However, many practical systems employ input signaling with discrete and finite constellations. In that regard, the authors in [7] studied two-user broadcast channels with arbitrary input distributions subject to an average power constraint and derived the optimal power allocation policies that maximize the weighted sum rate in the low and high signal-to-noise ratio regimes. Similarly, the authors in [15] considered the mutual information in parallel Gaussian channels and derived the optimal power allocation policies. In addition, the authors in [17] explored the optimal power policies that minimize the outage probability in block-fading channels when arbitrary input distributions are applied under both peak and average power constraints. These studies build on the fundamental relation between the mutual information and the minimum mean-square error (MMSE), which was initially established in [8].

In another line of research, cross-layer design has gained increasing interest, since many current wireless systems must support delay-sensitive applications. Consequently, quality-of-service (QoS) requirements in the form of delay and buffer overflow constraints were studied in wireless communications from data-link and physical layer perspectives. Effective capacity was proposed as a performance metric that provides the maximum constant data arrival rate at a transmitter buffer that can be supported by a given service (channel) process [24]. Subsequently, effective capacity was studied in several different communication scenarios [1, 4, 10, 16, 18, 20]. For instance, effective capacity was examined in one-to-one transmission scenarios in wireless fading channels with feedback information [20], interference- and delay-constrained cognitive radio relay channels [16], multiple-input multiple-output channels [10], and multi-band cognitive radio channels [4]. Moreover, the authors in [18] studied the effective capacity of point-to-point channels and derived the optimal power allocation policies that maximize the system throughput with arbitrary input distributions under an average power constraint. More recently, we explored the effective capacity regions of multiple access channels with arbitrary input distributions and identified the optimal power allocation policies under average transmission power constraints [11].

In this paper, in contrast to the aforementioned studies and our recent work [11], we focus on a broadcast channel scenario in which one transmitter employs arbitrarily distributed input signaling to convey data to two receivers under average power constraints and QoS requirements. We define the effective capacity region and provide an algorithm that obtains the optimal power allocation policies maximizing this region by exploiting the relation between the mutual information and the MMSE. Then, we characterize the optimal decoding regions in the space spanned by the channel fading power values. We finally corroborate our analytical results through numerical examples.

Fig. 1. Channel model: A two-user broadcast channel in which one transmitter communicates with two receivers. The transmitter performs superposition coding, while each receiver performs successive interference cancellation with a certain order. The decoding order depends on the channel conditions, i.e., the squared magnitudes of the channel fading coefficients, \(z_1 = |h_1|^2\) and \(z_2 = |h_2|^2\).

2 System Description

2.1 Channel Model

As shown in Fig. 1, we consider a broadcast channel scenario in which one transmitter communicates with two receivers. The transmitter is equipped with two data buffers, each of which stores data to be transmitted to the corresponding receiver. Depending on the instantaneous channel conditions, and employing a superposition coding strategy with a given decoding order, the transmitter sends data from both buffers in frames of T seconds. During data transmission, the input-output relation between the transmitter and the \(j^\text {th}\) receiver at time instant t is given by

$$\begin{aligned} y_{j}(t) =h_j(t)x_{j}(t)\sqrt{P_{j}(t)}+h_j(t)x_{m}(t)\sqrt{P_{m}(t)}+ w_j(t)\quad \text {for }t = 1,2,\cdots , \end{aligned}$$
(1)

where \(j,m\in \{1,2\}\) and \(j\ne m\). Above, \(x_{j}(t)\) and \(x_{m}(t)\) are the channel inputs at the transmitter and carry information to the \(j^\text {th}\) and \(m^\text {th}\) receivers, respectively, and \(y_j(t)\) is the channel output at the \(j^\text {th}\) receiver. Moreover, \(w_j(t)\) represents the additive thermal noise at the \(j^\text {th}\) receiver, which is a zero-mean, circularly symmetric, complex Gaussian random variable with unit variance, i.e., \({E}\{|w_j|^2\}=1\). The noise samples \(\{w_j(t)\}\) are assumed to be independent and identically distributed. Meanwhile, \(h_j(t)\) represents the fading coefficient between the transmitter and the \(j^\text {th}\) receiver, where \({E}\{|h_{j}|^{2}\}<\infty \). The squared magnitude of the fading coefficient is denoted by \(z_j(t)\), i.e., \(z_j(t) = |h_j(t)|^2\). We consider a block-fading channel and assume that the fading coefficients stay constant for a frame duration of T seconds and change independently from one frame to another. We further assume that \(h_1\) and \(h_2\) are independent of each other and perfectly known to the transmitter and both receivers. Hence, the transmitter can adapt the transmission power and rate for each receiver accordingly. In addition, the transmission power at the transmitter is constrained as follows:

$$\begin{aligned} \mathbb {E}_{t}\{P_1(t)\}+\mathbb {E}_{t}\{P_2(t)\}\le \overline{P}, \end{aligned}$$
(2)

where \(P_1(t)\) and \(P_2(t)\) are the instantaneous power allocation policies for the \(1^{\text {st}}\) and \(2^{\text {nd}}\) receivers, respectively, the channel inputs are normalized to have unit average energy, i.e., \(\mathbb {E}_{x_{j}}\{|x_{j}(t)|^{2}\}\le 1\) for \(j\in \{1,2\}\), and \(\overline{P}\) is finite. We finally note that the available transmission bandwidth is B Hz. In the rest of the paper, we omit the time index t unless it is needed for clarity.
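As a quick sanity check of the model in (1) and the constraint in (2), the sketch below simulates one frame of transmissions; the unit-power BPSK inputs, the Rayleigh fading statistics, and the example power split are assumptions made purely for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000              # number of symbol instants in the frame
P_bar = 1.0              # total average power budget, Eq. (2)
P1, P2 = 0.6, 0.4        # example power split with P1 + P2 <= P_bar

# Unit-power BPSK inputs (an assumption made only for this illustration).
x1 = rng.choice([-1.0, 1.0], size=n)
x2 = rng.choice([-1.0, 1.0], size=n)

# Rayleigh fading, h ~ CN(0, 1), and unit-variance noise, w ~ CN(0, 1).
h1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
w1 = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)

# Channel output at receiver 1, following Eq. (1).
y1 = h1 * x1 * np.sqrt(P1) + h1 * x2 * np.sqrt(P2) + w1

print(round(np.mean(np.abs(w1) ** 2), 2))   # ≈ 1.0 (unit noise variance)
print(round(np.mean(np.abs(y1) ** 2), 1))   # ≈ 2.0 = E{|h|^2}(P1 + P2) + 1
print(P1 + P2 <= P_bar)                     # True: constraint (2) holds
```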

2.2 Achievable Rates

In this section, we provide the instantaneous achievable rates between the transmitter and the receivers given the input signal distributions. We can express the instantaneous achievable rate between the transmitter and the \(j^{\text {th}}\) receiver by invoking the mutual information between the channel inputs at the transmitter and the channel output at the \(j^{\text {th}}\) receiver. Given that \(h_{j}\) and \(h_{m}\) are available at the transmitter and the \(j^{\text {th}}\) receiver and that the \(j^{\text {th}}\) receiver does not perform successive interference cancellation, the instantaneous achievable rate is given as [5]

$$\begin{aligned} \mathcal {I}(x_{j};y_j)=\mathbb {E}\left\{ \log _{2}\frac{f_{y_j|x_{j}}(y_j|x_{j})}{f_{y_j}(y_j)}\right\} \quad \text {for }j \in \{1,2\}, \end{aligned}$$

where \(f_{y_j}(y_j)=\sum _{x_{j}}p_{x_{j}}(x_{j})f_{y_j|x_{j}}(y_j|x_{j})\) is the marginal probability density function (pdf) of the received signal \(y_j\) and \(f_{y_j|x_{j}}(y_j|x_{j})=\sum _{x_{m}}p_{x_{m}}(x_{m})\frac{1}{\pi } e^{-|y_j-h_jx_{j}\sqrt{P_{j}}-h_jx_{m}\sqrt{P_{m}}|^2 }\) is the conditional pdf of \(y_j\) given \(x_j\), obtained by averaging over the interfering input \(x_m\). On the other hand, if the \(j^{\text {th}}\) receiver performs successive interference cancellation, i.e., the \(j^{\text {th}}\) receiver initially decodes \(x_{m}\) and then decodes its own data, we have the achievable rate as follows:

$$\begin{aligned} \mathcal {I}(x_{j};y_j|x_{m})=\mathbb {E}\left\{ \log _{2}\frac{f_{u_{j}|x_{j}}(u_j|x_{j})}{ f_{u_j}(u_j)}\right\} , \end{aligned}$$

where \(u_{j}=y_j-h_{j}x_{m}\sqrt{P_{m}}\). Above, \(f_{u_j}(u_j)=\sum _{x_{j}}p_{x_{j}}(x_{j})f_{u_j|x_{j}}(u_j|x_{j})\) is the marginal pdf of \(u_j\) and \(f_{u_j|x_{j}}(u_j|x_{j})=\frac{1}{\pi } e^{-|u_j-h_jx_{j}\sqrt{P_{j}}|^2 }\).
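To make these expressions concrete, the following sketch estimates \(\mathcal {I}(x_{j};y_j|x_{m})\) by Monte Carlo for unit-power BPSK inputs; the constellation choice, the equiprobable symbols, and the parameter values are assumptions made only for illustration (the \(1/\pi \) factors in the pdfs cancel in the ratio):

```python
import numpy as np

def mi_bpsk_sic(z, P, n=400_000, seed=1):
    """Monte Carlo estimate (bits) of I(x_j; y_j | x_m) for unit-power
    BPSK after successive interference cancellation, with z = |h_j|^2."""
    rng = np.random.default_rng(seed)
    a = np.sqrt(z * P)                  # effective amplitude |h_j| sqrt(P_j)
    x = rng.choice([-1.0, 1.0], size=n)
    w = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) / np.sqrt(2)
    u = a * x + w                       # u_j = y_j - h_j x_m sqrt(P_m)
    # Conditional and marginal pdfs from the text; the 1/pi factors cancel.
    f_cond = np.exp(-np.abs(u - a * x) ** 2)
    f_marg = 0.5 * (np.exp(-np.abs(u - a) ** 2) + np.exp(-np.abs(u + a) ** 2))
    return np.mean(np.log2(f_cond / f_marg))

print(mi_bpsk_sic(1.0, 10.0))   # approaches the 1 bit/symbol BPSK limit
print(mi_bpsk_sic(1.0, 0.01))   # near zero in the low-SNR regime
```

At high SNR the estimate saturates at the 1 bit/symbol entropy of BPSK, while at low SNR it vanishes, as expected.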

We assume that each receiver, depending on the channel conditions and the encoding strategy at the transmitter, performs successive interference cancellation with a certain order whenever possible. For instance, if the decoding order is (j, m) for \(j,m \in \{1,2\}\) and \(j \ne m\), the \(j^{\text {th}}\) receiver decodes its own data by treating the signal carrying information to the \(m^{\text {th}}\) receiver as interference. On the other hand, the \(m^{\text {th}}\) receiver initially decodes the data sent to the \(j^{\text {th}}\) receiver, subtracts the encoded signal from the channel output, and then decodes its own signal. Recall that both receivers perfectly know the instantaneous channel fading coefficients, \(h_{1}\) and \(h_{2}\), and that the decoding order depends on the relation between the squared magnitudes of the channel fading coefficients, \(z_{1}\) and \(z_{2}\). Therefore, we denote by \(\mathcal {Z}\) the region in the \((z_1,z_2)\)-space where the decoding order is (2,1), and by \(\mathcal {Z}^{c}\), the complement of \(\mathcal {Z}\), the region where the decoding order is (1,2). Noting that the transmitter can set the transmission rates to the instantaneous achievable rates, we can express the instantaneous transmission rate for the \(1^{\text {st}}\) receiver as

$$\begin{aligned}&r_1(z_1,z_2)= {\left\{ \begin{array}{ll} \mathcal {I}(x_1;y_1|x_{2}),&{} \mathcal {Z},\\ \mathcal {I}(x_1;y_1), &{}\mathcal {Z}^{c}, \end{array}\right. } \end{aligned}$$
(3)

and the instantaneous transmission rate for the \(2^{\text {nd}}\) receiver as

$$\begin{aligned}&r_2(z_1,z_2)= {\left\{ \begin{array}{ll} \mathcal {I}(x_2;y_2), &{}\mathcal {Z},\\ \mathcal {I}(x_2;y_2|x_{1}),&{}\mathcal {Z}^{c}. \end{array}\right. } \end{aligned}$$
(4)

The decoding regions can be chosen so as to maximize the objective throughput.

2.3 Effective Capacity

Recall that the transmitter holds the data initially in the buffers. As a result, delay and buffer overflow concerns become of interest. Therefore, focusing on the data arrival processes at the transmitter, \(a_1\) and \(a_2\) in Fig. 1, we invoke effective capacity as the performance metric. Effective capacity provides the maximum constant data arrival rate that a given service (channel) process can sustain to satisfy certain statistical QoS constraints [24]. Let Q be the stationary queue length at any data buffer. Then, we can define the decay rate of the tail distribution of the queue length Q as

$$\begin{aligned} \theta =-\lim _{q\rightarrow \infty }\frac{\log _{e}\text {Pr}(Q\ge q)}{q}. \end{aligned}$$

Hence, for a large threshold \(q_{\max }\), we can approximate the buffer overflow probability as \(\text {Pr}(Q\ge q_{\max })\approx e^{- \theta q_{\max }}\). Larger \(\theta \) implies stricter QoS constraints, whereas smaller \(\theta \) corresponds to looser constraints. For a discrete-time, stationary and ergodic stochastic service process r(t), the effective capacity at the buffer is expressed as

$$\begin{aligned} -\lim _{t \rightarrow \infty } \frac{1}{\theta t}\log _e\mathbb {E}\{e^{-\theta S(t)}\}, \end{aligned}$$

where \(S(t) = \sum _{\tau = 1}^{t}r(\tau )\).
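To illustrate the definition, the sketch below evaluates its i.i.d. specialization, \(-\frac{1}{\theta TB}\log _e\mathbb {E}\{e^{-\theta TB r}\}\), by replacing the expectation with a sample mean; the Rayleigh fading statistics and the Gaussian-input rate \(r=\log _2(1+z)\) are assumptions made only for this example:

```python
import numpy as np

def effective_capacity(rates, theta, T=1.0, B=100.0):
    """Sample-mean version of the effective capacity (bits/s/Hz) of an
    i.i.d. block service process: -log E{exp(-theta*T*B*r)} / (theta*T*B)."""
    return -np.log(np.mean(np.exp(-theta * T * B * rates))) / (theta * T * B)

rng = np.random.default_rng(2)
z = rng.exponential(size=100_000)    # Rayleigh fading powers, E{z} = 1
rates = np.log2(1.0 + z)             # per-block rates (Gaussian inputs assumed)

ec_loose = effective_capacity(rates, theta=1e-4)
ec_tight = effective_capacity(rates, theta=0.1)
print(ec_loose)   # near the ergodic rate E{log2(1+z)} ≈ 0.86 bits/s/Hz
print(ec_tight)   # strictly smaller: stricter QoS shrinks the throughput
```

As \(\theta \rightarrow 0\) the metric approaches the ergodic rate, and it shrinks monotonically as the QoS exponent grows.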

Since the transmitter in the aforementioned model has two different transmission buffers, we assume that each buffer has its own QoS requirements. Therefore, we denote the QoS exponent for each queue by \(\theta _j\) for \(j\in \{1,2\}\). Noting that the transmission bandwidth is B Hz, the block duration is T seconds, and the channel fading coefficients change independently from one transmission frame to another, we can express the effective capacity at each buffer in bits/sec/Hz as

$$\begin{aligned} a_{j}=-\frac{1}{\theta _{j}TB}\log _e\mathbb {E}\left\{ e^{-\theta _{j}TBr_j(z_1,z_2)}\right\} , \end{aligned}$$
(5)

where the expectation is taken over the space spanned by \(z_1\) and \(z_2\). Now, utilizing the definition given in [19], we express the effective capacity region of the given broadcast transmission scenario as follows:

$$\begin{aligned} \mathcal {C}_E(\varTheta )=&\bigcup _{r_1,r_2}\Big \{{C(\varTheta )}\ge \mathbf{0}:{C_j(\theta _{j})\le a_{j}}\Big \}, \end{aligned}$$
(6)

where \(\varTheta = [\theta _1,\theta _2]\) is the vector of decay rates, \({C(\varTheta )}=[C_{1}(\theta _{1}),C_{2}(\theta _2)]\) is the vector of the arrival rates at the transmitter buffers, and \(\mathbf 0\) is the vector of zeroes.

3 Performance Analysis

In this section, we concentrate on maximizing the effective capacity region defined in (6) under the QoS requirements for each transmitter buffer and the total average power constraint given in (2). Notice that the effective capacity region is convex [19]. Hence, we can reduce our objective to maximizing the boundary surface of the region and express it as follows [23]:

$$\begin{aligned} \max _{\begin{array}{c} \mathcal {Z},\mathcal {Z}^{c}\\ \mathbb {E}\{P_{1}\}+\mathbb {E}\{P_{2}\}\le \overline{P} \end{array}} \lambda _1 a_{1} + \lambda _2 a_{2}, \end{aligned}$$
(7)

where \(\lambda _1,\lambda _2\in [0,1]\) and \(\lambda _1+\lambda _2=1\). To solve this optimization problem, we first obtain the power allocation policies for given decoding regions, \(\mathcal {Z}\) and \(\mathcal {Z}^{c}\), and then we determine the optimal decoding regions.

3.1 Optimal Power Allocation

Here, we derive the optimal power allocation policies that maximize the effective capacity region (7) given \(\mathcal {Z}\) and \(\mathcal {Z}^c\). In the following analysis, we provide the proposition that gives the optimal power allocation policies:

Proposition 1

  The optimal power allocation policies, \(P_1\) and \(P_2\), that maximize the expression in (7) are the solutions of the following equalities:

$$\begin{aligned}&\frac{\lambda _1}{\psi _1}e^{-\theta _1 TBr_1(\mathbf z )} \frac{dr_1(\mathbf z )}{d P_1} + \frac{\lambda _2}{\psi _2} e^{-\theta _2 TBr_2(\mathbf z )} \frac{dr_2(\mathbf z )}{d P_1} = \varepsilon , \end{aligned}$$
(8)
$$\begin{aligned}&\frac{\lambda _2}{\psi _2} e^{-\theta _2 TBr_2(\mathbf z )} \frac{dr_2(\mathbf z )}{d P_2} = \varepsilon , \end{aligned}$$
(9)

for \(\mathbf z =[z_1,z_2] \in \mathcal {Z}\), and

$$\begin{aligned}&\frac{\lambda _1}{\psi _1} e^{-\theta _1 TBr_1(\mathbf z )} \frac{dr_1(\mathbf z )}{d P_1} = \varepsilon , \end{aligned}$$
(10)
$$\begin{aligned}&\frac{\lambda _1}{\psi _1} e^{-\theta _1 TBr_1(\mathbf z )} \frac{dr_1(\mathbf z )}{d P_2} + \frac{\lambda _2}{\psi _2} e^{-\theta _2 TBr_2(\mathbf z )} \frac{dr_2(\mathbf z )}{d P_2} = \varepsilon , \end{aligned}$$
(11)

for \(\mathbf z \in \mathcal {Z}^c\). Above, \(\psi _1 = \mathbb {E}_\mathbf z \big \{ e^{-\theta _1 TBr_1(\mathbf z )} \big \}\) and \(\psi _2 = \mathbb {E}_\mathbf z \big \{ e^{-\theta _2 TBr_2(\mathbf z )} \big \}\), and \(\varepsilon \) is the Lagrange multiplier of the average power constraint in (2).

Proof

Omitted due to page limitations.

In Proposition 1, the derivatives of the transmission rates with respect to the corresponding power allocation policies are given as

$$\begin{aligned} \frac{dr_1(\mathbf z )}{d P_1}={\left\{ \begin{array}{ll} \frac{d\mathcal {I}(x_1;y_1|x_{2})}{d P_1},&{} \mathcal {Z},\\ \frac{d\mathcal {I}(x_1;y_1)}{d P_1}, &{}\mathcal {Z}^{c}, \end{array}\right. }\quad \text {and}\quad \frac{dr_2(\mathbf z )}{d P_2}={\left\{ \begin{array}{ll} \frac{d\mathcal {I}(x_2;y_2)}{d P_2}, &{}\mathcal {Z},\\ \frac{d\mathcal {I}(x_2;y_2|x_{1})}{d P_2},&{}\mathcal {Z}^{c}, \end{array}\right. } \end{aligned}$$

and

$$\begin{aligned} \frac{dr_m(\mathbf z )}{d P_j} = \frac{d \mathcal {I}(x_j;y_j)}{d P_j} - \frac{d \mathcal {I}(x_j;y_j|x_{m})}{d P_j}\quad \text {for }m,j\in \{1,2\}\text { and }m\ne j. \end{aligned}$$

In the following theorem, we provide the derivatives of the mutual information with respect to the power allocation policies:

Theorem 1

Let \(z_1\) and \(z_2\) be given. The first derivative of the mutual information between \(x_{j}\) and \(y_j\) with respect to the power allocation policy, \(P_j\), is given by

$$\begin{aligned}&\frac{d \mathcal {I}(x_j;y_j)}{d P_j}=z_j {MMSE}(x_j;y_j)+z_j \sqrt{\frac{P_m}{P_j}} \text {Re}\left( \mathbb {E}\{x_j x_m^{*}-\hat{x}_j(y_{j})\hat{x}^{*}_m(y_{j})\}\right) \end{aligned}$$
(12)

for \(j,m \in \{1,2\}\) and \(j \ne m\). Above, \((\cdot )^{*}\) is the complex conjugate operation and \(\text {Re}(\cdot )\) is the real part of a complex number. Meanwhile, the derivative of the mutual information between \(x_{j}\) and \(y_{j}\) with respect to \(P_j\) given \(x_{m}\) is

$$\begin{aligned} \frac{d\mathcal {I}(x_j;y_j|x_{m})}{d P_j}=z_j{MMSE}(x_j;y_j|x_{m}). \end{aligned}$$
(13)

MMSE and MMSE estimate are defined as

$$\begin{aligned} {MMSE}(u;v|s)=1-\frac{1}{\pi }\int \frac{\big |\sum _{u}u p(u) f_{v|u,s}(v|u,s)\big |^2}{f_{v|s}(v|s)} \mathrm {d}v \end{aligned}$$

and \(\hat{u}(v)=\frac{\sum _{u}up(u)f_{v|u}(v|u)}{f_{v}(v)}\), respectively.

Proof

Omitted due to page limitations.
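As a numerical check of (13), the sketch below computes both sides for unit-power BPSK after interference cancellation. The reduction to the real part of the observation, the conditional-mean estimator \(\tanh (2a\,{Re}(y))\), and the parameter values are assumptions of this illustration; note also that the I-MMSE relation holds for mutual information measured in nats, hence the \(\log _e 2\) conversion below:

```python
import numpy as np

# Quadrature grid for the real noise component n ~ N(0, 1/2).
t = np.linspace(-6.0, 6.0, 4001)
pdf = np.exp(-t ** 2) / np.sqrt(np.pi)   # density of N(0, 1/2)
dt = t[1] - t[0]

def mi_bpsk(z, P):
    """I(x_j; y_j | x_m) in bits for unit-power BPSK with |h_j|^2 = z."""
    a = np.sqrt(z * P)
    return 1.0 - np.sum(pdf * np.log2(1.0 + np.exp(-4.0 * a * (a + t)))) * dt

def mmse_bpsk(z, P):
    """MMSE(x_j; y_j | x_m): E{|x - x_hat|^2} with x_hat = tanh(2a Re(y))."""
    a = np.sqrt(z * P)
    return np.sum(pdf * (1.0 - np.tanh(2.0 * a * (a + t))) ** 2) * dt

z, P, dP = 0.8, 1.5, 1e-4
# The I-MMSE relation holds in nats, so convert the bit-based derivative.
lhs = (mi_bpsk(z, P + dP) - mi_bpsk(z, P - dP)) / (2 * dP) * np.log(2.0)
rhs = z * mmse_bpsk(z, P)
print(abs(lhs - rhs) < 1e-4)   # True: both sides of Eq. (13) agree
```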

As is clear from (8)–(11), a closed-form solution for \(P_{1}\) or \(P_{2}\) cannot be obtained easily, mainly due to the coupling between \(P_{1}\) and \(P_{2}\): \(P_1\) is a function of \(P_{2}\) in (8) for \(\mathbf z \in \mathcal {Z}\), whereas \(P_{2}\) is a function of \(P_{1}\) in (11) for \(\mathbf z \in \mathcal {Z}^c\). Therefore, we resort to iterative numerical techniques. In the following, we present an iterative algorithm that yields the optimal power allocation policies for given decoding regions:

Algorithm 3.1. Iterative computation of the optimal power allocation policies for given decoding regions.

Given \(\lambda _j\) and \(\psi _j\) for \(j \in \{1,2\}\), it is shown in [18] that both (9) and (10) have at most one solution. We can further show that (8) has at most one solution for \(P_{1}\) when \(P_{2}\) is given, and that (11) has at most one solution for \(P_{2}\) when \(P_{1}\) is given. Consequently, Steps 8, 9, 11 and 12 in Algorithm 3.1 are guaranteed to converge to a unique solution. In addition, the left-hand sides of (8) and (10) are monotonically decreasing in \(P_1\), and those of (9) and (11) are monotonically decreasing in \(P_2\). Hence, in region \(\mathcal {Z}\), we first obtain \(P_{2}\) by solving (9) for given \(P_{1}\), and then find \(P_{1}\) by inserting \(P_{2}\) into (8) and solving for \(P_{1}\). Similarly, in region \(\mathcal {Z}^c\), we first obtain \(P_{2}\) by solving (11) for given \(P_{1}\), and then find \(P_{1}\) by inserting \(P_{2}\) into (10) and solving for \(P_{1}\). Bisection search can be employed to obtain \(P_{1}\) and \(P_{2}\). Whenever \(P_{1}\) or \(P_{2}\) becomes negative, we set it to zero.
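A minimal sketch of the inner loop of Algorithm 3.1 in region \(\mathcal {Z}\), under simplifying assumptions: Gaussian-input rates stand in for the arbitrary-input mutual informations, and \(\psi _1\), \(\psi _2\) and the Lagrange multiplier \(\varepsilon \) are held fixed, whereas the full algorithm updates them in outer loops. The alternation solves (9) for \(P_2\) and (8) for \(P_1\) by bisection, exploiting the monotonicity of the left-hand sides:

```python
import numpy as np

TB = 100.0              # T * B
lam = (0.5, 0.5)        # weights (lambda_1, lambda_2)
theta = (0.01, 0.01)    # QoS exponents (theta_1, theta_2)
psi = (1.0, 1.0)        # held fixed here; updated in an outer loop in full alg.
eps = 0.05              # Lagrange multiplier of the power constraint (2)

def rates(z1, z2, P1, P2):
    """Gaussian-input rates in region Z (decoding order (2,1))."""
    r1 = np.log2(1.0 + z1 * P1)                                   # after SIC
    r2 = np.log2(1.0 + z2 * (P1 + P2)) - np.log2(1.0 + z2 * P1)   # interference
    return r1, r2

def lhs9(z1, z2, P1, P2):
    """Left-hand side of Eq. (9)."""
    _, r2 = rates(z1, z2, P1, P2)
    dr2 = z2 / (1.0 + z2 * (P1 + P2)) / np.log(2.0)
    return lam[1] / psi[1] * np.exp(-theta[1] * TB * r2) * dr2

def lhs8(z1, z2, P1, P2):
    """Left-hand side of Eq. (8)."""
    r1, r2 = rates(z1, z2, P1, P2)
    dr1 = z1 / (1.0 + z1 * P1) / np.log(2.0)
    dr2 = (z2 / (1.0 + z2 * (P1 + P2)) - z2 / (1.0 + z2 * P1)) / np.log(2.0)
    return (lam[0] / psi[0] * np.exp(-theta[0] * TB * r1) * dr1
            + lam[1] / psi[1] * np.exp(-theta[1] * TB * r2) * dr2)

def solve(f, hi=1e3, tol=1e-9):
    """Bisection for f(P) = eps with f decreasing; clips to 0 if infeasible."""
    if f(0.0) <= eps:
        return 0.0
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > eps else (lo, mid)
    return 0.5 * (lo + hi)

z1, z2 = 1.2, 0.7       # example fading state in region Z
P1 = 0.0
for _ in range(50):     # alternate Eq. (9) and Eq. (8) until convergence
    P2 = solve(lambda p: lhs9(z1, z2, P1, p))
    P1 = solve(lambda p: lhs8(z1, z2, p, P2))
print(P1 > 0.0, P2 > 0.0)   # both receivers are allocated power here
```

For this fading state both powers converge to strictly positive values; states for which the bisection clips a power at zero correspond to switching one receiver off.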

3.2 Optimal Decoding Order

Having obtained the optimal power allocation policies, we now investigate the optimal decoding regions. We first note that with no QoS constraints, i.e., \(\theta _1 = \theta _2 = 0\), the effective capacity region reduces to the ergodic capacity region. In this case, the symbol of the receiver with the strongest channel is always decoded last [13]; specifically, when \(z_j \ge z_m\), the symbol of the \(j^\text {th}\) receiver is decoded last. This result is based on the assumption of Gaussian input signaling. To the best of our knowledge, no such result has been obtained for broadcast channels when QoS constraints are imposed, i.e., \(\theta _1 >0\) and \(\theta _2 > 0\), and/or when arbitrary input signaling is employed. In the following, we consider the special case \(\theta _1 = \theta _2 = \theta \) for some \(\theta > 0\) and provide the optimal decoding order regions when arbitrary input distributions are employed by the transmitter:

Theorem 2

Let \(z_2\) and \(\overline{P}\) be given. Define \(z_1^\star \) for any given \(z_2\ge 0\) such that the decoding order is (2,1) when \(z_1>z_1^{\star }\) and (1,2) otherwise. With arbitrary input distributions and power allocation policies at the transmitter, the optimal \(z_1^\star \) for any given \(z_2\) value is the solution of the following equality:

$$\begin{aligned} \mathcal {I}(x_{1},x_{2};y_{1},y_{2}|z_1^\star ,z_2) = \mathcal {I}(x_1;y_1|x_{2},z_1^\star ) + \mathcal {I}(x_2;y_2|x_{1},z_{2}). \end{aligned}$$
(14)

Proof

Omitted due to page limitations.
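To illustrate Theorem 2, the sketch below solves (14) by bisection, assuming Gaussian inputs and a fixed, unoptimized power split. For Gaussian inputs both receivers observe the same superposed codeword, so the joint mutual information takes the maximum-ratio-combining form used below, and the threshold admits the closed form \(\frac{1}{z_2 P_1}+\frac{1}{z_1 P_2}=1\), against which the numerical root can be checked:

```python
import numpy as np

P1, P2 = 1.0, 1.0    # fixed example power split (not optimized here)

def gap(z1, z2):
    """Difference of the two sides of Eq. (14) for Gaussian inputs.
    Both receivers observe the same superposed codeword, hence
    I(x1,x2; y1,y2) = log2(1 + (z1 + z2)(P1 + P2))."""
    joint = np.log2(1.0 + (z1 + z2) * (P1 + P2))
    sic = np.log2(1.0 + z1 * P1) + np.log2(1.0 + z2 * P2)
    return joint - sic

def z1_star(z2, hi=1e6, tol=1e-10):
    """Bisection for the threshold z1* where Eq. (14) holds with equality;
    gap(., z2) is positive at z1 = 0 and decreasing in z1 for this example."""
    lo = 0.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if gap(mid, z2) > 0.0 else (lo, mid)
    return 0.5 * (lo + hi)

# Closed form for Gaussian inputs: 1/(z2 P1) + 1/(z1 P2) = 1, so z2 = 2
# gives z1* = 2 when P1 = P2 = 1.
print(round(z1_star(2.0), 6))   # 2.0
```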

Fig. 2. Effective capacity region boundary when BPSK input signaling is employed for different values of \(\overline{P}\) and K. The areas under the curves provide the effective capacity regions.

4 Numerical Results

In this section, we present numerical results. Throughout this section, we set the available channel bandwidth to \(B = 100\) Hz and the transmission frame duration to \(T = 1\) second. We further assume that \(h_1\) and \(h_2\) are independent of each other and set \(\mathbb {E}\{|h_1|^2\} = \mathbb {E}\{|h_2|^2\} = 1\). In addition, we assume that the signals transmitted to the two receivers are independent of each other, i.e., \(\mathbb {E}\{x_j x_m^{*}\} = 0\). Unless indicated otherwise, we set the QoS exponents to \(\theta _1 = \theta _2 = 0.01\). We finally assume that both receivers have the same noise statistics, i.e., \(\mathbb {E}\{|w|^2\}= \mathbb {E}\{|w_1|^2\} = \mathbb {E}\{|w_2|^2\} = 1\), and we define the received signal-to-noise ratio at each receiver as \(\frac{\overline{P}}{\mathbb {E}\{|w|^2\}}=\overline{P}\).

Fig. 3. Effective capacity region boundary with different modulation techniques and signal-to-noise ratio values when \(K = -6.88\) dB.

Fig. 4. Effective capacity region boundary with different modulation techniques and decay rate parameters, \(\theta =\theta _{1}=\theta _{2}\), when \(K = -6.88\) dB and \(\overline{P} = 5\) dB.

In Fig. 2, we initially consider binary phase shift keying (BPSK) employed at the transmitter for both receivers and investigate the effect of channel statistics on the effective capacity region in Rician fading channels with a line-of-sight parameter K, which is the ratio of the power in the line-of-sight component to the total power in the non-line-of-sight components. The empirical values of K are determined to be −6.88 dB, 8.61 dB and 4.97 dB for urban, rural and suburban environments at 781 MHz, respectively [12]. Considering these K values, we obtain results for different signal-to-noise ratio values, i.e., \(\overline{P}=0\) and \(-5\) dB. We can clearly see that the effective capacity region broadens as K increases because the line-of-sight component becomes more dominant with increasing K. We also observe that the effect of K is more apparent when the signal-to-noise ratio is greater.

Subsequently, setting \(K=-6.88\) dB, we investigate the effect of different signal modulation techniques for different signal-to-noise ratio values in Fig. 3. We consider BPSK, quadrature amplitude modulation (QAM) and Gaussian input signaling. Gaussian input signaling is clearly superior to the other schemes, while BPSK has the lowest performance; however, the performance gap shrinks with decreasing \(\overline{P}\). Lastly, we explore the effect of the QoS exponent \(\theta \) on the effective capacity performance in Fig. 4. We set \(\overline{P} = 5\) dB and \(K = -6.88\) dB and plot results for different modulation techniques. Increasing \(\theta \) results in a smaller effective capacity region, as the system is subject to stricter QoS constraints.

5 Conclusion

In this paper, we have examined the optimal power allocation policies that maximize the effective capacity region of a two-user broadcast transmission scenario with arbitrarily distributed input signals. We have invoked the relation between the MMSE and the first derivative of the mutual information with respect to the transmission power. We have proposed an iterative algorithm that converges to the optimal power allocation policies for given decoding regions under an average power constraint. Having obtained the power allocation policies, we have further characterized the decoding regions for successive interference cancellation at the receivers. We have substantiated our analytical results through numerical solutions. In general, Gaussian input signaling is clearly superior to the other modulation techniques, but the gap decreases with decreasing signal-to-noise ratio. Therefore, it is reasonable to employ simple modulation techniques in low signal-to-noise ratio regimes.