Review

Overview of Tensor-Based Cooperative MIMO Communication Systems—Part 2: Semi-Blind Receivers

by Gérard Favier 1,*,† and Danilo Sousa Rocha 2,†
1 I3S Laboratory, Côte d’Azur University, 06903 Sophia Antipolis, France
2 Federal Institute of Education, Science, and Technology of Ceará, Fortaleza 60040-531, Brazil
* Author to whom correspondence should be addressed.
† These authors contributed equally to this work.
Entropy 2024, 26(11), 937; https://doi.org/10.3390/e26110937
Submission received: 20 August 2024 / Revised: 10 October 2024 / Accepted: 24 October 2024 / Published: 31 October 2024
(This article belongs to the Special Issue Wireless Communications: Signal Processing Perspectives)
Figure 1. Organization of the paper.
Figure 2. Nested tensor decompositions based on TD and CPD.
Figure 3. TTD of a $P$th-order tensor, $\mathcal{X} \in \mathbb{K}^{\underline{I}_P}$.
Figure 4. Graph of the TTD-4 model for a fourth-order tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_4}$.
Figure 5. Graph of the GTTD-(2,4,4,2) model for a sixth-order tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_6}$.
Figure 6. NCPD-4 model as (a) a nesting of two CPD-3 models and (b) a cascade of two CPD-3 models.
Figure 7. NTD-4 model as (a) a particular TTD and (b) a cascade of two TD-(2,3) models.
Figure 8. Graph of the NTD-4 model for a fourth-order tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_4}$.
Figure 9. Graph of the NTD-6 model for a sixth-order tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_6}$.
Figure 10. Graph of the NGTD-7 model for a seventh-order tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_7}$.
Figure 11. Graph of the NGTD-5 model for a fifth-order tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_5}$.
Figure 12. Two families of TD- and CPD-based decompositions.
Figure 13. Classification of relay systems according to the coding scheme and tensor model.
Figure 14. One-way, two-hop cooperative system.
Figure 15. Tucker train model of a two-hop relay system using TSTF codings.
Figure 16. Tucker train model of a two-hop relay system using TST codings.
Figure 17. NCPD-5 model for the DKRSTF system as a cascade of three CPD-3 models.
Figure 18. NCPD-4 model for the SKRST system.
Figure 19. Plan of simulations for performance comparison.
Figure 20. SER comparison with different receivers for STST and SKRST.
Figure 21. Comparison of (a) computation time for ZF, KronF/KRF, and ALS receivers and (b) number of iterations for convergence of ALS receivers for STST and SKRST.
Figure 22. NMSE of estimated channels with the KronF/KRF and ALS receivers for STST and SKRST: (a) $\hat{\mathbf{H}}^{(SR)}$ and (b) $\hat{\mathbf{H}}^{(RD)}$.
Figure 23. Impact of time-spreading lengths with ZF receivers of STST and SKRST.
Figure 24. Impact of numbers of antennas with ZF receivers of (a) SKRST and (b) STST.
Figure 25. SER comparison for the DKRSTF, STSTF, and TSTF systems with $M_R \ge M_T$.
Figure 26. Impact of the number $Q$ of symbol matrices in combined codings with ZF receivers.
Figure 27. Impact of AF/DF protocols on SER performance of STST and SKRST.
Figure 28. Impact of AF/DF protocols on NMSE of estimated channels for STST and SKRST: (a) $\hat{\mathbf{H}}^{(SR)}$ and (b) $\hat{\mathbf{H}}^{(RD)}$.
Figure 29. SER comparison for all considered relay systems.
Figure 30. NMSE of estimated channels for all considered relay systems: (a) $\hat{\mathbf{H}}^{(SR)}$ and (b) $\hat{\mathbf{H}}^{(RD)}$.
Figure 31. Comparison of considered relay systems in terms of (a) NMSE of reconstructed received signals and (b) computation time.

Abstract

Cooperative MIMO communication systems play an important role in the development of future sixth-generation (6G) wireless systems, which incorporate new technologies such as massive MIMO relay systems, dual-polarized antenna arrays, millimeter-wave communications, and, more recently, communications assisted by intelligent reflecting surfaces (IRSs) and unmanned aerial vehicles (UAVs). In a companion paper, we provided an overview of cooperative communication systems from a tensor modeling perspective. The objective of the present paper is to provide a comprehensive tutorial on semi-blind receivers for MIMO one-way two-hop relay systems, allowing the joint estimation of transmitted symbols and individual communication channels with only a few pilot symbols. After a reminder of some tensor prerequisites, we present an overview of tensor models, with a detailed, unified, and original description of two classes of tensor decompositions frequently used in the design of relay systems, namely nested CPD/PARAFAC and nested Tucker decomposition (TD). Some new variants of nested models are introduced. Uniqueness and identifiability conditions, which depend on the algorithm used to estimate the parameters of these models, are established. Two families of algorithms are presented: iterative algorithms based on alternating least squares (ALS) and closed-form solutions using Khatri–Rao and Kronecker factorization methods, which consist of SVD-based rank-one matrix or tensor approximations. In the second part of the paper, the overview of cooperative communication systems is completed before presenting several two-hop relay systems using different codings and configurations in terms of relaying protocol (AF/DF) and channel modeling. The aim of this presentation is first to show how these choices lead to different nested tensor models for the signals received at the destination.
Then, by capitalizing on these models and their correspondence with the generic models studied in the first part, we derive semi-blind receivers to jointly estimate the transmitted symbols and the individual communication channels for each relay system considered. In a third part, extensive Monte Carlo simulation results are presented to compare the performance of relay systems and associated semi-blind receivers in terms of the symbol error rate (SER) and channel estimate normalized mean-square error (NMSE). Their computation time is also compared. Finally, some perspectives are drawn for future research work.

1. Introduction

During the last decade, new technologies have emerged for future sixth-generation (6G) wireless communications and networks. Among these promising technologies, we can mention massive multiple-input multiple-output (MIMO) antenna arrays, three-dimensional (3D) polarized antennas, high-frequency communications in the millimeter-wave (mmWave) and terahertz (THz) bands, large intelligent surfaces (LISs), and holographic beamforming (HBF) antennas.
To improve signal coverage, mobile connectivity, latency, reliability, and energy consumption, as well as the quality of service (QoS) of future wireless networks, cooperative communication systems are the subject of tremendous research interest, with the aim of designing network architectures integrating ground, space, air, and underwater networks. These cooperative systems can be classified into three basic categories: relay-aided, IRS-aided (intelligent reflecting surface, also known as reconfigurable intelligent surface (RIS)), and UAV-aided (unmanned aerial vehicle, i.e., drone) systems. For a comparison of relay- and IRS-assisted wireless systems, the reader can refer to [1,2,3,4]. These types of assistance can be combined as, for instance, in [5,6], where IRS passive reflection is combined with active DF (decode-and-forward) relaying to improve the coverage and rate performance of conventional IRS-assisted systems.
Wireless propagation channel impairments such as multipath fading, delay, and Doppler spreads, which strongly depend on the environment (urban, rural, and indoor), degrade the quality of reception to recover the transmitted information symbols. Enhanced performance of wireless communication systems can be achieved by combining diverse techniques in multiple domains (space, time, frequency, chip, polarization, etc.) in order to exploit, at the receiver, several versions of transmitted symbols. Such symbol repetition can be achieved via the use of multi-antennas at the transmitter and receiver (space diversity), a repetition of the transmission of the same symbols over several time slots (time diversity), and the use of different subcarriers (frequency diversity). Diversity can also be introduced by means of specific codings, such as space-time (ST), space-frequency (SF), and space-time-frequency (STF) codings.
High-order tensors, also known as multiway arrays, are well suited to taking into account multiple types of diversity via the coding of information symbols to be transmitted, representing multidimensional received signals, designing semi-blind receivers, which is to say without a priori knowledge of the channels, and processing blocks of received signals in order to jointly estimate communication channels and transmitted symbols. Note that semi-blind means that only a few pilot symbols are needed to eliminate the scaling ambiguities inherent to the tensor model of the communication system. Besides the obvious potential to represent, compress, analyze, merge, and classify multidimensional, multimodal, and often incomplete data, tensor decompositions have the property of essential uniqueness (up to trivial indeterminacies in terms of permutation and scaling factor ambiguities in the columns of matrix factors) under milder conditions than matrix decompositions, which offers greater flexibility for the choice of design parameters.
Since the pioneering works of [7,8,9,10,11,12] for psychometrics, phonetics, and chemometrics applications, blind source separation (BSS), and wireless communication systems, tensors have been extensively used in various fields of applications, like computer vision [13], ECG, and EEG applications [14,15,16], hyperspectral image classification and anomaly detection [17,18], traffic data completion [19,20,21], recommendation systems [22,23], nonlinear system modeling and identification [24], data mining and data fusion [25,26,27,28], and tensor networks to represent and classify very large arrays of data with applications in machine learning [29,30], among many other applications. For a more complete description of tensor-based signal processing applications, the reader is referred to [31,32,33].
In the context of wireless systems, many tensor-based approaches have been proposed to design both point-to-point systems and cooperative systems, with associated semi-blind receivers. Most tensor-based systems have been designed using the popular tensor decomposition known as PARAFAC (parallel factors analysis) [8] or CPD (canonical polyadic decomposition) [34]. However, over the last two decades, various new tensor models have emerged when designing communication systems, such as the CONFAC (constrained PARAFAC) [35,36], PARATUCK-$(N_1,N)$ [37], generalized PARATUCK [38], NCPD (nested CPD) [39,40], nested Tucker decomposition (NTD) [41], coupled NTD [42], and doubly coupled NCPD [43] models. These models mainly depend on the choice of coding, whether or not resource allocation is taken into account, assumptions about communication channels, and the presence or absence of relays, as will be shown in Section 5.
It is fundamental to note that, unlike most tensor-based applications, in the context of wireless communication systems, through construction, all the matrix factors of tensor models are physically interpretable in terms of symbol, coding, or channel matrices, with physical parameters such as fading coefficients, angles of departure and arrival (AoD/AoA), or time delays. In addition, certain matrix or tensor factors of these models are possibly structured, such as, for example, the Vandermonde structure of steering matrices and the orthonormal (Fourier) structure of coding matrices or matrix unfoldings of tensor codings.
The objective of this paper is fourfold:
  • To provide a self-contained overview of tensor models used to design wireless communication systems. After a reminder of tensor prerequisites, standard tensor decompositions are first recalled, more particularly the CPD/PARAFAC and Tucker decompositions, as well as some variants. Then, two important classes of tensor decompositions, namely the NCPD and NTD, are presented in a unified and original way, using a new representation by means of graphs and highlighting their link with the tensor train decomposition (TTD) [44]. This link is exploited to demonstrate the uniqueness property of NTD models. Some of the models to be used when designing new relay systems are introduced for the first time.
  • To present two families of algorithms that estimate the parameters of NCPD and NTD models: iterative algorithms based on alternating least squares (ALS) and closed-form solutions using Khatri–Rao and Kronecker factorization methods, denoted as KRF and KronF, which consist of SVD-based rank-one matrix or tensor approximations. These closed-form algorithms result from the fact that unfoldings of CPD and TD are expressed in terms of Khatri–Rao and Kronecker products of matrix factors, respectively.
  • To provide an overview of tensor-based cooperative MIMO communication systems from a semi-blind receiver perspective, with a focus on two-hop relay systems, as a complement to our companion paper [45]. The goal of this presentation is first to show how the choices of coding, relay protocol (AF (amplify-and-forward) or DF), and assumptions made about communication channels (flat fading or frequency-selective fading channels) impact the modeling of relay systems. Then, assuming knowledge of coding tensors and matrices at the destination and exploiting the multilinear structure of the system model, we devise semi-blind receivers to jointly estimate the transmitted symbols and the individual channels for each considered relay system. Uniqueness conditions for the tensor model of each system and necessary conditions for parameter identifiability using the associated receivers are established.
  • To compare the performance of main two-hop relay systems and associated semi-blind receivers in terms of the symbol error rate (SER), channel estimate normalized mean-square error (NMSE), and computation time, by means of extensive Monte Carlo simulations, with the aim of showing how the best performance–complexity–identifiability condition tradeoff can be achieved.
The rest of the paper is organized into seven sections. In Section 2, some tensor prerequisites are recalled, including an introduction to the index convention and definitions of basic tensor operations, in order to make the content as self-contained as possible. Section 3 provides an overview of the main tensor models, with a focus on nested ones, namely the NCPD and NTD, highlighting their link with the TTD and proposing a new representation by means of graphs. New variants of nested models to be used for designing two-hop relay systems are introduced. Uniqueness conditions are established for the nested models considered. In Section 4, several parameter estimation algorithms are described for these models, with particular attention given to closed-form algorithms. The identifiability conditions are established for both ALS and closed-form algorithms. Section 5 is devoted to a comprehensive survey of one-way two-hop relay systems.
After an overview of cooperative systems from a semi-blind receiver point of view, relay systems are classified according to the choice of coding scheme and the resulting tensor model for the signals received at destination. Three classes of coding are considered:
  • Tensor-based codings, including TSTF, STSTF, TST, and STST codings (TSTF = tensor space-time frequency; STSTF = simplified TSTF; TST = tensor space-time; STST = simplified TST);
  • Matrix-based codings, including DKRSTF and SKRST codings (DKRSTF = doubly Khatri–Rao space-time frequency; SKRST = simplified Khatri–Rao space-time);
  • STST and SKRST codings combined with MSMKron and MSMKR codings (MSMKron = multiple symbol matrices’ Kronecker product; MSMKR = multiple symbol matrices’ Khatri–Rao product), respectively.
In Section 6, by capitalizing on the tensor models of the systems, we design semi-blind receivers to jointly estimate transmitted symbols and individual communication channels. In Section 7, extensive Monte Carlo simulation results are presented to compare the performance of the relay systems considered in this overview and their associated semi-blind receivers in terms of SER, channel-estimate NMSE, and computation time. Finally, Section 8 presents some perspectives for future research work.
The paper is divided into two main parts, entitled “Tensor operations, models, and algorithms” and “Overview of two-hop relay systems and semi-blind receivers”, composed of three sections each, as detailed in the flow chart of Figure 1.
Notation and acronyms: Table 1 summarizes the notations used throughout the paper, and a comprehensive glossary of acronyms is provided in Appendix A.

2. Tensor Prerequisites

An $N$th-order tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_N}$, of size $\underline{I}_N \triangleq I_1 \times \cdots \times I_N$, will be denoted as $[x_{\underline{i}_N}] = [x_{i_1,\ldots,i_N}]$, where $\underline{i}_N \triangleq \{i_1,\ldots,i_N\}$, with $\mathbb{K} = \mathbb{R}$ or $\mathbb{C}$, depending on whether the tensor is real- or complex-valued. Each index $i_n \in \langle I_n \rangle \triangleq \{1,\ldots,I_n\}$, for $n \in \langle N \rangle \triangleq \{1,\ldots,N\}$, is associated with the $n$th mode, whose dimension is $I_n$. In the context of wireless communications, each mode of the tensor of encoded or received signals is associated with a particular diversity (space, frequency, chip, time slot, or symbol period).
The identity tensor of order $N$ and dimensions $R$, denoted as $\mathcal{I}_{N,R} = [\delta_{r_1,\ldots,r_N}]$, with $r_n \in \langle R \rangle$ for $n \in \langle N \rangle$, is a diagonal tensor whose diagonal elements are equal to 1 and other elements to 0. The generalized Kronecker delta is defined as follows:
$$\delta_{r_1,\ldots,r_N} = \begin{cases} 1 & \text{if } r_1 = \cdots = r_N \\ 0 & \text{otherwise.} \end{cases}$$
Table 2 summarizes the notation used for sets of indices and dimensions [33].
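As an illustrative numerical sketch (not part of the paper; the function name `identity_tensor` is chosen here for convenience), the identity tensor $\mathcal{I}_{N,R}$ can be built directly from the generalized Kronecker delta:

```python
import numpy as np

def identity_tensor(N, R):
    """Diagonal tensor I_{N,R}: entries equal 1 where r_1 = ... = r_N, else 0."""
    I = np.zeros((R,) * N)
    for r in range(R):
        I[(r,) * N] = 1.0      # generalized Kronecker delta on the diagonal
    return I

I3 = identity_tensor(3, 4)     # third-order identity tensor of dimensions 4
assert I3[2, 2, 2] == 1.0      # diagonal entry
assert I3[0, 1, 0] == 0.0      # off-diagonal entry
```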

2.1. Index Convention

We now introduce the index convention, which allows for eliminating the summation symbols in formulae involving multi-index variables. For example, $\sum_{i=1}^{I} a_i b_i$ is simply written as $a_i b_i$. Note that there are two differences relative to Einstein’s summation convention:
  • Each index can be repeated more than twice in an expression;
  • Ordered index sets are allowed.
For example, the index convention allows multiple sums to be abbreviated as follows:
$$\sum_{i_1=1}^{I_1} \cdots \sum_{i_P=1}^{I_P} x_{i_1,\ldots,i_P}\, y_{i_1,\ldots,i_P} = \sum_{\underline{i}_P=\underline{1}}^{\underline{I}_P} x_{\underline{i}_P}\, y_{\underline{i}_P} = x_{\underline{i}_P}\, y_{\underline{i}_P}$$
$$\sum_{i_1=1}^{I_1} \cdots \sum_{i_P=1}^{I_P} x_{i_1,\ldots,i_P} \prod_{p=1}^{P} a_{i_p}^{(p)} = \sum_{\underline{i}_P=\underline{1}}^{\underline{I}_P} x_{\underline{i}_P} \prod_{p=1}^{P} a_{i_p}^{(p)} = \prod_{p=1}^{P} a_{i_p}^{(p)}\, x_{\underline{i}_P},$$
where $\underline{1}$ denotes a set of ones whose number is fixed by the index $P$ of the set $\underline{I}_P$. The notation $\underline{i}_P$ and $\underline{I}_P$ allows us to simplify the expression of the multiple sums into a single sum over an index set, which is further simplified using the index convention.
The index convention can be interpreted in terms of two types of summation, one associated with row indices (subscripts), and one associated with column indices (superscripts), with the following rules [33,46]:
  • The order of the column indices is independent of the order of the row indices;
  • Consecutive row and column indices (or index sets) can be permuted.
In Table 3, we present examples of vector and matrix Kronecker products and a matrix product using the index convention, where $e_{ij} \triangleq e_i^{(I)} \otimes e_j^{(J)}$ and $e_i^j \triangleq e_i^{(I)} \big(e_j^{(J)}\big)^T$.
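Numerically, the sums abbreviated by the index convention are tensor contractions; a minimal NumPy sketch (illustrative only, with arbitrary dimensions) checks two of the abbreviated sums above against explicit summations:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((2, 3, 4))
Y = rng.standard_normal((2, 3, 4))

# Full contraction x_{i_P} y_{i_P}: all indices are repeated, hence summed.
s = np.einsum('ijk,ijk->', X, Y)
assert np.isclose(s, np.sum(X * Y))

# a^{(1)}_{i1} a^{(2)}_{i2} a^{(3)}_{i3} x_{i1,i2,i3}: a triple sum written
# without summation symbols under the index convention.
a1, a2, a3 = rng.standard_normal(2), rng.standard_normal(3), rng.standard_normal(4)
t = np.einsum('i,j,k,ijk->', a1, a2, a3, X)
assert np.isclose(t, sum(a1[i] * a2[j] * a3[k] * X[i, j, k]
                         for i in range(2) for j in range(3) for k in range(4)))
```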

2.2. Notion of Slice

If we fix $N-1$ indices of an $N$th-order tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_N}$, we obtain a vector slice, called a fiber, and for $N-2$ fixed indices, we have a matrix slice. In Table 4, we define three types of vector and matrix slices for a third-order tensor $\mathcal{X} \in \mathbb{K}^{I \times J \times K}$.
Similarly, we can define vector, matrix, and third-order tensor slices for a fourth-order tensor $\mathcal{X} \in \mathbb{K}^{I \times J \times K \times L}$, such as $x_{\cdot,j,k,l} \in \mathbb{K}^{I}$, $\mathbf{X}_{\cdot,\cdot,k,l} \in \mathbb{K}^{I \times J}$, and $\mathcal{X}_{\cdot,\cdot,\cdot,l} \in \mathbb{K}^{I \times J \times K}$.
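In array terms, fibers and slices are obtained by fixing indices along the corresponding modes; a short illustrative sketch (dimensions chosen arbitrarily):

```python
import numpy as np

I, J, K = 2, 3, 4
X = np.arange(I * J * K).reshape(I, J, K)   # third-order tensor of size I x J x K

fiber = X[:, 1, 2]       # mode-1 fiber x_{.,j,k}: a vector of length I
slice_k = X[:, :, 3]     # matrix slice X_{.,.,k}: of size I x J

assert fiber.shape == (I,)
assert slice_k.shape == (I, J)
```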

2.3. Matrix Unfoldings of a Tensor

A general matrix unfolding formula for an $N$th-order tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_N}$ is given by [38]:
$$\mathbf{X}_{\mathbb{S}_1;\mathbb{S}_2} = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} x_{i_1,\ldots,i_N} \Big( \bigotimes_{n \in \mathbb{S}_1} e_{i_n}^{(I_n)} \Big) \Big( \bigotimes_{n \in \mathbb{S}_2} e_{i_n}^{(I_n)} \Big)^T \in \mathbb{K}^{J_1 \times J_2},$$
where $\mathbb{S}_1$ and $\mathbb{S}_2$ are two disjoint ordered subsets of the set of modes $\mathbb{S} = \langle N \rangle$, composed of $p$ and $N-p$ modes, respectively, with $p \in \langle N-1 \rangle$, and $J_{n_1} = \prod_{n \in \mathbb{S}_{n_1}} I_n$ for $n_1 = 1$ and $2$. The subsets $\mathbb{S}_1$ and $\mathbb{S}_2$ contain the modes associated with the rows and the columns of the matrix unfolding $\mathbf{X}_{\mathbb{S}_1;\mathbb{S}_2}$ of $\mathcal{X}$, respectively.
Such a matrix unfolding results from two mode combinations associated with the sets $\mathbb{S}_1$ and $\mathbb{S}_2$, using the convention that the order of the dimensions in a product $\prod_{p=1}^{P} I_p \triangleq I_1 \cdots I_P$, corresponding to a combination of $P$ modes, follows the order of variation of the indices, with $i_1$ varying more slowly than $i_2$, which, in turn, varies more slowly than $i_3$, etc.
For example, in the flat mode-1 unfolding $\mathbf{X}_{I \times KJ}$ of the third-order tensor $\mathcal{X} \in \mathbb{K}^{I \times J \times K}$, index $k$ varies more slowly than $j$, which implies $x_{ijk} = [\mathbf{X}_{I \times KJ}]_{i,(k-1)J+j}$. Similarly, we have $x_{ijk} = [\mathbf{X}_{J \times IK}]_{j,(i-1)K+k} = [\mathbf{X}_{K \times JI}]_{k,(j-1)I+i}$.
Transposing $\mathbf{X}_{I \times KJ}$ gives the tall mode-1 unfolding $\mathbf{X}_{KJ \times I} = [\mathbf{X}_{I \times KJ}]^T$. For a third-order tensor, in addition to $\mathbf{X}_{KJ \times I}$, there are five other tall matrix unfoldings, denoted as $\mathbf{X}_{JK \times I}$, $\mathbf{X}_{KI \times J}$, $\mathbf{X}_{IK \times J}$, $\mathbf{X}_{IJ \times K}$, and $\mathbf{X}_{JI \times K}$.
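The flat mode-1 unfolding and the index formula $x_{ijk} = [\mathbf{X}_{I \times KJ}]_{i,(k-1)J+j}$ can be checked numerically; a minimal sketch (illustrative, using 0-based NumPy indexing, so the column index becomes $kJ+j$):

```python
import numpy as np

I, J, K = 2, 3, 4
X = np.random.default_rng(1).standard_normal((I, J, K))

# Flat mode-1 unfolding X_{I x KJ}: along the columns, k varies more slowly
# than j, so we move mode k before mode j and then reshape.
X_I_KJ = X.transpose(0, 2, 1).reshape(I, K * J)

# Check x_{ijk} = [X_{I x KJ}]_{i,(k-1)J+j} (0-based: column k*J + j).
i, j, k = 1, 2, 3
assert np.isclose(X[i, j, k], X_I_KJ[i, k * J + j])

# The tall mode-1 unfolding is the transpose.
X_KJ_I = X_I_KJ.T
assert X_KJ_I.shape == (K * J, I)
```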

2.4. Basic Tensor Operations

In Table 5, we recall the definitions of some basic operations: the outer product of vectors, the mode-$p$ (multiple-mode-$p$) product of a tensor with a matrix ($P$ matrices), and the modes-$(p,n)$ product of two tensors, respectively denoted as $\circ$, $\times_p$, $\times_{p=1}^{P}$, and $\times_p^n$.
Note that the outer product of $P$ non-zero vectors $u^{(p)} \in \mathbb{K}^{I_p}$, $p \in \langle P \rangle$, gives a rank-one $P$th-order tensor of size $\underline{I}_P$.
The mode-$p$ product of a tensor $\mathcal{X} \in \mathbb{K}^{\underline{I}_P}$ with matrices $\mathbf{A} \in \mathbb{K}^{J_p \times I_p}$, $\mathbf{B} \in \mathbb{K}^{K_p \times J_p}$, and $\mathbf{A}^{(p)} \in \mathbb{K}^{J_p \times I_p}$ for $p \in \langle P \rangle$ satisfies the following properties:
$$\mathcal{X} \times_p \mathbf{A} \times_p \mathbf{B} = \mathcal{X} \times_p (\mathbf{B}\mathbf{A})$$
$$\mathcal{X} \times_{p=1}^{P} \mathbf{A}^{(\pi(p))} = \mathcal{X} \times_{p=1}^{P} \mathbf{A}^{(p)}.$$
The last equality means that, for any permutation $\pi(\cdot)$ of the indices $p \in \langle P \rangle$, the order of the mode-$p$ products is irrelevant when the indices are all distinct.
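The first property above can be verified numerically; the sketch below (illustrative, with the helper `mode_p_product` introduced here for convenience) checks $\mathcal{X} \times_1 \mathbf{A} \times_1 \mathbf{B} = \mathcal{X} \times_1 (\mathbf{B}\mathbf{A})$:

```python
import numpy as np

def mode_p_product(X, A, p):
    """Mode-p product X x_p A: contracts mode p of X with the columns of A."""
    Xm = np.moveaxis(X, p, 0)                  # bring mode p to the front
    Y = np.tensordot(A, Xm, axes=([1], [0]))   # multiply along that mode
    return np.moveaxis(Y, 0, p)                # restore the mode ordering

rng = np.random.default_rng(2)
X = rng.standard_normal((3, 4, 5))
A = rng.standard_normal((6, 3))    # acts on mode 1 (of dimension I_1 = 3)
B = rng.standard_normal((2, 6))

lhs = mode_p_product(mode_p_product(X, A, 0), B, 0)  # (X x_1 A) x_1 B
rhs = mode_p_product(X, B @ A, 0)                    # X x_1 (BA)
assert np.allclose(lhs, rhs)
```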
The modes-$(p,n)$ product of the tensors $\mathcal{X} \in \mathbb{K}^{\underline{I}_P}$ and $\mathcal{Y} \in \mathbb{K}^{\underline{J}_N}$, with $I_p = J_n = K$, corresponds to a contraction along mode $p$ of $\mathcal{X}$ and mode $n$ of $\mathcal{Y}$.
The contracted product $\times_p^n$ is associative; i.e., for any tensors $\mathcal{A} \in \mathbb{K}^{\underline{I}_P}$, $\mathcal{B} \in \mathbb{K}^{\underline{J}_N}$, and $\mathcal{C} \in \mathbb{K}^{\underline{K}_Q}$ such that $I_p = J_n$ and $J_m = K_q$, with $m \neq n$, we have the following:
$$(\mathcal{A} \times_p^n \mathcal{B}) \times_m^q \mathcal{C} = \mathcal{A} \times_p^n (\mathcal{B} \times_m^q \mathcal{C}) = \mathcal{A} \times_p^n \mathcal{B} \times_m^q \mathcal{C}.$$
This doubly contracted product yields a tensor of order $P + N + Q - 4$.
When the indices $(m,n,p,q)$ represent numbers of modes in the tensors, property (6) is no longer valid because the result of the double products $\times_p^n$ and $\times_m^q$ depends on the order in which they are calculated.
For instance, for $\mathcal{A} \in \mathbb{K}^{I_1 \times J_1 \times R_1}$, $\mathcal{B} \in \mathbb{K}^{R_1 \times I_2 \times J_2 \times R_2}$, and $\mathcal{C} \in \mathbb{K}^{R_2 \times I_3 \times J_3}$, the double contracted product can be written in two different ways:
$$(\mathcal{A} \times_3^1 \mathcal{B}) \times_5^1 \mathcal{C} = \mathcal{A} \times_3^1 (\mathcal{B} \times_4^1 \mathcal{C}) = \mathcal{A} \times_3^1 \mathcal{B} \times_4^1 \mathcal{C} \in \mathbb{K}^{I_1 \times J_1 \times I_2 \times J_2 \times I_3 \times J_3}.$$
On the left-hand side of this double equality, the product $\times_3^1$ is performed first, followed by the product $\times_5^1$, whereas in the second term, the product $\times_4^1$ is calculated first, followed by the product $\times_3^1$. The last term in the equality corresponds to two-by-two contractions of adjacent blocks. The first form will be used to represent the TTD model by means of Equation (12), with contraction operations performed from left to right, while the third form will be used in Equation (13).
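The equality of the two computation orders for this example can be checked with `numpy.tensordot`; a minimal sketch (illustrative dimensions, 0-based axes):

```python
import numpy as np

rng = np.random.default_rng(3)
I1, J1, R1 = 2, 3, 4
I2, J2, R2 = 3, 2, 5
I3, J3 = 4, 2
A = rng.standard_normal((I1, J1, R1))
B = rng.standard_normal((R1, I2, J2, R2))
C = rng.standard_normal((R2, I3, J3))

# (A x_3^1 B) x_5^1 C: contract mode 3 of A with mode 1 of B, then mode 5
# of the result with mode 1 of C (tensordot axes are 0-based).
AB = np.tensordot(A, B, axes=([2], [0]))        # I1 x J1 x I2 x J2 x R2
left = np.tensordot(AB, C, axes=([4], [0]))     # I1 x J1 x I2 x J2 x I3 x J3

# A x_3^1 (B x_4^1 C): the same tensor, computed in the other order.
BC = np.tensordot(B, C, axes=([3], [0]))        # R1 x I2 x J2 x I3 x J3
right = np.tensordot(A, BC, axes=([2], [0]))

assert np.allclose(left, right)
```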

2.5. Inner Product and Frobenius Norm

In the case of two complex-valued tensors $\mathcal{A}, \mathcal{B} \in \mathbb{C}^{\underline{I}_N}$, of order $N$ and the same size, their Hermitian inner product is given by the following:
$$\langle \mathcal{A}, \mathcal{B} \rangle = \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} a_{i_1,\ldots,i_N}\, b_{i_1,\ldots,i_N}^{*} = \sum_{\underline{i}_N=\underline{1}}^{\underline{I}_N} a_{\underline{i}_N}\, b_{\underline{i}_N}^{*}.$$
We can also write it using the Hermitian inner product of vectorized forms of $\mathcal{A}$ and $\mathcal{B}$:
$$\langle \mathcal{A}, \mathcal{B} \rangle = \mathrm{vec}^{H}(\mathcal{B})\, \mathrm{vec}(\mathcal{A}),$$
where $\mathrm{vec}(\mathcal{A})$ and $\mathrm{vec}(\mathcal{B})$ are vectorizations associated with the same mode combination of $\mathcal{A}$ and $\mathcal{B}$.
The Frobenius norm of $\mathcal{A} \in \mathbb{K}^{\underline{I}_N}$ is the square root of the inner product of the tensor with itself; i.e.,
$$\|\mathcal{A}\|_F = \langle \mathcal{A}, \mathcal{A} \rangle^{1/2} = \Big( \sum_{i_1=1}^{I_1} \cdots \sum_{i_N=1}^{I_N} |a_{i_1,\ldots,i_N}|^2 \Big)^{1/2},$$
where $|\cdot|$ represents the absolute value or the modulus, depending on whether $\mathcal{A}$ is real ($\mathbb{K} = \mathbb{R}$) or complex ($\mathbb{K} = \mathbb{C}$).
Since the Frobenius norm is equal to the square root of the sum of the squared absolute values or moduli of all elements of the tensor, it is also given by the following:
$$\|\mathcal{A}\|_F = \|\mathrm{vec}(\mathcal{A})\|_2 = \|\mathbf{A}_{\mathbb{S}_1;\mathbb{S}_2}\|_F,$$
i.e., the Euclidean (if $\mathbb{K} = \mathbb{R}$) or Hermitian (if $\mathbb{K} = \mathbb{C}$) norm of one of its vectorized forms, or the Frobenius norm of one of its matrix unfoldings $\mathbf{A}_{\mathbb{S}_1;\mathbb{S}_2}$, defined in (3).
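These norm equalities are easy to verify numerically; a short illustrative sketch for a complex-valued third-order tensor:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 3, 4)) + 1j * rng.standard_normal((2, 3, 4))

fro = np.sqrt(np.sum(np.abs(A) ** 2))        # definition of the Frobenius norm
vec_norm = np.linalg.norm(A.reshape(-1))     # Hermitian norm of a vectorized form
unf_norm = np.linalg.norm(A.reshape(2, 12))  # Frobenius norm of a matrix unfolding

assert np.allclose(fro, vec_norm)
assert np.allclose(fro, unf_norm)
```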

3. Overview of Tensor Models

In this section, we provide an overview of the tensor models that will be used in Section 5 when designing two-hop relay systems. We first present the standard Tucker decomposition (TD); the canonical polyadic decomposition (CPD), also known as PARAFAC [8] or CANDECOMP [47]; two variants, TD-$(N_1,N)$ and GTD-$(N_1,N)$; and the tensor train decomposition (TTD), with a new generalization that we call the generalized TTD (GTTD). Then, two basic nested decompositions, the so-called NCPD-4 and NTD-4 models, are described in detail. Three new nested models, called NTD-6, NGTD-5, and NGTD-7, are introduced for the first time. Figure 2 shows how the considered nested models are constructed by nesting two basic models: CPD-3 for NCPD-4; TD-(2,3) and TD-(2,4) for NTD-4 and NTD-6, respectively; and GTD-(2,4) and GTD-(2,5) for NGTD-5 and NGTD-7. For the NTD and NGTD models, an interpretation in terms of TTD and GTTD is highlighted.

3.1. TD and CPD Models

In Table 6 and Table 7, we first present various representations of the TD and CPD of a tensor of orders N and three: scalar writing, writings with mode-n products and outer products, and matrix unfoldings.
Comparing Table 7 with Table 6, we can conclude that a CPD can be viewed as a particular TD with an identity core tensor I N , R , implying R n = R for n N in the case of an Nth-order tensor and P = Q = S = R for a third-order tensor. This results in the sum of R rank-one tensors. When R is minimal, it is called the tensor rank or the canonical rank.
The TD and CPD models will be abbreviated as G ; A ( 1 ) , , A ( N ) ; R 1 , , R N and A ( 1 ) , , A ( N ) ; R , respectively, for an Nth-order tensor X K I ̲ N and G ; A , B , C ; P , Q , S and A , B , C ; R , respectively, for a third-order tensor X K I × J × K .
When R n , n N and P , Q and S are minimal, the N-tuple ( R 1 , , R N ) and the triplet ( P , Q , S ) are called the multilinear and trilinear ranks, respectively. There is a stable (non-iterative) algorithm, called higher-order singular value decomposition (HOSVD), for estimating the multilinear rank and the matrix factors in an orthogonal format, i.e., an orthogonal Tucker decomposition [48]. This algorithm is very often used to calculate a rank-one approximation of a tensor. A truncated version, called THOSVD, is also very useful for calculating a low multilinear rank approximation.
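For illustration, the HOSVD/THOSVD computation can be sketched in a few lines of NumPy. This is an illustrative sketch rather than the algorithm of [48] verbatim; the helper names unfold, mode_n_product, thosvd, and tucker_reconstruct are ours, and the dimensions are arbitrary. The factors are taken as the leading left singular vectors of the mode-n unfoldings, and the core is obtained by projecting the tensor onto these bases.

```python
import numpy as np

def unfold(X, mode):
    # Mode-n unfolding: put mode n first, flatten the remaining modes.
    return np.moveaxis(X, mode, 0).reshape(X.shape[mode], -1)

def mode_n_product(X, M, mode):
    # Mode-n product X x_n M, for a matrix M of size J x I_n.
    return np.moveaxis(np.tensordot(M, np.moveaxis(X, mode, 0), axes=1), 0, mode)

def tucker_reconstruct(G, factors):
    # Apply all mode-n products of the core with the factor matrices.
    X = G
    for n, U in enumerate(factors):
        X = mode_n_product(X, U, n)
    return X

def thosvd(X, ranks):
    # Truncated HOSVD: factors from the leading left singular vectors of
    # each mode-n unfolding; core by projecting X onto these bases.
    factors = [np.linalg.svd(unfold(X, n), full_matrices=False)[0][:, :r]
               for n, r in enumerate(ranks)]
    G = X
    for n, U in enumerate(factors):
        G = mode_n_product(G, U.conj().T, n)
    return G, factors

# Tensor with exact multilinear rank (2, 3, 2): THOSVD recovers it exactly.
rng = np.random.default_rng(0)
G0 = rng.standard_normal((2, 3, 2))
U0 = [rng.standard_normal((5, 2)), rng.standard_normal((6, 3)),
      rng.standard_normal((4, 2))]
X = tucker_reconstruct(G0, U0)
G_hat, U_hat = thosvd(X, (2, 3, 2))
X_hat = tucker_reconstruct(G_hat, U_hat)
```

For a tensor whose multilinear rank does not exceed the truncation ranks, this reconstruction is exact; otherwise, THOSVD yields a quasi-optimal low multilinear rank approximation.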
A variant of the TD model, called the Tucker- ( N 1 , N ) decomposition and abbreviated as TD- ( N 1 , N ) [36], is presented in Table 8, with special cases corresponding to the Tucker-(2,3) and Tucker-(1,3) models, also denoted as Tucker-2 and Tucker-1 in the literature. Note that, in this decomposition, N N 1 factor matrices are identity matrices chosen such that A ( n ) = I I n , which implies R n = I n , for n = N 1 + 1 , , N , and hence, G K R 1 × × R N 1 × I N 1 + 1 × × I N .
In Table 9, we illustrate the generalized Tucker- ( N 1 , N ) decomposition, denoted as GTD- ( N 1 , N ) , corresponding to a Tucker- ( N 1 , N ) one when some of the factors are tensors instead of matrices, first introduced in [38] to model a wireless communication system using tensor space-time-frequency (TSTF) coding. The case of a fourth-order tensor X K I ̲ 4 is considered, with ( N 1 , N ) = ( 2 , 4 ) , a fourth-order core tensor G K R 1 × R 2 × I 3 × I 4 , a third-order tensor factor A K I 1 × R 1 × I 3 , and a matrix factor B K I 2 × R 2 .

3.2. TTD Models

The tensor train (TT) format was proposed in [44] as a tensor representation with reduced parametric complexity, in order to alleviate the curse of dimensionality affecting standard tensor decompositions, whose number of parameters grows exponentially with the tensor order. This format is very useful for representing high-order data tensors.
A TTD model of a Pth-order tensor X K I ̲ P consists of decomposing it into a train of two matrices, G ( 1 ) and G ( P ) , and ( P 2 ) third-order tensors G ( p ) , p { 2 , , P 1 } , with the following dimensions:
G ( 1 ) K I 1 × R 1 ; G ( P ) K R P 1 × I P ; G ( p ) K R p 1 × I p × R p ,
as illustrated in Figure 3.
The TTD can be written as a sequence, from left to right, of ( P 1 ) contractions between the factors G ( p ) , for p { 2 , , P 1 } , and G ( P ) with the tensor resulting from all contractions to the left of each factor, as follows:
X = ( G ( 1 ) × 2 1 G ( 2 ) ) × 3 1 G ( 3 ) × 4 1 × P 1 1 G ( P 1 ) × P 1 G ( P ) ,
with the modes- ( 1 , p ) products calculated from p = 2 to p = P .
Considering two-by-two contractions of adjacent blocks, the TTD can be written as follows:
X = G ( 1 ) × 2 1 G ( 2 ) × 3 1 G ( 3 ) × 3 1 G ( 4 ) × 3 1 × 3 1 G ( P 1 ) × 3 1 G ( P ) ,
where the products × 3 1 correspond to separate contractions of two adjacent tensors, G ( p ) and G ( p + 1 ) , along their common mode associated with the index r p for p { 2 , , P 1 } .
The factors G ( p ) , p P are called TT cores, and the integer numbers R p , p P 1 are the TT-ranks. For p = 1 and p = P , the TT-cores are two matrices, which implies R 0 = R P = 1 . The TT-rank R p is given by the rank of the matrix unfolding X I 1 I p × I p + 1 I P .
The TTD can also be concisely written in a scalar way as follows:
x i ̲ P = r 1 = 1 R 1 r P 1 = 1 R P 1 g i 1 , r 1 ( 1 ) g r 1 , i 2 , r 2 ( 2 ) g r 2 , i 3 , r 3 ( 3 ) g r P 2 , i P 1 , r P 1 ( P 1 ) g r P 1 , i P ( P ) ,
or using row vector–matrix, matrix–matrix, and matrix–column vector products as follows:
x i ̲ P = g i 1 , ( 1 ) G , i 2 , ( 2 ) G , i 3 , ( 3 ) G , i P 1 , ( P 1 ) g , i P ( P ) ,
where g i 1 , ( 1 ) K 1 × R 1 and g , i P ( P ) K R P 1 are the i 1 th row vector of G ( 1 ) , and the i P th column-vector of G ( P ) , respectively, whereas G , i p , ( p ) K R p 1 × R p is the i p th lateral slice of G ( p ) , for p { 2 , , P 1 } , i.e.,
g i 1 , ( 1 ) = g i 1 , 1 ( 1 ) g i 1 , R 1 ( 1 ) , g , i P ( P ) = g 1 , i P ( P ) g R P 1 , i P ( P ) T G , i p , ( p ) = g 1 , i p , 1 ( p ) g 1 , i p , R p ( p ) g R p 1 , i p , 1 ( p ) g R p 1 , i p , R p ( p ) .
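The slice-product writing (15) is convenient for numerical checks. The following NumPy sketch is illustrative (the function name tt_element is ours, and the dimensions are arbitrary); it evaluates one entry of a fourth-order TT both from the core slices and from the fully contracted tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
I, R = (3, 4, 5, 2), (2, 3, 2)
cores = [rng.standard_normal((I[0], R[0])),
         rng.standard_normal((R[0], I[1], R[1])),
         rng.standard_normal((R[1], I[2], R[2])),
         rng.standard_normal((R[2], I[3]))]

def tt_element(cores, idx):
    # Entry x[i_1, ..., i_P] as a product of TT-core slices: a row of G(1),
    # the lateral slices of the middle cores, and a column of G(P).
    v = cores[0][idx[0], :]
    for p in range(1, len(cores) - 1):
        v = v @ cores[p][:, idx[p], :]
    return v @ cores[-1][:, idx[-1]]

# Full tensor by contracting the train from left to right.
X = np.einsum('ia,ajb,bkc,cl->ijkl', *cores)
```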
From Equation (15), we can conclude that TTD is not unique since it is invariant for all non-singular transformation matrices, T ( p ) K R p × R p , which transform the lateral slice G , i p , ( p ) of the core G ( p ) , for p { 2 , , P 1 } , in such a way that the following applies:
F , i p , ( p ) = T ( p 1 ) 1 G , i p , ( p ) T ( p ) ,
and, for p = 1 and p = P , F ( 1 ) = G ( 1 ) T ( 1 ) and F ( P ) = T ( P 1 ) 1 G ( P ) . Indeed, it is easy to verify that:
f i 1 , ( 1 ) F , i 2 , ( 2 ) F , i 3 , ( 3 ) F , i P 1 , ( P 1 ) f , i P ( P ) = g i 1 , ( 1 ) G , i 2 , ( 2 ) G , i 3 , ( 3 ) G , i P 1 , ( P 1 ) g , i P ( P ) = x i ̲ P .
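This invariance is easily checked numerically. The sketch below (illustrative, in NumPy, for a third-order TT with arbitrary dimensions) applies random non-singular transformation matrices between adjacent cores and verifies that the reconstructed tensor is unchanged.

```python
import numpy as np

rng = np.random.default_rng(3)
I1, I2, I3, R1, R2 = 3, 4, 5, 2, 3
G1 = rng.standard_normal((I1, R1))
G2 = rng.standard_normal((R1, I2, R2))
G3 = rng.standard_normal((R2, I3))
X = np.einsum('ia,ajb,bk->ijk', G1, G2, G3)

# Absorb random non-singular transforms between adjacent cores:
# F(1) = G(1) T(1), F(2) slices = inv(T(1)) G(2) slices T(2), F(3) = inv(T(2)) G(3).
T1 = rng.standard_normal((R1, R1))
T2 = rng.standard_normal((R2, R2))
F1 = G1 @ T1
F2 = np.einsum('ab,bjc,cd->ajd', np.linalg.inv(T1), G2, T2)
F3 = np.linalg.inv(T2) @ G3
X_transformed = np.einsum('ia,ajb,bk->ijk', F1, F2, F3)
```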
The TTD can also be written using outer products of column vectors as:
X = r 1 = 1 R 1 r P 1 = 1 R P 1 g , r 1 ( 1 ) g r 1 , , r 2 ( 2 ) g r P 2 , , r P 1 ( P 1 ) g r P 1 , ( P ) T ,
with g , r 1 ( 1 ) K I 1 , g r P 1 , ( P ) K 1 × I P , and g r p 1 , , r p ( p ) K I p , for p { 2 , , P 1 } . In Table 10, we summarize the different writings of a TTD, with r 0 = r P = 1 .
Note that, for the TTD model, each carriage is characterized by only one inner index, i p . The outer indices, r p , are associated with modes shared by two consecutive carriages. They represent the modes on which the tensor contractions operate.
The TTD model can be represented using a graph, as shown in Figure 4, for a fourth-order tensor, X K I ̲ 4 . The nodes, denoted as ( ) , represent matrices or third-order tensors in the train, and the edges are labeled by an index i p of X , or an index r p relative to the TT rank R p . The number of edges associated with a node is equal to two or three, depending on whether the node is a matrix or a third-order tensor, respectively. The number of inner indices i p is equal to the order of X .
TTD combines the advantages of CPD and TD in terms of the following: (i) parametric complexity, which is proportional to the tensor order P as for CPD, while it increases exponentially with the order P for TD, due to the dimensionality of the core tensor; and (ii) the existence of a stable parametric estimation algorithm, based on SVD calculations, as for TD.
In Table 11, we summarize the notations and the element x i ̲ N for the CPD, TD, and TTD models, and their parametric complexity is compared in terms of the size of matrix and tensor factors, assuming I n = I and R n = R for all n N .
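The parametric complexities compared in Table 11 can be written down directly. The helper functions below are ours, and they assume I n = I and R n = R for all modes, as in Table 11.

```python
def cpd_params(N, I, R):
    # N factor matrices of size I x R.
    return N * I * R

def td_params(N, I, R):
    # N factor matrices plus an R x ... x R core with R**N entries.
    return N * I * R + R ** N

def ttd_params(P, I, R):
    # Two I x R end matrices and (P - 2) cores of size R x I x R.
    return 2 * I * R + (P - 2) * I * R * R
```

For I = 10, R = 3, and order 4, this gives 120 parameters for CPD, 201 for TD, and 240 for TTD; as the order grows, the TD count is dominated by the exponential term R**N, while the CPD and TTD counts grow only linearly with the order.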
A generalization of TTD is now introduced by defining a train of tensors of any order. For instance, a tensor X K I ̲ 6 can be decomposed into a train of two matrices and two fourth-order tensors, as illustrated with the graph in Figure 5. Such a generalized TTD will be denoted as GTTD-(2,4,4,2), where each number is associated with the order of the corresponding tensor in the train. This type of generalization will be encountered for modeling relay systems based on tensor codings. See Section 5.3.

3.3. Nested Tensor Models

We now present two families of nested tensor models, namely NCPD and NTD. In Section 3.3.1 and Section 3.3.2, the standard NCPD-4 and NTD-4 models are described for a fourth-order tensor X K I ̲ 4 . Then, in Section 3.3.3, two generalizations corresponding to NTD-6 and NGTD-7 models are introduced for sixth- and seventh-order tensors, respectively.
The NCPD-4 and NTD-4 models, summarized in Table 12, will be abbreviated as follows:
A ( 1 ) , B ( 1 ) , G , A ( 2 ) , B ( 2 ) ; R 1 , R 2   and   A ( 1 ) , G ( 1 ) , U , G ( 2 ) , A ( 2 ) ; R 1 , R 2 , R 3 , R 4 ,
where, by analogy with the TT ranks, the integers ( R 1 , R 2 ) and ( R 1 , R 2 , R 3 , R 4 ) will be called NCPD and NTD ranks, respectively. The TTD-4 model is also presented in Table 12.
The NCPD-4 and NTD-4 models were first highlighted in [39,41], respectively, in the context of a point-to-point communication system using a DKRSTF coding and a two-hop relay system using a TST coding.

3.3.1. NCPD-4 Model

The NCPD-4 model for X K I ̲ 4 can be interpreted as the nesting of two CPD-3 models of the following third-order tensors X ( 1 ) K I 1 × I 2 × R 2 and X ( 2 ) K R 1 × I 3 × I 4 :
X ( 1 ) = I R 1 × 1 A ( 1 ) × 2 B ( 1 ) × 3 G T
X ( 2 ) = I R 2 × 1 G × 2 A ( 2 ) × 3 B ( 2 ) ,
that share the common matrix factor G , as shown in the following equation:
x i ̲ 4 = x i 1 , , i 4 = r 1 = 1 R 1 r 2 = 1 R 2 a i 1 , r 1 ( 1 ) b i 2 , r 1 ( 1 ) g r 1 , r 2 a i 3 , r 2 ( 2 ) b i 4 , r 2 ( 2 ) ,
where the red color is associated with the sum over index r 1 in the CPD model of the tensor X ( 1 ) , while the blue color is used for the sum over index r 2 in the CPD model of the tensor X ( 2 ) , and the common factor of these two CPDs is in green.
This nesting of the CPD-3 models (19) and (20), represented in Figure 6a, is characterized by five matrix factors A ( 1 ) , B ( 1 ) , G , A ( 2 ) , B ( 2 ) .
According to the definition of the CPD-3 models (19) and (20), we have the following:
x i 1 , i 2 , r 2 ( 1 ) = r 1 = 1 R 1 a i 1 , r 1 ( 1 ) b i 2 , r 1 ( 1 ) g r 1 , r 2
x r 1 , i 3 , i 4 ( 2 ) = r 2 = 1 R 2 g r 1 , r 2 a i 3 , r 2 ( 2 ) b i 4 , r 2 ( 2 ) .
Equation (21) can then be rewritten as follows:
x i ̲ 4 = r 1 = 1 R 1 a i 1 , r 1 ( 1 ) b i 2 , r 1 ( 1 ) x r 1 , i 3 , i 4 ( 2 ) = r 2 = 1 R 2 x i 1 , i 2 , r 2 ( 1 ) a i 3 , r 2 ( 2 ) b i 4 , r 2 ( 2 ) .
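The equivalence between the nested sum (21) and the two-stage writing (24) can be verified numerically. The following NumPy sketch is illustrative (the dimensions are arbitrary); it builds an NCPD-4 tensor from its five matrix factors and recomputes it through the intermediate tensor X ( 1 ) .

```python
import numpy as np

rng = np.random.default_rng(1)
I1, I2, I3, I4, R1, R2 = 2, 3, 4, 2, 2, 3
A1, B1 = rng.standard_normal((I1, R1)), rng.standard_normal((I2, R1))
G = rng.standard_normal((R1, R2))
A2, B2 = rng.standard_normal((I3, R2)), rng.standard_normal((I4, R2))

# Eq. (21): two nested sums over r1 and r2 sharing the factor G.
X = np.einsum('ia,ja,ab,kb,lb->ijkl', A1, B1, G, A2, B2)

# Same tensor through the intermediate CPD-3 tensor X(1) of Eq. (19).
X1 = np.einsum('ia,ja,ab->ijb', A1, B1, G)          # I1 x I2 x R2
X_nested = np.einsum('ijb,kb,lb->ijkl', X1, A2, B2)
```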
Let us define the matrix unfoldings X I 1 I 2 × R 2 ( 1 ) and X I 3 I 4 × R 1 ( 2 ) , which result from a combination of the first two modes of X ( 1 ) and the last two modes of X ( 2 ) , respectively. The writing (24) of x i ̲ 4 can be interpreted as two CPD-3 models associated with the following third-order contracted forms X c 1 K I 1 × I 2 × I 3 I 4 and X c 2 K I 1 I 2 × I 3 × I 4 of the fourth-order tensor X :
X c 1 = I R 1 × 1 A ( 1 ) × 2 B ( 1 ) × 3 X I 3 I 4 × R 1 ( 2 )
X c 2 = I R 2 × 1 X I 1 I 2 × R 2 ( 1 ) × 2 A ( 2 ) × 3 B ( 2 ) .
Figure 6b illustrates another interpretation of the NCPD-4 model, deduced from the contracted form (25), as a cascade of two CPD-3 models. A similar interpretation in terms of cascade can be deduced from the contracted form (26).
Matrix unfoldings associated with the CPD models of the third-order tensors X ( 1 ) , X ( 2 ) , X c 1 and X c 2 , defined in Equations (19), (20), (25) and (26), are deduced from the general formulae recalled in Table 7, as summarized in Table 13. Combining these matrix unfoldings leads to matrix unfoldings for the NCPD-4 model, as given in Table 14.
Applying the property vec ( A B C T ) = ( C A ) vec ( B ) to the last equation in Table 14 gives a vectorized form of the tensor X :
x I 3 I 4 I 1 I 2 = vec ( X I 1 I 2 × I 3 I 4 ) = ( A ( 2 ) B ( 2 ) ) ( A ( 1 ) B ( 1 ) ) vec ( G ) .
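This vectorized form can be checked with a column-wise Khatri–Rao product and a column-major vec, as in the following illustrative NumPy sketch (the helper kr is ours; the dimensions are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(5)

def kr(A, B):
    # Column-wise Khatri-Rao product.
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

I1, I2, I3, I4, R1, R2 = 2, 3, 4, 2, 2, 3
A1, B1 = rng.standard_normal((I1, R1)), rng.standard_normal((I2, R1))
G = rng.standard_normal((R1, R2))
A2, B2 = rng.standard_normal((I3, R2)), rng.standard_normal((I4, R2))

# Unfolding X_{I1I2 x I3I4} = (A1 kr B1) G (A2 kr B2)^T.
X_unf = kr(A1, B1) @ G @ kr(A2, B2).T

# vec(A B C^T) = (C kron A) vec(B), with column-major (Fortran-order) vec.
lhs = X_unf.flatten(order='F')
rhs = np.kron(kr(A2, B2), kr(A1, B1)) @ G.flatten(order='F')
```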
Note that the first four matrix unfoldings in Table 14 and the vectorized form (27) contain one isolated factor (to the right of the equations) from ( A ( 1 ) , B ( 1 ) , G , A ( 2 ) , B ( 2 ) ) . These unfoldings will be used separately and iteratively to estimate the matrix factors of the NCPD-4 model, with a five-step alternating least squares (ALS) algorithm, as presented in Section 4.2.
When some factors are known, the ALS algorithm can be simplified. This is the case in the context of a relay system using the SKRST coding, where the factors ( B ( 1 ) , A ( 2 ) ) represent coding matrices assumed to be known at the receiver. The other three factors ( A ( 1 ) , G , B ( 2 ) ) , representing the symbol and channel matrices, can then be estimated using a three-step ALS algorithm. See Equations (67), (68) and (70).
From the cascade structure shown in Figure 6b, and, therefore, from Equation (25), we conclude that the tensor X can be represented by means of two coupled CPD-3 models associated with the contracted form X c 1 and the tensor X ( 2 ) , i.e., Equations (25) and (20). Similarly, the coupled CPD-3 models of the contracted form X c 2 and the tensor X ( 1 ) , i.e., Equations (26) and (19), can be used to represent the tensor X .
When ( B ( 1 ) , A ( 2 ) ) are assumed to be known, each CPD-3 model of X ( 1 ) , X ( 2 ) , X c 1 and X c 2 contains two unknown matrix factors that can be estimated using a closed-form algorithm based on the Khatri–Rao factorization (KRF) method, as first proposed in [49].
Exploiting each set of coupled CPD-3 models allows for the estimation of the three unknown factors ( A ( 1 ) , G , B ( 2 ) ) by means of a two-step procedure based on KRF. The resulting two KRF-based closed-form algorithms, which use the unfoldings in green in Table 13, will be described in Section 4.2. See Table 18.
Remark 1. 
The essential uniqueness of a CPD model, i.e., uniqueness up to column permutation and scaling ambiguities in the factor matrices, has been the subject of numerous articles in the literature. From a modeling point of view, uniqueness means that the model A Π Λ ( A ) , B Π Λ ( B ) , C Π Λ ( C ) ; R is equivalent to the model A , B , C ; R , where Π is a permutation matrix, and ( Λ ( A ) , Λ ( B ) , Λ ( C ) ) are R × R diagonal matrices, such that their product is the identity matrix: Λ ( A ) Λ ( B ) Λ ( C ) = I R .
The following sufficient uniqueness condition has been established by Kruskal [50] for a third-order CPD model A , B , C ; R :
k A + k B + k C 2 R + 2 ,
where k A denotes the k-rank (also known as Kruskal's rank) of A , i.e., the largest integer k A such that every set of k A columns of A is linearly independent.
When a factor matrix (e.g., C ) is known, and the Kruskal condition (28) is satisfied, essential uniqueness is guaranteed without any permutation ambiguity and with only two diagonal ambiguity matrices ( Λ ( A ) , Λ ( B ) ) , such that Λ ( A ) Λ ( B ) = I R .
Applying Kruskal’s condition to the CPD models (19) and (20) leads to the following uniqueness conditions for the NCPD-4 model:
k A ( 1 ) + k B ( 1 ) + k G T 2 R 1 + 2
k A ( 2 ) + k B ( 2 ) + k G 2 R 2 + 2 .
As mentioned previously, in the context of relay systems, the factors ( B ( 1 ) , A ( 2 ) ) will be assumed to be known and of full column rank, implying k B ( 1 ) = R 1 and k A ( 2 ) = R 2 . Kruskal's conditions (29) and (30) then become
k A ( 1 ) + k G T R 1 + 2 and k B ( 2 ) + k G R 2 + 2 ,
and the ambiguity relations for the NCPD-4 model (19) and (20) can be simplified as follows:
Λ ( A ( 1 ) ) Λ ( G T ) = I R 1 and Λ ( B ( 2 ) ) Λ ( G ) = I R 2 .
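The k-rank used in Kruskal's conditions has no simple closed-form expression in general, but for the small matrices typically encountered in simulations it can be computed by brute force. The function below is our illustrative sketch; it tests every subset of columns for linear independence.

```python
import numpy as np
from itertools import combinations

def k_rank(A, tol=1e-10):
    # Kruskal rank: the largest k such that EVERY set of k columns of A
    # is linearly independent (brute force, fine for small matrices).
    n_cols = A.shape[1]
    k = 0
    for size in range(1, n_cols + 1):
        if all(np.linalg.matrix_rank(A[:, list(c)], tol=tol) == size
               for c in combinations(range(n_cols), size)):
            k = size
        else:
            break
    return k

A_generic = np.array([[1., 0., 1.],
                      [0., 1., 1.]])   # every pair of columns independent
A_repeat = np.array([[1., 1.],
                     [2., 2.]])        # a repeated column limits the k-rank to 1
```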

3.3.2. NTD-4 Model

In this section, we present the NTD-4 model for a fourth-order tensor, X K I ̲ 4 . This model, introduced in Table 12, corresponds to the nesting of two Tucker-(2,3) decompositions of the third-order tensors X ( 1 ) K I 1 × I 2 × R 3 and X ( 2 ) K R 2 × I 3 × I 4 :
X ( 1 ) = G ( 1 ) × 1 A ( 1 ) × 2 I I 2 × 3 U T
X ( 2 ) = G ( 2 ) × 1 U × 2 I I 3 × 3 A ( 2 ) ,
that share the matrix factor U , as illustrated in the following equation:
x i ̲ 4 = r 1 = 1 R 1 r 2 = 1 R 2 r 3 = 1 R 3 r 4 = 1 R 4 a i 1 , r 1 ( 1 ) g r 1 , i 2 , r 2 ( 1 ) u r 2 , r 3 g r 3 , i 3 , r 4 ( 2 ) a i 4 , r 4 ( 2 ) .
The red color is associated with the sums over indices r 1 and r 2 in the TD-(2,3) model of the tensor X ( 1 ) , while the blue color is used for the sums over indices r 3 and r 4 in the TD-(2,3) model of the tensor X ( 2 ) , and the common factor of these two TDs is in green.
Equation (35) is to be compared with Equation (21) of the NCPD-4 model. From this comparison, we deduce that the nesting of two CPD-3 models in the case of NCPD-4 is replaced with the nesting of two TD-(2,3) models for NTD-4.
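As for NCPD-4, the nesting (33)-(35) is easily verified numerically. The NumPy sketch below is illustrative (arbitrary dimensions); it computes the four nested sums of (35) directly and through the two TD-(2,3) blocks sharing U .

```python
import numpy as np

rng = np.random.default_rng(4)
I1, I2, I3, I4 = 2, 3, 3, 2
R1, R2, R3, R4 = 2, 2, 2, 2
A1 = rng.standard_normal((I1, R1))
G1 = rng.standard_normal((R1, I2, R2))
U = rng.standard_normal((R2, R3))
G2 = rng.standard_normal((R3, I3, R4))
A2 = rng.standard_normal((I4, R4))

# Eq. (35): four nested sums along the train A(1)-G(1)-U-G(2)-A(2).
X = np.einsum('ia,ajb,bc,ckd,ld->ijkl', A1, G1, U, G2, A2)

# Same tensor via the two TD-(2,3) blocks sharing U.
X1 = np.einsum('ia,ajb,bc->ijc', A1, G1, U)   # X(1), size I1 x I2 x R3
X2 = np.einsum('ckd,ld->ckl', G2, A2)         # G(2) contracted with A(2)
X_nested = np.einsum('ijc,ckl->ijkl', X1, X2)
```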
The NTD-4 model can also be viewed as a particular TTD model, represented in Figure 7a and the graph in Figure 8. This TTD is composed of three matrix factors and two third-order core tensors: ( A ( 1 ) , G ( 1 ) , U , G ( 2 ) , A ( 2 ) ) .
Following the same approach as for NCPD-4, the fourth-order tensor X can be written using the following two third-order contracted forms X c 1 K I 1 × I 2 × I 3 I 4 and X c 2 K I 1 I 2 × I 3 × I 4 , which result from combinations of the last two modes of X ( 2 ) and the first two modes of X ( 1 ) , respectively:
X c 1 = G ( 1 ) × 1 A ( 1 ) × 2 I I 2 × 3 X I 3 I 4 × R 2 ( 2 )
X c 2 = G ( 2 ) × 1 X I 1 I 2 × R 3 ( 1 ) × 2 I I 3 × 3 A ( 2 ) .
These contracted decompositions are to be compared with Equations (25) and (26) for the NCPD-4 model where the core tensors G ( 1 ) and G ( 2 ) are replaced with the identity tensors I R 1 and I R 2 , respectively, and the identity matrices ( I I 2 , I I 3 ) are replaced with the matrix factors ( B ( 1 ) , A ( 2 ) ) . From Equation (36), we deduce that the NTD-4 model can be viewed as the cascade of two TD-(2,3) models, as illustrated in Figure 7b. A similar cascade structure can be deduced from Equation (37).
Matrix unfoldings associated with the TD models of the third-order tensors X ( 1 ) , X ( 2 ) , X c 1 and X c 2 , defined in Equations (33)–(37), are obtained using the general formulae recalled in Table 6, as summarized in Table 15.
Vectorizing X I 1 I 2 × I 3 I 4 yields the following vectorized form of X :
x I 3 I 4 I 1 I 2 = vec ( X I 1 I 2 × I 3 I 4 ) = ( I I 3 A ( 2 ) ) G I 3 R 4 × R 3 ( 2 ) ( A ( 1 ) I I 2 ) G R 1 I 2 × R 2 ( 1 ) vec ( U ) .
By combining the unfoldings in Table 15, it is easy to obtain the unfoldings of the NTD-4 model summarized in Table 16, where the last equation is the vectorized form (38).
Remark 2. 
We can make the following remarks:
  • The equations in Table 16 can be used to build a five-step ALS-based algorithm to separately and iteratively estimate the factors ( A ( 1 ) , G ( 1 ) , A ( 2 ) , G ( 2 ) , U ) of the NTD-4 model.
  • When the tensors G ( 1 ) and G ( 2 ) are known, the equations in Table 16 can be used to estimate the unknown matrix factors ( A ( 1 ) , A ( 2 ) , U ) of the NTD-4 model using a three-step ALS algorithm. In Section 4.3, equations in Table 15 will be exploited to derive two closed-form algorithms based on the Kronecker factorization (KronF) method. These algorithms will be used to develop semi-blind receivers for relay systems where the tensors G ( 1 ) and G ( 2 ) are STST coding tensors assumed to be known at the destination.
  • Using the writing (15) of a TTD, the NTD-4 model (35) can be rewritten as follows:
    x i ̲ 4 = a i 1 , ( 1 ) G , i 2 , ( 1 ) U G , i 3 , ( 2 ) a , i 4 ( 2 ) ,
    which allows for the conclusion that the NTD-4 model is unique up to non-singular matrices Λ ( n ) K R n × R n , n 4 , such that the following applies:
    A ˜ ( 1 ) = A ( 1 ) Λ ( 1 ) , U ˜ = [ Λ ( 2 ) ] 1 U Λ ( 3 ) , A ˜ ( 2 ) = [ Λ ( 4 ) ] 1 A ( 2 )
    G ˜ , i 2 , ( 1 ) = [ Λ ( 1 ) ] 1 G , i 2 , ( 1 ) Λ ( 2 ) , G ˜ , i 3 , ( 2 ) = [ Λ ( 3 ) ] 1 G , i 3 , ( 2 ) Λ ( 4 ) .
    Indeed, it is easy to verify that a ˜ i 1 , ( 1 ) G ˜ , i 2 , ( 1 ) U ˜ G ˜ , i 3 , ( 2 ) a ˜ , i 4 ( 2 ) = a i 1 , ( 1 ) G , i 2 , ( 1 ) U G , i 3 , ( 2 ) a , i 4 ( 2 ) = x i ̲ 4 .
  • When the core tensors G ( 1 ) and G ( 2 ) are perfectly known, as will be the case with the coding tensors in the context of relay systems, the ambiguity matrices Λ ( n ) become identity matrices multiplied by a scalar:
    Λ ( n ) = α n I R n for n 4 , with n = 1 4 α n = 1 ; α 2 = α 1 , α 3 = α 4 and U ˜ = α 4 α 1 U .
    The scaling factors α 1 and α 4 can be determined using the knowledge of one element in A ( 1 ) and A ( 2 ) , respectively.

3.3.3. NTD-6 and NGTD-7 Models

In Table 17, we present two generalizations of the NTD-4 model: the NTD-6 and NGTD-7 models, for tensors of order 6 and 7, respectively. These models will be used to represent relay systems with TST and TSTF codings, respectively, in Section 5.
The NTD-6 model corresponds to a GTTD-(2,4,2,4,2), which means a train of three matrix factors ( A ( 1 ) , U , A ( 2 ) ) and two fourth-order core tensors ( G ( 1 ) , G ( 2 ) ) , illustrated in the graph in Figure 9, to be compared with Figure 8 for NTD-4. It can be viewed as the nesting of the following two TD-(2,4) models (see Table 8 with ( N 1 , N ) = ( 2 , 4 ) ) of the fourth-order tensors X ( 1 ) K I 1 × I 2 × I 3 × R 3 and X ( 2 ) K R 2 × I 4 × I 5 × I 6 , which share the matrix factor U :
X ( 1 ) = G ( 1 ) × 1 A ( 1 ) × 4 U T
X ( 2 ) = G ( 2 ) × 1 U × 4 A ( 2 ) .
The NGTD-7 model is a GTTD-(3,5,3,5,2), represented in Figure 10. It corresponds to the nesting of the following two GTD-(2,5) models:
X ( 1 ) = G ( 1 ) × 1 3 A ( 1 ) × 5 1 U K I 1 × I 2 × I 3 × I 4 × R 3
X ( 2 ) = G ( 2 ) × 1 3 U × 5 A ( 2 ) K R 2 × I 2 × I 5 × I 6 × I 7 ,
which share the mode i 2 and the third-order tensor factor U . Equations (45) and (46) are to be compared with Equations (33) and (34) for the NTD-4 model. From this comparison, we deduce the correspondences ( I 1 , I 2 , I 3 , I 4 ) → ( I 2 I 1 , I 2 I 3 I 4 , I 2 I 5 I 6 , I 7 ) , and the contracted forms (36) and (37) become the following:
X c 1 = G ( 1 ) × 1 3 A ( 1 ) × 5 X I 2 I 5 I 6 I 7 × R 2 ( 2 ) K I 1 × I 2 × I 3 × I 4 × I 5 I 6 I 7
X c 2 = G ( 2 ) × 1 X I 1 I 2 I 3 I 4 × R 3 ( 1 ) × 5 A ( 2 ) K I 1 I 2 I 3 I 4 × I 5 × I 6 × I 7 .
Sharing mode i 2 among the factors ( A ( 1 ) , G ( 1 ) , U , G ( 2 ) ) means that the Kronecker products in matrix unfoldings can be calculated separately for each value of i 2 I 2 , using block-diagonal matrices composed of I 2 blocks, denoted as bdiag i 2 ( . ) . From the GTD-(2,5) models (45) and (46) of X ( 1 ) and X ( 2 ) , we deduce the following block-diagonal matrix unfoldings:
bdiag i 2 X I 1 R 3 × I 3 I 4 ( 1 ) ( i 2 ) = bdiag i 2 A I 1 × R 1 ( 1 ) ( i 2 ) U R 3 × R 2 ( i 2 ) bdiag i 2 G R 1 R 2 × I 3 I 4 ( 1 ) ( i 2 )
bdiag i 2 X I 7 R 2 × I 5 I 6 ( 2 ) ( i 2 ) = A ( 2 ) bdiag i 2 ( U R 2 × R 3 ( i 2 ) ) bdiag i 2 ( G R 4 R 3 × I 5 I 6 ( 2 ) ( i 2 ) ) ,
where the first term on the right-hand side of (49) denotes a block-diagonal matrix whose diagonal blocks are the Kronecker products of the unfoldings of A ( 1 ) and U calculated for a given i 2 .
Similarly, from the models (47) and (48) of X c 1 and X c 2 , we have the following block-diagonal matrix unfoldings:
bdiag i 2 X I 1 I 5 I 6 I 7 × I 3 I 4 ( i 2 ) = bdiag i 2 A I 1 × R 1 ( 1 ) ( i 2 ) X I 5 I 6 I 7 × R 2 ( 2 ) ( i 2 ) bdiag i 2 G R 1 R 2 × I 3 I 4 ( 1 ) ( i 2 )
bdiag i 2 X I 7 I 1 I 3 I 4 × I 5 I 6 ( i 2 ) = A ( 2 ) bdiag i 2 ( X I 1 I 3 I 4 × R 3 ( 1 ) ( i 2 ) ) bdiag i 2 ( G R 4 R 3 × I 5 I 6 ( 2 ) ( i 2 ) ) .
The matrix unfoldings (49)–(52) will be used in Section 4.5 to devise two closed-form parameter estimation algorithms.
In the cases of the NTD-6 and NGTD-7 models, Equation (39) becomes, respectively, the following:
x i ̲ 6 = a i 1 , ( 1 ) G , i 2 , i 3 , ( 1 ) U G , i 4 , i 5 , ( 2 ) a , i 6 ( 2 )
x i ̲ 7 = a i 1 , i 2 , ( 1 ) G , i 2 , i 3 , i 4 , ( 1 ) U , i 2 , G , i 2 , i 5 , i 6 , ( 2 ) a , i 7 ( 2 ) .
From these equations, we can draw the same conclusions as for the NTD-4 model, namely that the NTD-6 and NGTD-7 models are unique up to non-singular matrices Λ ( n ) K R n × R n , n 4 , and that, when the tensors G ( 1 ) and G ( 2 ) are known, the ambiguity matrices Λ ( n ) become identity matrices multiplied by a scalar.
Remark 3. 
A simplified version of the NGTD-7 model, called NGTD-5 and represented in the graph in Figure 11, will be used in Section 5 to model the tensor of received signals in a relay system employing a simplified TSTF coding, denoted as STSTF. This model for a fifth-order tensor, X K I ̲ 5 , corresponds to a generalized tensor train decomposition GTTD-(3,4,3,4,2), which results from the nesting of the following two GTD-(2,4) models:
X ( 1 ) = G ( 1 ) × 1 3 A ( 1 ) × 4 1 U K I 1 × I 2 × I 3 × R 3
X ( 2 ) = G ( 2 ) × 1 3 U × 5 A ( 2 ) K R 2 × I 2 × I 4 × I 5 ,
with
A ( 1 ) K I 1 × I 2 × R 1 ; G ( 1 ) K R 1 × I 2 × I 3 × R 2 ; U K R 2 × I 2 × R 3
G ( 2 ) K R 3 × I 2 × I 4 × R 4 ; A ( 2 ) K I 5 × R 4 .
Note that the tensor factors ( A ( 1 ) , G ( 1 ) , U , G ( 2 ) ) have the dimension I 2 in common, and the core tensors G ( 1 ) and G ( 2 ) are of fourth-order instead of fifth-order for NGTD-7.
In Figure 12, we summarize two families of tensor models using two trees, based on the basic CPD/PARAFAC and Tucker decompositions. For the definition of acronyms and associated references, see Appendix A.
Note that the NGTD and some variants of nested models are introduced, for the first time, in the present paper as generalizations of the NTD-4 model [41]. Nested tensor models will be used in Section 5 to represent two-hop relay systems using different tensor- and matrix-based codings. See Table 25.
Remark 4. 
The standard PARATUCK-2 model for a third-order tensor combines the properties of the PARAFAC and Tucker models. It was originally introduced in the context of psychometric applications [51]. In the context of wireless communications, a first application of the PARATUCK-2 model was proposed for blind joint identification and equalization of Wiener–Hammerstein nonlinear communication channels [52]. Then, it was used to model a two-hop relay system using simplified Khatri–Rao space-time (SKRST) coding [53]. An extension, denoted as PARATUCK- ( 2 , 4 ) , was introduced in [37] to model the fourth-order tensor of signals received in a MIMO communication system with a tensor space-time (TST) coding.

4. Parameter Estimation Algorithms

In this section, we present two families of algorithms to estimate the parameters of the tensor models introduced in the previous section: alternating least squares (ALS)-based algorithms and closed-form algorithms. The latter use the Khatri–Rao and Kronecker factorization methods, respectively denoted as KRF and KronF and described in Appendix B and Appendix C. Closed-form algorithms can be applied when some factors of the models are known a priori, as will be the case with coding matrices and tensors in the context of the relay systems studied in Section 5. The standard ALS algorithm is first recalled for a third-order CPD/PARAFAC model.

4.1. CPD/PARAFAC Model

The ALS algorithm was first proposed to estimate the parameters of a CPD/PARAFAC model in [8,47]. Such a model for a third-order tensor X is trilinear in its parameters in the sense that it is linear with respect to each of its three matrix factors ( A , B , C ) . When the relations (11) are taken into account, the idea behind the ALS algorithm is to replace the global minimization of the quadratic error between the data tensor and its CPD/PARAFAC model
min A , B , C X A , B , C ; R F 2
with an alternating minimization of three quadratic cost functions deduced from matrix unfoldings of X , each one being quadratic with respect to one factor, while the other two are fixed at their previously estimated values.
At iteration t, the minimization of each cost function provides an estimated matrix factor:
min A X J K × I ( B t 1 C t 1 ) A T F 2 A t T = ( B t 1 C t 1 ) X J K × I
min B X K I × J ( C t 1 A t ) B T F 2 B t T = ( C t 1 A t ) X K I × J
min C X I J × K ( A t B t ) C T F 2 C t T = ( A t B t ) X I J × K
where A t denotes the LS estimate of A at iteration t.
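These three alternating updates can be sketched in NumPy as follows. This is an illustrative implementation with pseudo-inverse updates (the helpers kr and cpd_als are our names); on noiseless data of exact rank R it typically converges, although ALS offers no general convergence guarantee.

```python
import numpy as np

rng = np.random.default_rng(2)

def kr(A, B):
    # Column-wise Khatri-Rao product.
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def cpd_als(X, R, iters=500):
    # Plain ALS for a third-order CPD (A, B, C; R), alternating the three
    # LS updates A^T = pinv(kr(B, C)) @ X_{JK x I}, etc.
    I, J, K = X.shape
    A = rng.standard_normal((I, R))
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    X_JK_I = X.transpose(1, 2, 0).reshape(J * K, I)
    X_KI_J = X.transpose(2, 0, 1).reshape(K * I, J)
    X_IJ_K = X.reshape(I * J, K)
    for _ in range(iters):
        A = (np.linalg.pinv(kr(B, C)) @ X_JK_I).T
        B = (np.linalg.pinv(kr(C, A)) @ X_KI_J).T
        C = (np.linalg.pinv(kr(A, B)) @ X_IJ_K).T
    return A, B, C

# Noiseless rank-2 example.
A0, B0, C0 = (rng.standard_normal((4, 2)), rng.standard_normal((5, 2)),
              rng.standard_normal((3, 2)))
X = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cpd_als(X, 2)
X_hat = np.einsum('ir,jr,kr->ijk', A, B, C)
```

The recovered factors match ( A 0 , B 0 , C 0 ) only up to the column permutation and scaling ambiguities discussed in Remark 1, which is why the fit is assessed on the reconstructed tensor.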

4.2. NCPD-4 Model

In the next two subsections, we first present a five-step ALS-based algorithm to estimate the five matrix factors A ( 1 ) , B ( 1 ) , G , A ( 2 ) , B ( 2 ) of the NCPD-4 model (21). The case where the factors ( B ( 1 ) , A ( 2 ) ) are known is also considered. Then, under this assumption, two closed-form algorithms using the KRF method are described. Necessary identifiability conditions are given for each identification algorithm.

4.2.1. ALS Algorithm

Considering the first four matrix unfoldings in Table 14 with Equation (27), and applying the same approach as for the CPD model, we derive the following five LS cost functions to estimate the parameters of a NCPD-4 model:
min B ( 1 ) X I 1 I 3 I 4 × I 2 A t 1 ( 1 ) ( A t 1 ( 2 ) B t 1 ( 2 ) ) G t 1 T ( B ( 1 ) ) T F 2 B t ( 1 )
min A ( 1 ) X I 2 I 3 I 4 × I 1 B t ( 1 ) ( A t 1 ( 2 ) B t 1 ( 2 ) ) G t 1 T ( A ( 1 ) ) T F 2 A t ( 1 )
min B ( 2 ) X I 3 I 1 I 2 × I 4 A t 1 ( 2 ) ( A t ( 1 ) B t ( 1 ) ) G t 1 ( B ( 2 ) ) T F 2 B t ( 2 )
min A ( 2 ) X I 4 I 1 I 2 × I 3 B t ( 2 ) ( A t ( 1 ) B t ( 1 ) ) G t 1 ( A ( 2 ) ) T F 2 A t ( 2 )
min G x I 3 I 4 I 1 I 2 ( A t ( 2 ) B t ( 2 ) ) ( A t ( 1 ) B t ( 1 ) ) vec ( G ) F 2 G t .
Minimizing the above LS criteria leads to the following equations for the ALS algorithm:
( B t ( 1 ) ) T = A t 1 ( 1 ) ( A t 1 ( 2 ) B t 1 ( 2 ) ) G t 1 T X I 1 I 3 I 4 × I 2
( A t ( 1 ) ) T = B t ( 1 ) ( A t 1 ( 2 ) B t 1 ( 2 ) ) G t 1 T X I 2 I 3 I 4 × I 1
( B t ( 2 ) ) T = A t 1 ( 2 ) ( A t ( 1 ) B t ( 1 ) ) G t 1 X I 3 I 1 I 2 × I 4
( A t ( 2 ) ) T = B t ( 2 ) ( A t ( 1 ) B t ( 1 ) ) G t 1 X I 4 I 1 I 2 × I 3
vec ( G t ) = ( A t ( 2 ) B t ( 2 ) ) ( A t ( 1 ) B t ( 1 ) ) x I 3 I 4 I 1 I 2 .
This algorithm requires the pseudo-inversion of five matrices, implying the following necessary identifiability conditions on the dimensions to guarantee the uniqueness of the pseudo-inverses:
I 1 I 3 I 4 R 1 ; I 2 I 3 I 4 R 1 ; I 3 I 1 I 2 R 2 ; I 4 I 1 I 2 R 2 ; I 3 I 4 I 1 I 2 R 1 R 2 .
As is well known, the main drawback of iterative algorithms based on ALS is slow convergence, with the possibility of converging to a local minimum, which strongly depends on the choice of initialization.
When the factors ( B ( 1 ) , A ( 2 ) ) are known, as will be the case with coding matrices in the context of relay systems, a three-step ALS algorithm composed of Equations (67), (68) and (70) can be used to estimate the unknown factors ( A ( 1 ) , B ( 2 ) , G ) .

4.2.2. KRF-Based Closed-Form Algorithms

As mentioned in Section 3.3.1, when the factors ( B ( 1 ) , A ( 2 ) ) are known, two closed-form algorithms using the KRF method can be devised as shown hereafter. These algorithms, composed of two steps, are deduced from unfoldings (in green) in Table 13.
The first algorithm is based on the following matrix unfoldings of X c 2 and X ( 1 ) deduced from the coupled CPD-3 models (26) and (19), respectively:
X I 4 I 1 I 2 × I 3 = ( B ( 2 ) X I 1 I 2 × R 2 ( 1 ) ) ( A ( 2 ) ) T
X I 1 R 2 × I 2 ( 1 ) = ( A ( 1 ) G T ) ( B ( 1 ) ) T .
These equations are successively exploited as follows. In the first step, the LS estimate of the Khatri–Rao product (KRP) in (72) is calculated, and its factors ( B ( 2 ) , X I 1 I 2 × R 2 ( 1 ) ) are determined using the KRF method. In the second step, the estimate X ^ I 1 I 2 × R 2 ( 1 ) is reshaped into X ^ I 1 R 2 × I 2 ( 1 ) , the LS estimate of the KRP in (73) is calculated with X I 1 R 2 × I 2 ( 1 ) replaced with this reshaped estimate, and the KRF method is applied again to estimate the factors ( A ( 1 ) , G T ) . The resulting algorithm, namely Closed-Form Algorithm 1, is summarized in Table 18.
Similarly, a second closed-form solution can be derived from the following coupled matrix unfoldings of X c 1 and X ( 2 ) deduced from the CPD-3 models (25) and (20), respectively:
X I 1 I 3 I 4 × I 2 = ( A ( 1 ) X I 3 I 4 × R 1 ( 2 ) ) ( B ( 1 ) ) T
X I 4 R 1 × I 3 ( 2 ) = ( B ( 2 ) G ) ( A ( 2 ) ) T .
Following the same approach as for the Closed-Form Algorithm 1, the Closed-Form Algorithm 2 is summarized in Table 18.
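The KRF step underlying both algorithms is itself closed-form: each column of a Khatri–Rao product, reshaped into a matrix, is rank one, so its two factors are recovered from the leading singular triplet. A minimal NumPy sketch follows (illustrative; kr and krf are our names, and the dimensions are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(6)

def kr(A, B):
    # Column-wise Khatri-Rao product.
    return np.einsum('ir,jr->ijr', A, B).reshape(-1, A.shape[1])

def krf(M, I, J):
    # Khatri-Rao factorization: column r of M = kr(A, B), reshaped to I x J,
    # equals the rank-one matrix a_r b_r^T; recover both factors from its
    # leading singular triplet.
    R = M.shape[1]
    A = np.zeros((I, R))
    B = np.zeros((J, R))
    for r in range(R):
        U, s, Vt = np.linalg.svd(M[:, r].reshape(I, J))
        A[:, r] = np.sqrt(s[0]) * U[:, 0]
        B[:, r] = np.sqrt(s[0]) * Vt[0]
    return A, B

A0, B0 = rng.standard_normal((4, 3)), rng.standard_normal((5, 3))
M = kr(A0, B0)                  # exact Khatri-Rao product, size 20 x 3
A_hat, B_hat = krf(M, 4, 5)
```

Each column pair is recovered up to a scalar factor absorbed between a_r and b_r, which corresponds to the usual scaling ambiguity discussed in Remark 1; the KRP itself is recovered exactly.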
Identifiability conditions for the closed-form algorithms are linked with the uniqueness of right inverses of ( B ( 1 ) ) T and ( A ( 2 ) ) T , which yields the following necessary conditions:
I 2 R 1 and I 3 R 2 .
Comparing the identifiability conditions (76) with (71), one can conclude that the ALS algorithm is less constraining than the closed-form ones.
Remark 5. 
It is worth noting that the closed-form algorithms suffer from error propagation: the LS estimate of the KRP in the second step relies on the estimates X ^ I 1 I 2 × R 2 ( 1 ) and X ^ I 3 I 4 × R 1 ( 2 ) obtained in the first step. To alleviate error propagation, a solution consists of combining both closed-form algorithms to estimate the factors ( A ( 1 ) , B ( 2 ) ) in parallel. Then, the factor G is estimated using one of the following formulae deduced from the first and third equations in Table 13:
$\hat{G} = \left( \hat{A}^{(1)} \diamond B^{(1)} \right)^{\dagger} \hat{X}^{(1)}_{I_1 I_2 \times R_2} \quad \text{or} \quad \hat{G}^T = \left( A^{(2)} \diamond \hat{B}^{(2)} \right)^{\dagger} \hat{X}^{(2)}_{I_3 I_4 \times R_1}, \qquad (77)$
where ( A ^ ( 1 ) , X ^ I 1 I 2 × R 2 ( 1 ) ) and ( B ^ ( 2 ) , X ^ I 3 I 4 × R 1 ( 2 ) ) are estimates obtained with the closed-form algorithms. The identifiability conditions for these combined closed-form algorithms are given by (76) with additional conditions resulting from the pseudo-inverses to be calculated in (77):
$I_1 I_2 \geq R_1 \quad \text{and} \quad I_3 I_4 \geq R_2. \qquad (78)$
These conditions are necessarily satisfied when conditions (76) are.
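As a numerical illustration of the LS step in (77), the sketch below (with toy dimensions of our choosing, and treating the Khatri-Rao factors as known) builds the unfolding $(A^{(1)} \diamond B^{(1)})\,G$ and recovers $G$ with a pseudo-inverse:

```python
import numpy as np

rng = np.random.default_rng(0)
I1, I2, R1, R2 = 4, 5, 3, 2          # hypothetical dimensions
A = rng.standard_normal((I1, R1))     # plays the role of A^(1)
B = rng.standard_normal((I2, R1))     # plays the role of B^(1)
G = rng.standard_normal((R1, R2))     # core factor to be recovered

# Column-wise Khatri-Rao product A ⋄ B (shape I1*I2 x R1).
KRP = (A[:, None, :] * B[None, :, :]).reshape(I1 * I2, R1)

X = KRP @ G                           # unfolding X_{I1 I2 x R2} = (A ⋄ B) G
G_hat = np.linalg.pinv(KRP) @ X       # LS estimate of G, as in Eq. (77)
```

The pseudo-inverse exists as a left inverse whenever $I_1 I_2 \geq R_1$ and the KRP has full column rank, which matches the identifiability condition (78).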

4.3. NTD-4 Model

In the following, we assume that the core tensors ( G ( 1 ) , G ( 2 ) ) are known, and we describe a three-step ALS-based algorithm and two closed-form algorithms to estimate the factors ( A ( 1 ) , U , A ( 2 ) ) of the NTD-4 model.

4.3.1. ALS Algorithm

As with the LS cost functions (61)-(65) for the NCPD-4 model, the unfoldings in Table 16 can be used to define three LS criteria to be minimized in order to estimate the matrix factors $(A^{(1)}, U, A^{(2)})$ by means of a three-step ALS-based algorithm. From the first, third, and fifth equations (in blue) in Table 16, we define the following LS cost functions:
$\min_{A^{(2)}} \left\| X_{I_3 I_1 I_2 \times I_4} - \left\{ I_{I_3} \otimes \left[ \left( A^{(1)}_{t-1} \otimes I_{I_2} \right) G^{(1)}_{R_1 I_2 \times R_2}\, U_{t-1} \right] \right\} G^{(2)}_{I_3 R_3 \times R_4} \left( A^{(2)} \right)^T \right\|_F^2 \;\Rightarrow\; A^{(2)}_t \qquad (79)$
$\min_{A^{(1)}} \left\| X_{I_2 I_3 I_4 \times I_1} - \left\{ I_{I_2} \otimes \left[ \left( I_{I_3} \otimes A^{(2)}_t \right) G^{(2)}_{I_3 R_4 \times R_3}\, U^T_{t-1} \right] \right\} G^{(1)}_{I_2 R_2 \times R_1} \left( A^{(1)} \right)^T \right\|_F^2 \;\Rightarrow\; A^{(1)}_t \qquad (80)$
$\min_{U} \left\| x_{I_3 I_4 I_1 I_2} - \left[ \left( I_{I_3} \otimes A^{(2)}_t \right) G^{(2)}_{I_3 R_4 \times R_3} \right] \otimes \left[ \left( A^{(1)}_t \otimes I_{I_2} \right) G^{(1)}_{R_1 I_2 \times R_2} \right] \mathrm{vec}(U) \right\|_F^2 \;\Rightarrow\; U_t. \qquad (81)$
Minimizing these LS criteria leads to the following equations for the ALS algorithm:
$\left( A^{(2)}_t \right)^T = \left( \left\{ I_{I_3} \otimes \left[ \left( A^{(1)}_{t-1} \otimes I_{I_2} \right) G^{(1)}_{R_1 I_2 \times R_2}\, U_{t-1} \right] \right\} G^{(2)}_{I_3 R_3 \times R_4} \right)^{\dagger} X_{I_3 I_1 I_2 \times I_4} \qquad (82)$
$\left( A^{(1)}_t \right)^T = \left( \left\{ I_{I_2} \otimes \left[ \left( I_{I_3} \otimes A^{(2)}_t \right) G^{(2)}_{I_3 R_4 \times R_3}\, U^T_{t-1} \right] \right\} G^{(1)}_{I_2 R_2 \times R_1} \right)^{\dagger} X_{I_2 I_3 I_4 \times I_1} \qquad (83)$
$\mathrm{vec}(U_t) = \left( \left[ \left( I_{I_3} \otimes A^{(2)}_t \right) G^{(2)}_{I_3 R_4 \times R_3} \right] \otimes \left[ \left( A^{(1)}_t \otimes I_{I_2} \right) G^{(1)}_{R_1 I_2 \times R_2} \right] \right)^{\dagger} x_{I_3 I_4 I_1 I_2}. \qquad (84)$
The uniqueness of the above LS solutions implies the following necessary identifiability conditions on the dimensions:
$I_3 I_1 I_2 \geq R_4; \quad I_2 I_3 I_4 \geq R_1; \quad I_3 I_4 I_1 I_2 \geq R_3 R_2. \qquad (85)$
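The alternating principle behind the updates (82)-(84) can be illustrated on a deliberately simplified bilinear model $X = A\,G\,B^T$ with a known core $G$; this is not the NTD-4 recursion itself, only a sketch of how each factor is updated by an LS solve while the other is kept fixed (all dimensions hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)
I, J, R1, R2 = 6, 7, 3, 2
A_true = rng.standard_normal((I, R1))
B_true = rng.standard_normal((J, R2))
G = rng.standard_normal((R1, R2))        # known core, as assumed in the text
X = A_true @ G @ B_true.T                # noiseless data

# ALS: alternate LS solves, each one optimal for the factor being updated.
A = rng.standard_normal((I, R1))         # random initialization
B = rng.standard_normal((J, R2))
for _ in range(10):
    A = X @ np.linalg.pinv(G @ B.T)      # min_A ||X - A G B^T||_F
    B = X.T @ np.linalg.pinv(A @ G).T    # min_B ||X^T - B (A G)^T||_F

err = np.linalg.norm(X - A @ G @ B.T) / np.linalg.norm(X)
```

Because each update is a global LS minimizer for one factor, the reconstruction error is monotonically non-increasing; in this noiseless toy example it drops to machine precision.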

4.3.2. Closed-Form Algorithms

Following the same approach as for the NCPD-4 model, we now present two closed-form solutions exploiting the matrix unfoldings (in green) in Table 15 and using the KronF method. From the coupled TD-(2,3) models (37) and (33) of X c 2 and X ( 1 ) , we deduce the following matrix unfoldings of the tensors X and X ( 1 ) :
$X_{I_4 I_1 I_2 \times I_3} = \left( A^{(2)} \otimes X^{(1)}_{I_1 I_2 \times R_3} \right) G^{(2)}_{R_4 R_3 \times I_3} \qquad (86)$
$X^{(1)}_{I_1 R_3 \times I_2} = \left( A^{(1)} \otimes U^T \right) G^{(1)}_{R_1 R_2 \times I_2}. \qquad (87)$
These equations are used to build the first closed-form algorithm. In the first step, the LS method is applied to estimate the Kronecker product (KronP) in Equation (86) as follows:
$A^{(2)} \otimes X^{(1)}_{I_1 I_2 \times R_3} = X_{I_4 I_1 I_2 \times I_3} \left[ G^{(2)}_{R_4 R_3 \times I_3} \right]^{\dagger}. \qquad (88)$
The factors ( A ( 2 ) , X I 1 I 2 × R 3 ( 1 ) ) are then estimated using the KronF method. In a second step, after a reshaping of the estimate X ^ I 1 I 2 × R 3 ( 1 ) , the LS method is used to estimate the KronP in Equation (87) as follows:
$A^{(1)} \otimes U^T = \hat{X}^{(1)}_{I_1 R_3 \times I_2} \left( G^{(1)}_{R_1 R_2 \times I_2} \right)^{\dagger}. \qquad (89)$
The factors ( A ( 1 ) , U ) are then deduced using the KronF method. This two-step Closed-Form Algorithm 1 is summarized in Table 19.
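The KronF step admits the same kind of numerical shortcut as KRF: after the Van Loan-Pitsianis rearrangement, a Kronecker product becomes a rank-1 matrix whose SVD yields both factors. A minimal numpy sketch in the noiseless case (the helper name and dimensions are ours):

```python
import numpy as np

def kronf(Z, m1, n1, m2, n2):
    # KronF: recover (A, B) from Z = kron(A, B), up to a scaling ambiguity,
    # via the Van Loan-Pitsianis rearrangement followed by a rank-1 SVD.
    # Row (i1, j1) of the rearranged matrix is vec(block (i1, j1) of Z),
    # so the rearrangement equals vec(A) vec(B)^T, a rank-1 matrix.
    R = Z.reshape(m1, m2, n1, n2).transpose(0, 2, 1, 3).reshape(m1 * n1, m2 * n2)
    U, s, Vh = np.linalg.svd(R)
    A = np.sqrt(s[0]) * U[:, 0].reshape(m1, n1)
    B = np.sqrt(s[0]) * Vh[0, :].reshape(m2, n2)
    return A, B
```

In the noisy case, the rank-1 truncation yields the nearest Kronecker product in the Frobenius norm, which is what the KronF method applies to the LS estimates (88) and (89).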
The second closed-form solution is based on the following matrix unfoldings deduced from the coupled TD-(2,3) models (36) and (34) of X c 1 and X ( 2 ) :
$X_{I_1 I_3 I_4 \times I_2} = \left( A^{(1)} \otimes X^{(2)}_{I_3 I_4 \times R_2} \right) G^{(1)}_{R_1 R_2 \times I_2} \qquad (90)$
$X^{(2)}_{I_4 R_2 \times I_3} = \left( A^{(2)} \otimes U \right) G^{(2)}_{R_4 R_3 \times I_3}, \qquad (91)$
which leads to Closed-Form Algorithm 2, summarized in Table 19.
For these closed-form solutions, necessary identifiability conditions, associated with the right inversion of G R 1 R 2 × I 2 ( 1 ) and G R 4 R 3 × I 3 ( 2 ) , are given by the following:
$I_2 \geq R_1 R_2 \quad \text{and} \quad I_3 \geq R_3 R_4. \qquad (92)$
These conditions are to be compared with the ALS ones (85).
Remark 6. 
As for the NCPD-4 model, the closed-form algorithms in Table 19 involve the drawback of error propagation due to the use, in the second step, of the estimates X ^ I 1 I 2 × R 3 ( 1 ) and X ^ I 3 I 4 × R 2 ( 2 ) obtained in the first step. An improved solution consists of combining both closed-form algorithms to estimate the factors ( A ( 1 ) , A ( 2 ) ) in parallel. Then, the factor U is estimated using one of the following formulae deduced from the expressions of X I 1 I 2 × R 3 ( 1 ) and X I 3 I 4 × R 2 ( 2 ) given in Table 15:
$\hat{U} = \left[ \left( \hat{A}^{(1)} \otimes I_{I_2} \right) G^{(1)}_{R_1 I_2 \times R_2} \right]^{\dagger} \hat{X}^{(1)}_{I_1 I_2 \times R_3} \quad \text{or} \quad \hat{U}^T = \left[ \left( I_{I_3} \otimes \hat{A}^{(2)} \right) G^{(2)}_{I_3 R_4 \times R_3} \right]^{\dagger} \hat{X}^{(2)}_{I_3 I_4 \times R_2}, \qquad (93)$
where ( A ^ ( 1 ) , A ^ ( 2 ) , X ^ I 1 I 2 × R 3 ( 1 ) , X ^ I 3 I 4 × R 2 ( 2 ) ) are estimates obtained in the first step of the closed-form algorithms. Identifiability conditions for these combined closed-form algorithms are given by (92) with additional conditions for the pseudo-inverses to be calculated in (93):
$I_1 I_2 \geq R_2 \quad \text{and} \quad I_3 I_4 \geq R_3. \qquad (94)$
Note that these conditions are always satisfied when conditions (92) are.

4.4. NTD-6 Model

The closed-form algorithms for the NTD-4 model are generalized to the NTD-6 model using the correspondences $(I_2, I_3, I_4) \leftrightarrow (I_2 I_3, I_4 I_5, I_6)$, as summarized in Table 20.
Necessary identifiability conditions for NTD-6 are associated with the uniqueness of the right inverses of G R 1 R 2 × I 2 I 3 ( 1 ) and G R 4 R 3 × I 4 I 5 ( 2 ) , which gives the following:
$I_2 I_3 \geq R_1 R_2 \quad \text{and} \quad I_4 I_5 \geq R_3 R_4. \qquad (95)$

4.5. NGTD-7 Model

Assuming the core tensors $(\mathcal{G}^{(1)}, \mathcal{G}^{(2)})$ are known, we now present two closed-form algorithms to estimate the parameters $(A^{(1)}, U, A^{(2)})$ of the NGTD-7 model introduced in Table 17. The first one is based on the coupled matrix unfoldings (52) and (49) of $\mathcal{X}$ and $\mathcal{X}^{(1)}$. Equation (52) is used to determine the LS estimate of the KronP, from which the factors $(A^{(2)}, \mathrm{bdiag}_{i_2}(X^{(1)}_{I_1 I_3 I_4 \times R_3}(i_2)))$ are determined by applying the KronF method. Then, after a reshaping of $\mathrm{bdiag}_{i_2}(\hat{X}^{(1)}_{I_1 I_3 I_4 \times R_3}(i_2))$, the LS estimate of the KronP in (49) is calculated, and the factors $(\mathrm{bdiag}_{i_2}(A^{(1)}_{I_1 \times R_1}(i_2)), \mathrm{bdiag}_{i_2}(U_{R_3 \times R_2}(i_2)))$ are determined via the KronF method.
The resulting closed-form algorithm for NGTD-7 is summarized in Table 21, which also contains the second closed-form method based on the coupled matrix unfoldings (51) and (50), and derived following the same approach. To simplify the writing of equations in Table 21, the term “ ( i 2 ) ” is dropped from the block-diagonal matrices bdiag i 2 ( . ) .
For both closed-form algorithms, necessary identifiability conditions are associated with the uniqueness of the right inverses of bdiag ( G R 1 R 2 × I 3 I 4 ( 1 ) ) and bdiag ( G R 4 R 3 × I 5 I 6 ( 2 ) ) , which implies the following:
$I_3 I_4 \geq R_1 R_2 \quad \text{and} \quad I_5 I_6 \geq R_3 R_4. \qquad (96)$
Remark 7. 
The closed-form algorithms for the NGTD-5 model can be deduced from the ones for the NGTD-7 model using the following correspondences: $(I_3 I_4, I_5 I_6, I_7) \leftrightarrow (I_3, I_4, I_5)$. The necessary identifiability conditions for NGTD-5, associated with the uniqueness of the right inverses of $\mathrm{bdiag}(G^{(1)}_{R_1 R_2 \times I_3})$ and $\mathrm{bdiag}(G^{(2)}_{R_4 R_3 \times I_4})$, are then given by the following:
$I_3 \geq R_1 R_2 \quad \text{and} \quad I_4 \geq R_3 R_4. \qquad (97)$

5. Overview of Cooperative and Two-Hop Relay Systems

In this section, we first present an overview of tensor-based MIMO cooperative systems from a semi-blind receiver perspective. Then, we provide a detailed presentation of the main two-hop relay systems.

5.1. Overview of Cooperative Systems

Table 22 completes Table 9 of the companion paper [45]. This table mentions the cases of OFDM/mmW communications, the type of cooperation (relay/IRS/UAV), the coding, and the tensor model of signals received at destination. It also includes the receiver algorithms used for each cooperative system.
Hereafter, we highlight some important characteristics of cooperative systems to motivate the choice of two-hop relay systems taken into account in our comparative study:
  • Cooperation scheme: Most cooperative systems use relays for the exchange of information between source and destination nodes. Relay stations are equipped with hardware and signal processing capabilities that allow signal decoding/coding steps, depending on the relay protocol. Recent works have addressed cooperation schemes using intelligent reflecting surfaces (IRSs), which consist of a large number of passive reflecting elements with low energy consumption and limited capacity to process the received signals. For a comprehensive presentation of IRS-assisted MIMO communication systems and, more generally, of various applications of IRS-assisted wireless networks, the reader is referred to the following review papers: [75,76,77,78,79,80].
    Note that, unlike the semi-blind receivers proposed in this paper, which allow an estimation of the individual channels in two-hop relay systems with the use of very few pilot symbols, most existing works on IRS-assisted wireless communications have presented supervised solutions using a pilot sequence to estimate the individual transmitter-to-IRS and IRS-to-receiver channels or the cascaded transmitter-IRS-receiver channel, viewed as a single channel [65,68,69,81,82,83,84,85]. The use of pilot sequences results in a reduction in transmission rates.
    Channel estimation for IRS-assisted MIMO communication systems is a very challenging task due to the following: (i) the large number of passive IRS elements and, therefore, of channel coefficients to be estimated; and (ii) the lack of processing capacity at the IRS.
    Many methods have been proposed in the literature to solve the channel estimation problem, depending on the channel model, system configuration (SISO/MISO/MIMO, single-/multi-user, single-/multi-IRS, or narrowband/broadband communication), type of receiver (supervised using pilot sequences versus semi-blind), and algorithm (ALS, closed-form, compressed sensing, or deep learning) used for individual or cascaded channel estimation. The reader is invited to consult the survey paper [86] for a more detailed presentation of IRS channel estimation methods.
    Another type of cooperative system involves UAV-assisted communications [70,87]. Consult [88] for a survey on civil UAV applications. In the context of 6G wireless networks, from the perspective of connecting everyone and everything, integrated satellite–terrestrial networks [89], also known as integrated satellite terrestrial/aerial networks (IST/ANs) [90], have been the subject of recent studies. Other recent works consider IRS-UAV-assisted wireless communication networks for assisting the communication between a base station and multiple users [91,92] or an Internet of Things (IoT) terminal [71].
    In the future highly digitized world of connected objects, 6G wireless networks will integrate different functionalities, including sensing, communication, computing, localization, navigation, signal and image processing, and object recognition, combined with artificial intelligence (AI) technology. This is the case for recent integrated (radar) sensing and communication (ISAC) systems [93,94,95], also known as joint communication and radar/radio sensing (JCAS) systems [96], which aim to improve spectral efficiency while minimizing hardware cost and power consumption.
  • Relaying protocols: The two most common relaying protocols are DF and AF, depending on whether the relay decodes the received signals. Although the DF protocol generally offers better performance, most of the relay systems presented in Table 22 use the AF protocol. In [55], the authors compare the system performance obtained with both protocols to jointly and semi-blindly estimate the transmitted symbols and individual channels in a one-way two-hop relay system.
  • Coding schemes: Various coding schemes are used in the cooperative systems reported in Table 22. However, most of these works propose cooperative systems employing KRST and TST codings. In the present paper, we consider two coding families: tensor-based (e.g., TST and TSTF, and simplified versions denoted as STST and STSTF) and matrix-based (e.g., SKRST and DKRSTF) codings employed at the source and the relay. Two combined codings, denoted as SKRST-MSMKR and STST-MSMKron, are also considered.
  • Communication channels: Two types of channels are most often considered, namely flat-fading and frequency-selective fading channels. In the latter case, the channels, which depend on frequency, are represented by means of third-order tensors, implying more channel coefficients to be estimated.
  • Receiver algorithms: Parameter estimation algorithms can be classified into two main categories: iterative and closed-form algorithms. The works cited in Table 22 mainly use ALS-based algorithms and closed-form solutions based on KRF and KronF methods, described respectively in Appendix B and Appendix C. These algorithms will be applied in Section 6 to devise semi-blind receivers for the joint estimation of symbol matrices and channel coefficients for all considered relay systems.

5.2. Overview of Two-Hop Relay Systems

Relay systems can be classified according to the following characteristics:
  • One-way/two-way;
  • AF/DF relaying protocol;
  • Two-hop/multihop;
  • Half-duplex/full-duplex (the relay can then transmit and receive simultaneously in both directions, during the same time slot).
In Section 5.3, Section 5.4 and Section 5.5, we provide an overview of one-way half-duplex two-hop relay systems, applying several coding schemes that take into account different types of diversity, which results in different tensor models for the received signals. The codings considered are the following: tensor space-time-frequency (TSTF), tensor space-time (TST), simplified TSTF (STSTF), simplified TST (STST), simplified Khatri–Rao space-time (SKRST), and double Khatri–Rao space-time-frequency (DKRSTF), all using the AF protocol.
Additionally, to provide a comparison with the DF protocol, we consider STST and SKRST combined with MSMKron and MSMKR codings, as proposed in [97,98] in the context of point-to-point systems.
The relay systems considered can be classified according to different choices concerning the coding scheme, relaying protocol, tensor model, and receiver algorithm. Figure 13 illustrates this classification according to the coding structure, mentioning the order of the tensor of encoded signals to be transmitted and specifying the tensor model for the signals received at the destination node. It is noteworthy that certain codings are particular cases of more generic ones, resulting from simple restrictions in terms of diversities taken into account, which induces simplified tensor models.
Let us introduce the general framework of a one-way, two-hop relay system, as shown in Figure 14, which is composed of source (S), relay (R), and destination (D) nodes, assumed to be equipped with multiple antennas: M S transmit and M D receive antennas at the source and destination, respectively, while the relay has M R receive and M T transmit antennas. Note that, in this paper, the direct link between the source and the destination is assumed to be unavailable. Table 23 summarizes the definitions of the design parameters and the matrices and tensors used to describe all the relay systems.
In the following, we consider three classes of coding: tensor-based codings in Section 5.3, matrix-based codings in Section 5.4, and tensor- and matrix-based codings combined with MSMKron and MSMKR codings, respectively, in Section 5.5. With the first two classes of coding, the AF relaying protocol will be used, while the DF protocol will be considered in the case of combined codings.

5.3. AF Two-Hop Relay Systems Using Tensor-Based Codings

High-order tensors have been used to define tensor coding schemes, as proposed for the first time with the TST coding for CDMA systems [37] and the TSTF coding for OFDM systems [38], in the context of point-to-point wireless communications, in 2012 and 2014, respectively. Let us start our survey by presenting the most general two-hop system using fifth-order TSTF codings at both the source and relay, from which simpler systems will be straightforwardly derived. To alleviate the presentation, we consider the noiseless case. The noise will be introduced in the experimental study, in Section 7.

5.3.1. TSTF Coding

Fifth-order TSTF coding tensors $\mathcal{C}^{(S)} \in \mathbb{C}^{M_S \times F \times P_S \times J_S \times R}$ and $\mathcal{C}^{(R)} \in \mathbb{C}^{M_T \times F \times P_R \times J_R \times M_R}$ are employed at the source and relay, respectively, with the AF protocol at the relay. The channels between the source and the relay (SR) and between the relay and the destination (RD) are assumed to be frequency-selective fading and quasi-static, i.e., time-invariant during the transmission, composed of $P_S$ and $P_R$ time slots at the source and relay, respectively. They are represented by the third-order tensors $\mathcal{H}^{(SR)} \in \mathbb{C}^{M_R \times F \times M_S}$ and $\mathcal{H}^{(RD)} \in \mathbb{C}^{M_D \times F \times M_R}$.
Let us consider the first hop, detailing the coding at the source and the transmission from the source to the relay. The symbol matrix S C N × R to be transmitted contains R data streams, each one composed of N information symbols. At the source, the nth symbols s n , r of the R data streams are linearly combined and duplicated in the space–frequency–time–chip domains, owing to the coding tensor C ( S ) , via the following equation, which defines the fifth-order tensor U ( S ) C M S × F × P S × J S × N of coded signals:
$\mathcal{U}^{(S)} = \mathcal{C}^{(S)} \times_5 S \;\Leftrightarrow\; u^{(S)}_{m_S, f, p_S, j_S, n} = \sum_{r=1}^{R} c^{(S)}_{m_S, f, p_S, j_S, r}\, s_{n,r}. \qquad (98)$
This coding is associated with a transmission during P S time slots composed of N symbol periods each, a time spreading composed of J S chips, and a repetition over F subcarriers.
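The mode-5 product in (98) is a single tensor-matrix contraction and can be written as one einsum call; the sketch below uses arbitrary toy dimensions of our choosing:

```python
import numpy as np

rng = np.random.default_rng(2)
MS, F, PS, JS, R, N = 2, 3, 2, 2, 2, 4           # hypothetical dimensions
C_S = rng.standard_normal((MS, F, PS, JS, R))     # source coding tensor C^(S)
S = rng.standard_normal((N, R))                   # symbol matrix (N symbols, R streams)

# Mode-5 product U^(S) = C^(S) x_5 S: the last mode of C^(S) (stream index r)
# is contracted with the second mode of S, yielding the coded-signal tensor.
U = np.einsum('mfpjr,nr->mfpjn', C_S, S)
```

Each entry of `U` matches the scalar form of (98), i.e., a sum over the $R$ data streams.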
The coded signals are transmitted via the $M_S$ transmit antennas from the source to the relay through the transmission channel $\mathcal{H}^{(SR)}$. During the $j_S$th chip period of the $n$th symbol period of the $p_S$th time slot, and for the $f$th subcarrier, the $m_R$th relay antenna receives the signal given by the following:
$\mathcal{X}^{(R)} = \mathcal{U}^{(S)} \times_1^3 \mathcal{H}^{(SR)} = \mathcal{H}^{(SR)} \times_3^1 \mathcal{U}^{(S)} \;\Leftrightarrow\; x^{(R)}_{m_R, f, p_S, j_S, n} = \sum_{m_S=1}^{M_S} h^{(SR)}_{m_R, f, m_S}\, u^{(S)}_{m_S, f, p_S, j_S, n}. \qquad (99)$
Combining Equations (98) and (99) allows for rewriting the fifth-order tensor X ( R ) C M R × F × P S × J S × N of signals received at the relay as follows:
$\mathcal{X}^{(R)} = \mathcal{H}^{(SR)} \times_3^1 \mathcal{C}^{(S)} \times_5 S \;\Leftrightarrow\; x^{(R)}_{m_R, f, p_S, j_S, n} = \sum_{m_S=1}^{M_S} \sum_{r=1}^{R} h^{(SR)}_{m_R, f, m_S}\, c^{(S)}_{m_S, f, p_S, j_S, r}\, s_{n,r}, \qquad (100)$
or equivalently:
$\mathcal{X}^{(R)} = \mathcal{C}^{(S)} \times_1^3 \mathcal{H}^{(SR)} \times_5 S, \qquad (101)$
which is a GTD-(2,5) model whose core tensor is the source-coding tensor C ( S ) .
We now follow the same reasoning for the second hop. Due to the AF protocol, the signals received at the relay are re-encoded using the coding tensor C ( R ) C M T × F × P R × J R × M R to give the seventh-order tensor U ( R ) C M T × F × P R × J R × P S × J S × N of coded signals:
$\mathcal{U}^{(R)} = \mathcal{C}^{(R)} \times_5^1 \mathcal{X}^{(R)} \;\Leftrightarrow\; u^{(R)}_{m_T, f, p_R, j_R, p_S, j_S, n} = \sum_{m_R=1}^{M_R} c^{(R)}_{m_T, f, p_R, j_R, m_R}\, x^{(R)}_{m_R, f, p_S, j_S, n}. \qquad (102)$
These coded signals are transmitted via the relay to the destination through the channel H ( R D ) , giving the seventh-order tensor X ( D ) C M D × F × P R × J R × P S × J S × N of signals received at the destination:
$\mathcal{X}^{(D)} = \mathcal{H}^{(RD)} \times_3^1 \mathcal{U}^{(R)} \;\Leftrightarrow\; x^{(D)}_{m_D, f, p_R, j_R, p_S, j_S, n} = \sum_{m_T=1}^{M_T} h^{(RD)}_{m_D, f, m_T}\, u^{(R)}_{m_T, f, p_R, j_R, p_S, j_S, n}. \qquad (103)$
Replacing U ( R ) , X ( R ) and U ( S ) with their expressions (102), (99) and (98) gives the following:
$\mathcal{X}^{(D)} = \mathcal{H}^{(RD)} \times_3^1 \mathcal{C}^{(R)} \times_5^1 \mathcal{X}^{(R)} \qquad (104)$
$\qquad = \mathcal{H}^{(RD)} \times_3^1 \mathcal{C}^{(R)} \times_5^1 \mathcal{H}^{(SR)} \times_5^1 \mathcal{U}^{(S)} \qquad (105)$
$\qquad = \mathcal{H}^{(RD)} \times_3^1 \mathcal{C}^{(R)} \times_5^1 \mathcal{H}^{(SR)} \times_5^1 \mathcal{C}^{(S)} \times_7 S, \qquad (106)$
or in scalar form:
$x^{(D)}_{m_D, f, p_R, j_R, p_S, j_S, n} = \sum_{m_T=1}^{M_T} \sum_{m_R=1}^{M_R} \sum_{m_S=1}^{M_S} \sum_{r=1}^{R} h^{(RD)}_{m_D, f, m_T}\, c^{(R)}_{m_T, f, p_R, j_R, m_R}\, h^{(SR)}_{m_R, f, m_S}\, c^{(S)}_{m_S, f, p_S, j_S, r}\, s_{n,r}. \qquad (107)$
Note that, in Equation (106), the operation $\times_3^1$ and the first operation $\times_5^1$ correspond to contractions along the modes $m_T$ and $m_R$, respectively, while the second operation $\times_5^1$ and the mode-7 product correspond to contractions along the modes $m_S$ and $r$, respectively, as shown in Equation (107). From this equation, we conclude that the tensor $\mathcal{X}^{(D)}$ satisfies the Tucker train (TT) model $\langle \mathcal{H}^{(RD)}, \mathcal{C}^{(R)}, \mathcal{H}^{(SR)}, \mathcal{C}^{(S)}, S; M_T, M_R, M_S, R \rangle$, as represented by means of the block diagram in Figure 15.
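The chain of contractions (107) defining the TT structure can be verified numerically with a single einsum over the modes $m_T$, $m_R$, $m_S$, and $r$ (toy dimensions, noiseless case; all variable names are ours):

```python
import numpy as np

rng = np.random.default_rng(3)
MD, MT, MR, MS, R = 2, 2, 2, 2, 2                 # hypothetical dimensions
F, PR, JR, PS, JS, N = 2, 2, 2, 2, 2, 3
H_RD = rng.standard_normal((MD, F, MT))            # RD channel tensor
C_R = rng.standard_normal((MT, F, PR, JR, MR))     # relay coding tensor
H_SR = rng.standard_normal((MR, F, MS))            # SR channel tensor
C_S = rng.standard_normal((MS, F, PS, JS, R))      # source coding tensor
S = rng.standard_normal((N, R))                    # symbol matrix

# Eq. (107): contract m_T, m_R, m_S, and r; the subcarrier index f is shared
# by all factors, yielding the seventh-order received-signal tensor X^(D).
X_D = np.einsum('dft,tfqjm,mfs,sfpkr,nr->dfqjpkn',
                H_RD, C_R, H_SR, C_S, S)
```

Index letters here stand for $(d, f, q, j, p, k, n) = (m_D, f, p_R, j_R, p_S, j_S, n)$.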
Comparing Equation (106) with the equation of the NGTD-7 model presented in Table 17, we deduce the following correspondences:
$\left( \mathcal{H}^{(RD)}, \mathcal{C}^{(R)}, \mathcal{H}^{(SR)}, \mathcal{C}^{(S)}, S \right) \leftrightarrow \left( A^{(1)}, \mathcal{G}^{(1)}, U, \mathcal{G}^{(2)}, A^{(2)} \right) \qquad (108)$
$\left( M_D, F, P_R, J_R, P_S, J_S, N, M_T, M_R, M_S, R \right) \leftrightarrow \left( I_1, I_2, I_3, I_4, I_5, I_6, I_7, R_1, R_2, R_3, R_4 \right). \qquad (109)$
Now, let us define the effective channel tensor H ( S R D ) C M D × F × P R × J R × M S between the source and the destination, as follows:
$\mathcal{H}^{(SRD)} = \mathcal{H}^{(RD)} \times_3^1 \mathcal{C}^{(R)} \times_5^1 \mathcal{H}^{(SR)} = \mathcal{C}^{(R)} \times_1^3 \mathcal{H}^{(RD)} \times_5^1 \mathcal{H}^{(SR)}. \qquad (110)$
This tensor satisfies a GTD-(2,5) model whose core tensor is the relay-coding tensor C ( R ) . The expressions (105) and (106) of X ( D ) can then be rewritten in terms of the effective channel as follows:
$\mathcal{X}^{(D)} = \mathcal{H}^{(SRD)} \times_5^1 \mathcal{U}^{(S)} \qquad (111)$
$\qquad = \mathcal{H}^{(SRD)} \times_5^1 \mathcal{C}^{(S)} \times_7 S = \mathcal{C}^{(S)} \times_1^5 \mathcal{H}^{(SRD)} \times_7 S. \qquad (112)$
From Equation (111), we conclude that the TT structure of the tensor X ( D ) can be interpreted as the contraction of the tensors H ( S R D ) and U ( S ) along the mode m S , as illustrated in Figure 15.
Moreover, a comparison with the NGTD-7 model reveals that X ( D ) is the nesting of the tensor H ( S R D ) , corresponding to the component X ( 1 ) defined in (45), with the tensor X ( R ) defined in (101) and to be associated with the component X ( 2 ) in (46), in red and blue in Equations (106) and (107), respectively, sharing the tensor factor H ( S R ) (in green).
In Section 6, under the assumption that the coding tensors are known at the destination, the coupled GTD-(2,5) models (112) and (110) will be exploited to design a KronF-based semi-blind receiver composed of two stages: in the first stage, the symbol matrix and the effective channel tensor are jointly estimated by applying the KronF algorithm to the model (112); in the second stage, the individual channel tensors $(\mathcal{H}^{(RD)}, \mathcal{H}^{(SR)})$ are estimated by applying the KronF algorithm to the model (110) of the effective channel estimated in the first stage. See Table 28.
The TSTF system presented above is the most comprehensive relay system utilizing tensor codings that combine signal diversities across the space (antennas), frequency (subcarriers), time (time-spreading lengths and symbol periods), and code (chips) domains, corresponding to the indices $(m_D, f, p_R, j_R, p_S, j_S, n)$ of the received signals' tensor $\mathcal{X}^{(D)}$. This system extends the relay system proposed in [41], which does not take frequency and chip diversities into account, inducing third-order instead of fifth-order coding tensors, and considers flat-fading channels, leading to an NTD-4 model for the fourth-order tensor of signals received at the destination instead of an NGTD-7 model for the TSTF system. The coding in [41], denoted as STST, as well as the tensor-based codings TST and STSTF, can be derived from the TSTF coding by applying certain simplifications, as detailed hereafter.

5.3.2. TST Coding

With TST coding, we consider flat-fading channels $H^{(SR)} \in \mathbb{C}^{M_R \times M_S}$ and $H^{(RD)} \in \mathbb{C}^{M_D \times M_T}$, and we assume that both the source and relay transmit signals using only one subcarrier. This implies removing the frequency diversity from the coding tensors, resulting in the fourth-order tensors $\mathcal{C}^{(S)} \in \mathbb{C}^{M_S \times P_S \times J_S \times R}$ and $\mathcal{C}^{(R)} \in \mathbb{C}^{M_T \times P_R \times J_R \times M_R}$. The signals received at the destination then form a sixth-order tensor, $\mathcal{X}^{(D)} \in \mathbb{C}^{M_D \times P_R \times J_R \times P_S \times J_S \times N}$, directly deduced from Equation (107), which is simplified as follows:
$x^{(D)}_{m_D, p_R, j_R, p_S, j_S, n} = \sum_{m_T=1}^{M_T} \sum_{m_R=1}^{M_R} \sum_{m_S=1}^{M_S} \sum_{r=1}^{R} h^{(RD)}_{m_D, m_T}\, c^{(R)}_{m_T, p_R, j_R, m_R}\, h^{(SR)}_{m_R, m_S}\, c^{(S)}_{m_S, p_S, j_S, r}\, s_{n,r}. \qquad (113)$
The tensor X ( D ) satisfies a NTD-6 model, such as that introduced in Table 17, corresponding to the GTTD-(2,4,2,4,2) model H ( R D ) , C ( R ) , H ( S R ) , C ( S ) , S ; M T , M R , M S , R , which can be viewed as the nesting of the following two TD-(2,4) models deduced from Equations (110) and (101), sharing the common matrix factor H ( S R ) :
$\mathcal{H}^{(SRD)} = \mathcal{C}^{(R)} \times_1 H^{(RD)} \times_4 \left( H^{(SR)} \right)^T \in \mathbb{C}^{M_D \times P_R \times J_R \times M_S} \qquad (114)$
$\mathcal{X}^{(R)} = \mathcal{C}^{(S)} \times_1 H^{(SR)} \times_4 S \in \mathbb{C}^{M_R \times P_S \times J_S \times N}. \qquad (115)$
H ( S R D ) and X ( R ) are the tensors of the effective channel and of signals received at the relay, respectively. Figure 16 depicts a block diagram of the TST system, highlighting its nesting structure. Equations (114) and (115) are analogous to Equations (43) and (44) with the following correspondences:
$\left( H^{(RD)}, \mathcal{C}^{(R)}, H^{(SR)}, \mathcal{C}^{(S)}, S \right) \leftrightarrow \left( A^{(1)}, \mathcal{G}^{(1)}, U, \mathcal{G}^{(2)}, A^{(2)} \right) \qquad (116)$
$\left( M_D, P_R, J_R, P_S, J_S, N, M_T, M_R, M_S, R \right) \leftrightarrow \left( I_1, I_2, I_3, I_4, I_5, I_6, R_1, R_2, R_3, R_4 \right). \qquad (117)$
Moreover, Equation (112) becomes the following:
$\mathcal{X}^{(D)} = \mathcal{C}^{(S)} \times_1^4 \mathcal{H}^{(SRD)} \times_6 S. \qquad (118)$
In the next section, the coupled TD-(2,4) models (114) and (118) will be used to design a KronF-based semi-blind receiver. See Table 28.

5.3.3. STSTF and STST Codings

Simplified versions of the TSTF and TST codings, denoted as STSTF and STST, are now introduced. These coding schemes do not consider chip diversity for STSTF, and both chip and frequency diversities for STST, leading to fourth-order coding tensors C ( S ) C M S × F × P S × R , C ( R ) C M T × F × P R × M R , and third-order tensors C ( S ) C M S × P S × R , C ( R ) C M T × P R × M R , respectively. With these simplifications, we can directly infer the signals received at the destination from Equations (107) and (113):
$x^{(D)}_{m_D, f, p_R, p_S, n} = \sum_{m_T=1}^{M_T} \sum_{m_R=1}^{M_R} \sum_{m_S=1}^{M_S} \sum_{r=1}^{R} h^{(RD)}_{m_D, f, m_T}\, c^{(R)}_{m_T, f, p_R, m_R}\, h^{(SR)}_{m_R, f, m_S}\, c^{(S)}_{m_S, f, p_S, r}\, s_{n,r} \qquad (119)$
for the STSTF system, and
$x^{(D)}_{m_D, p_R, p_S, n} = \sum_{m_T=1}^{M_T} \sum_{m_R=1}^{M_R} \sum_{m_S=1}^{M_S} \sum_{r=1}^{R} h^{(RD)}_{m_D, m_T}\, c^{(R)}_{m_T, p_R, m_R}\, h^{(SR)}_{m_R, m_S}\, c^{(S)}_{m_S, p_S, r}\, s_{n,r} \qquad (120)$
for the STST one. The received signals in (119) and (120) define fifth- and fourth-order tensors that satisfy, respectively, the NGTD-5 and NTD-4 models, such as those introduced in Section 3.3.3 and Section 3.3.2, respectively.
Remark 8. 
Note that, for STST, the tensors H ( S R D ) and X ( R ) are modeled by means of the following TD-(2,3) models, directly deduced from (114) and (115):
$\mathcal{H}^{(SRD)} = \mathcal{C}^{(R)} \times_1 H^{(RD)} \times_3 \left( H^{(SR)} \right)^T \in \mathbb{C}^{M_D \times P_R \times M_S} \qquad (121)$
$\mathcal{X}^{(R)} = \mathcal{C}^{(S)} \times_1 H^{(SR)} \times_3 S \in \mathbb{C}^{M_R \times P_S \times N}. \qquad (122)$
In summary, the signals received at the destination for a TSTF system form a seventh-order tensor $\mathcal{X}^{(D)} \in \mathbb{C}^{M_D \times F \times P_R \times J_R \times P_S \times J_S \times N}$, satisfying an NGTD-7 model, i.e., a generalized Tucker train decomposition, GTTD-(3,5,3,5,2). The TST, STSTF, and STST systems are particular cases of TSTF, with certain signal diversities omitted (frequency for TST, chip for STSTF, and both for STST). These systems satisfy the Tucker train models previously defined as GTTD-(2,4,2,4,2), GTTD-(3,4,3,4,2), and TTD-(2,3,2,3,2), corresponding to the NTD-6, NGTD-5, and NTD-4 models, respectively. Note that the STST system corresponds to the relay system proposed in [41], while the other three are new.

5.4. AF Two-Hop Relay Systems Using Matrix-Based Codings

Coding schemes based on Khatri–Rao products provide a less complex coding structure than tensor codings. Following the same approach as in Section 5.3.1, let us start by describing the general DKRSTF coding based on a double Khatri–Rao product to introduce signal diversities in the space, time, and frequency domains.

5.4.1. DKRSTF Coding

This coding was introduced in [39] as an OFDM extension of the Khatri–Rao ST coding (KRST) defined in [99], in the context of a point-to-point communication system. Here, we propose using DKRSTF in a two-hop relay system. As introduced in Table 23, the coding matrices C ( S ) C P S × M S and C ( R ) C P R × M R spread the transmitted symbols across P S and P R time blocks, while the matrices A ( S ) C F × R , W ( S ) C M S × R and W ( R ) C M T × M R are employed to add frequency and space diversities. Table 24 summarizes the equations of the two-hop system using DKRSTF coding.
The first Khatri–Rao product provides space-frequency pre-coded signals at the source, forming the third-order tensor $\mathcal{V}^{(S)} \in \mathbb{C}^{M_S \times F \times N}$, such that:
$V^{(S)}_{FN \times M_S} = \left( A^{(S)} \diamond S \right) \left( W^{(S)} \right)^T \;\Leftrightarrow\; \mathcal{V}^{(S)} = \mathcal{I}_R \times_1 W^{(S)} \times_2 A^{(S)} \times_3 S. \qquad (123)$
This tensor V ( S ) satisfies the CPD model, W ( S ) , A ( S ) , S ; R , which can be written in scalar notation as follows:
$v^{(S)}_{m_S, f, n} = \sum_{r=1}^{R} w^{(S)}_{m_S, r}\, a^{(S)}_{f, r}\, s_{n, r}. \qquad (124)$
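The pre-coding (123)-(124) can be checked numerically: the Khatri-Rao unfolding and the CPD-3 form are two layouts of the same third-order tensor. A short numpy sketch with arbitrary toy dimensions:

```python
import numpy as np

rng = np.random.default_rng(4)
MS, F, N, R = 3, 4, 5, 2                 # hypothetical dimensions
W_S = rng.standard_normal((MS, R))        # space coding matrix W^(S)
A_S = rng.standard_normal((F, R))         # frequency coding matrix A^(S)
S = rng.standard_normal((N, R))           # symbol matrix

# Khatri-Rao product A^(S) ⋄ S (shape F*N x R), then the unfolding (123):
# V_{FN x MS} = (A^(S) ⋄ S) (W^(S))^T.
KRP = (A_S[:, None, :] * S[None, :, :]).reshape(F * N, R)
V_unf = KRP @ W_S.T

# The same signals in CPD-3 form (124): v_{mS,f,n} = sum_r w_{mS,r} a_{f,r} s_{n,r}.
V = np.einsum('mr,fr,nr->mfn', W_S, A_S, S)
```

Reshaping `V_unf` back to a third-order array recovers `V` exactly, confirming that (123) is just a matrix unfolding of the CPD-3 model.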
Then, the pre-coded signals are spread over $P_S$ blocks using a second Khatri–Rao ST coding matrix $C^{(S)}$ to give the fourth-order tensor $\mathcal{U}^{(S)} \in \mathbb{C}^{M_S \times P_S \times F \times N}$ of space-time-frequency encoded signals, such that:
$U^{(S)}_{P_S F N \times M_S} = C^{(S)} \diamond V^{(S)}_{FN \times M_S} \;\Leftrightarrow\; \mathcal{U}^{(S)}_c = \mathcal{I}_{M_S} \times_1 I_{M_S} \times_2 C^{(S)} \times_3 V^{(S)}_{FN \times M_S}, \qquad (125)$
where U c ( S ) C M S × P S × F N denotes a contracted form of U ( S ) , resulting from a combination of the modes ( f , n ) . In scalar form, we have:
$u^{(S)}_{m_S, p_S, f, n} = c^{(S)}_{p_S, m_S}\, v^{(S)}_{m_S, f, n}. \qquad (126)$
After space-time-frequency coding at the source and transmission through the channel H ( S R ) C M R × M S , the signals received at the relay form a fourth-order tensor X ( R ) C M R × P S × F × N given by:
$\mathcal{X}^{(R)} = \mathcal{U}^{(S)} \times_1 H^{(SR)} \;\Leftrightarrow\; x^{(R)}_{m_R, p_S, f, n} = \sum_{m_S=1}^{M_S} h^{(SR)}_{m_R, m_S}\, u^{(S)}_{m_S, p_S, f, n}. \qquad (127)$
Replacing U ( S ) with its expression (125) gives the following contracted form X c ( R ) C M R × P S × F N of X ( R ) :
$\mathcal{X}^{(R)}_c = \mathcal{U}^{(S)}_c \times_1 H^{(SR)} = \mathcal{I}_{M_S} \times_1 H^{(SR)} \times_2 C^{(S)} \times_3 V^{(S)}_{FN \times M_S}, \qquad (128)$
or, in scalar form,
$x^{(R)}_{m_R, p_S, f, n} = \sum_{m_S=1}^{M_S} h^{(SR)}_{m_R, m_S}\, c^{(S)}_{p_S, m_S}\, v^{(S)}_{m_S, f, n}. \qquad (129)$
Combining (129) with (124) gives:
$x^{(R)}_{m_R, p_S, f, n} = \sum_{m_S=1}^{M_S} \sum_{r=1}^{R} h^{(SR)}_{m_R, m_S}\, c^{(S)}_{p_S, m_S}\, w^{(S)}_{m_S, r}\, a^{(S)}_{f, r}\, s_{n, r}. \qquad (130)$
Equations (123) and (128) are to be compared to Equations (20) and (25) associated with an NCPD-4 model. From this comparison, we conclude that the tensor $\mathcal{X}^{(R)}$ satisfies the NCPD-4 model $\langle H^{(SR)}, C^{(S)}, W^{(S)}, A^{(S)}, S; M_S, R \rangle$ with the following correspondences:
$\left( H^{(SR)}, C^{(S)}, W^{(S)}, A^{(S)}, S \right) \leftrightarrow \left( A^{(1)}, B^{(1)}, G, A^{(2)}, B^{(2)} \right) \qquad (131)$
$\left( M_R, P_S, F, N, M_S, R \right) \leftrightarrow \left( I_1, I_2, I_3, I_4, R_1, R_2 \right). \qquad (132)$
Due to the AF protocol, the signals received at the relay are encoded using a Khatri–Rao space-time coding with matrices W ( R ) and C ( R ) to give the fifth-order tensor U ( R ) C M T × P R × P S × F × N of coded signals defined by the following equation:
$U^{(R)}_{P_R P_S F N \times M_T} = \left( C^{(R)} \diamond X^{(R)}_{P_S F N \times M_R} \right) \left( W^{(R)} \right)^T \;\Leftrightarrow\; \mathcal{U}^{(R)}_c = \mathcal{I}_{M_R} \times_1 W^{(R)} \times_2 C^{(R)} \times_3 X^{(R)}_{P_S F N \times M_R}, \qquad (133)$
where $\mathcal{U}^{(R)}_c \in \mathbb{C}^{M_T \times P_R \times P_S F N}$ denotes a contracted form of $\mathcal{U}^{(R)}$, resulting from a combination of the modes $(p_S, f, n)$. After transmission through the channel $H^{(RD)} \in \mathbb{C}^{M_D \times M_T}$, the signals received at the destination form a fifth-order tensor $\mathcal{X}^{(D)} \in \mathbb{C}^{M_D \times P_R \times P_S \times F \times N}$ whose contracted form is given by:
$\mathcal{X}^{(D)}_c = \mathcal{U}^{(R)}_c \times_1 H^{(RD)} \;\Leftrightarrow\; X^{(D)}_{M_D \times P_R P_S F N} = H^{(RD)}\, U^{(R)}_{M_T \times P_R P_S F N}. \qquad (134)$
Combining Equations (134) and (133) gives:
X c ( D ) = I M R × 1 ( H ( R D ) W ( R ) ) × 2 C ( R ) × 3 X P S F N × M R ( R ) .
When M R > M T , the term H ( R D ) W ( R ) ∈ C M D × M R can be interpreted as a virtual antenna array at the destination due to a virtual increase in the number of transmit antennas at the relay, thanks to the space coding matrix W ( R ) . Note that a necessary identifiability condition (184) with the KRF receiver for the DKRSTF system is M R ≥ M T .
Let us define the virtual channel as follows:
B ≜ H ( R D ) W ( R ) ⇔ b m D , m R = ∑ m T = 1 M T h m D , m T ( R D ) w m T , m R ( R ) .
Then, Equation (135) defines the CPD-3 model ⟦ B , C ( R ) , X P S F N × M R ( R ) ; M R ⟧ . From this equation, we conclude that the tensor X ( D ) satisfies an NCPD-5 model, corresponding to the cascade of the CPD-3 model (135) with the NCPD-4 model (130) of the tensor X ( R ) , which is itself the cascade of two CPD-3 models, as illustrated in the block diagram in Figure 17.
When the CPD-3 model (135) and the expression (129) of x m R , p S , f , n ( R ) are taken into account, the tensor of signals received at the destination can also be written in scalar form as follows:
x m D , p R , p S , f , n ( D ) = ∑ m R = 1 M R b m D , m R c p R , m R ( R ) x m R , p S , f , n ( R )
              = ∑ m R = 1 M R ∑ m S = 1 M S b m D , m R c p R , m R ( R ) h m R , m S ( S R ) c p S , m S ( S ) v m S , f , n ( S )
      = ∑ m S = 1 M S h m D , p R , m S ( S R D ) c p S , m S ( S ) v m S , f , n ( S ) ,
where H ( S R D ) C M D × P R × M S is the effective channel between the source and the destination, defined as follows:
h m D , p R , m S ( S R D ) = ∑ m R = 1 M R b m D , m R c p R , m R ( R ) h m R , m S ( S R )
and v m S , f , n ( S ) is defined in (124). In Section 6.4, the three CPD-3 models (123), (139), and (140), with the definition (136) of B , will be exploited to devise a KRF-based, closed-form receiver, allowing for the joint estimation of the three unknown matrices ( S , H ( S R ) , H ( R D ) ) .
Remark 9. 
From Equations (124), (136), and (138), the signals received at the destination can be written as follows:
x m D , p R , p S , f , n ( D ) = ∑ m T = 1 M T ∑ m R = 1 M R ∑ m S = 1 M S ∑ r = 1 R h m D , m T ( R D ) w m T , m R ( R ) c p R , m R ( R ) h m R , m S ( S R ) c p S , m S ( S ) w m S , r ( S ) a f , r ( S ) s n , r .
Comparing this equation with (119), we deduce that the DKRSTF system can be viewed as a particular case of STSTF when considering flat-fading channels, with the following choices for the coding tensors:
c m S , f , p S , r ( S ) = c p S , m S ( S ) w m S , r ( S ) a f , r ( S ) and c m T , f , p R , m R ( R ) = w m T , m R ( R ) c p R , m R ( R ) .
Note that, in the case of DKRSTF, the frequency diversity is omitted from the coding at the relay.

5.4.2. SKRST Coding

Now, we consider a simplified version of KRST coding, denoted as SKRST, introduced in [40,53] for two-hop relay systems. At the source, the coding consists of a simple Khatri–Rao product using the coding matrix C ( S ) C P S × M S , with S C N × M S , resulting in the following encoded signals tensor U ( S ) C M S × P S × N :
U ( S ) = I M S × 1 I M S × 2 C ( S ) × 3 S ⇔ U P S N × M S ( S ) = C ( S ) ⋄ S ,
or, in scalar form,
u m S , p S , n ( S ) = c p S , m S ( S ) s n , m S .
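The Khatri–Rao (column-wise Kronecker) structure of this coding is easy to reproduce numerically. A minimal NumPy sketch with illustrative dimensions (the helper name `khatri_rao` is ours, not from the paper):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product: column j is A[:, j] kron B[:, j]."""
    return np.einsum('ij,kj->ikj', A, B).reshape(A.shape[0] * B.shape[0], A.shape[1])

rng = np.random.default_rng(1)
P_S, N, M_S = 4, 5, 3
C_S = rng.standard_normal((P_S, M_S))  # coding matrix
S = rng.standard_normal((N, M_S))      # symbol matrix

U = khatri_rao(C_S, S)                 # unfolding U_{P_S N x M_S}
# scalar form: u[mS, pS, n] = c[pS, mS] * s[n, mS], with row index (pS, n)
U_tensor = U.reshape(P_S, N, M_S)
```

Each entry of the unfolding is indeed the product of one coding coefficient and one symbol, which is the "simple Khatri–Rao product" structure described above.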
Without the frequency-space coding matrices ( A ( S ) , W ( S ) , W ( R ) ) , Equations (130), (137) and (135) of the signals received at the relay and the destination are then simplified as follows:
x m R , p S , n ( R ) = ∑ m S = 1 M S h m R , m S ( S R ) c p S , m S ( S ) s n , m S ⇔ X ( R ) = I M S × 1 H ( S R ) × 2 C ( S ) × 3 S
x m D , p R , p S , n ( D ) = ∑ m R = 1 M R h m D , m R ( R D ) c p R , m R ( R ) x m R , p S , n ( R ) ⇔ X c ( D ) = I M R × 1 H ( R D ) × 2 C ( R ) × 3 X P S N × M R ( R ) ,
with X c ( D ) ∈ C M D × P R × P S N . Note that this simplification implies a constraint on the relay, such that the numbers of transmit and receive antennas must be equal ( M T = M R ). The coupled Equations (145) and (146), to be compared to Equations (20) and (25), define an NCPD-4 model for the fourth-order tensor X ( D ) ∈ C M D × P R × P S × N , which can be written in the following scalar form:
x m D , p R , p S , n ( D ) = ∑ m R = 1 M R ∑ m S = 1 M S h m D , m R ( R D ) c p R , m R ( R ) h m R , m S ( S R ) c p S , m S ( S ) s n , m S ,
with the following correspondences:
( H ( R D ) , C ( R ) , H ( S R ) , C ( S ) , S ) ↔ ( A ( 1 ) , B ( 1 ) , G , A ( 2 ) , B ( 2 ) )
( M D , P R , P S , N , M R , M S ) ↔ ( I 1 , I 2 , I 3 , I 4 , R 1 , R 2 ) .
Let us define the effective channel tensor H ( S R D ) C M D × P R × M S as follows:
H ( S R D ) = I M R × 1 H ( R D ) × 2 C ( R ) × 3 ( H ( S R ) ) T ,
which satisfies the CPD-3 model ⟦ H ( R D ) , C ( R ) , H ( S R ) T ; M R ⟧ and corresponds to Equation (19). Noting that the tensor X ( R ) of signals received at the relay, defined in (145), satisfies the CPD-3 model ⟦ H ( S R ) , C ( S ) , S ; M S ⟧ , sharing the common factor H ( S R ) with H ( S R D ) , we conclude that the NCPD-4 model of X ( D ) is associated with the nesting of the CPD-3 models of H ( S R D ) and X ( R ) , as illustrated in the block diagram in Figure 18. The correspondences (148) and (149) will be employed to design ALS- and KRF-based receivers for the SKRST system. See Table 28.

5.5. DF Two-Hop Relay Systems Using STST-MSMKron and SKRST-MSMKR Codings

In the preceding two sections, as well as in most of the systems presented in Table 22, the AF protocol was employed. A significant improvement in the symbol error rate (SER) can be obtained with the DF protocol at the cost of additional computational complexity at the relay. In [55], the authors propose two-hop relay systems using multiple Khatri–Rao product-based space-time (MKRST) and multiple Kronecker product-based space-time (MKronST) codings at the source and relay, with a comparison of AF, DF, and estimate-forward (EF) protocols.
In this section, we consider two-hop relay systems with a DF protocol, using STST and SKRST codings, respectively, combined with multiple symbol matrices Kronecker and Khatri–Rao products, which simplify the MKronST and MKRST codings by eliminating the pre-coding matrix. These combined codings, respectively denoted as STST-MSMKron and SKRST-MSMKR, were originally proposed in [97,98], in the context of point-to-point systems. Here, we extend their use to a cooperative scenario.
Let us consider Q symbol matrices S ( q ) , with q = 1 , … , Q , to be transmitted via the source. With MSMKron and MSMKR codings, each symbol s i , j ( q ) of a given symbol matrix S ( q ) is duplicated via Kronecker (Kron) and Khatri–Rao (KR) products with the other symbol matrices S ( q ′ ) , where q ′ ≠ q . These multiple Kron and KR products induce a mutual spreading of transmitted symbols, thus providing additional diversity. Below, we detail these two systems.

5.5.1. STST-MSMKron Coding

For the STST-MSMKron coding, the symbol matrices S ( q ) C N q × R q are encoded at the source using the ( Q + 2 ) -order ST coding tensor C ( S ) C M S × P S × R 1 × × R Q , leading to the following ( Q + 2 ) -order tensor U ( S ) C M S × P S × N 1 × × N Q of signals to be transmitted:
U ( S ) = C ( S ) × 1 I M S × 2 I P S × 3 S ( 1 ) × 4 ⋯ × Q + 2 S ( Q ) ,
which can be represented in a compact form as U c ( S ) = C ( S ) × 3 S ∈ C M S × P S × N , with S = ⊗ q = 1 Q S ( q ) ∈ C N × R , N = ∏ q = 1 Q N q and R = ∏ q = 1 Q R q . In a scalar format, the coded signals transmitted via the m S th antenna, during the p S th time slot, are given by:
u m S , p S , n 1 , … , n Q ( S ) = ∑ r 1 = 1 R 1 ⋯ ∑ r Q = 1 R Q c m S , p S , r 1 , … , r Q ( S ) ∏ q = 1 Q s n q , r q ( q ) .
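For Q = 2, this multiple-Kronecker spreading can be checked numerically: contracting the coding tensor with the two symbol matrices one mode at a time is equivalent to contracting its reshaped version with the Kronecker product S ( 1 ) ⊗ S ( 2 ) . A NumPy sketch with illustrative dimensions (names are ours):

```python
import numpy as np

rng = np.random.default_rng(2)
M_S, P_S = 2, 3
N1, R1, N2, R2 = 4, 2, 5, 3                    # Q = 2 symbol matrices

C_S = rng.standard_normal((M_S, P_S, R1, R2))  # (Q+2)-order coding tensor
S1 = rng.standard_normal((N1, R1))
S2 = rng.standard_normal((N2, R2))

# scalar form: u[mS, pS, n1, n2] = sum_{r1, r2} c[mS, pS, r1, r2] s1[n1, r1] s2[n2, r2]
U = np.einsum('mpab,na,kb->mpnk', C_S, S1, S2)

# compact form: contract the reshaped coding tensor with S1 kron S2
S_kron = np.kron(S1, S2)                       # (N1*N2, R1*R2)
U_c = np.einsum('mpr,nr->mpn', C_S.reshape(M_S, P_S, R1 * R2), S_kron)
```

The two computations agree, illustrating how the Kronecker product of the symbol matrices plays the role of a single, larger symbol matrix S.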
After transmission through the channel H ( S R ) C M R × M S , the signals received at the relay form the ( Q + 2 ) -order tensor X ( R ) C M R × P S × N 1 × × N Q given by:
X ( R ) = U ( S ) × 1 H ( S R ) = C ( S ) × 1 H ( S R ) × 2 I P S × 3 S ( 1 ) × 4 ⋯ × Q + 2 S ( Q ) .
Note that the mode-2 product with identity matrix I P S can be interpreted as repeating the transmission of the same symbols during P S time slots. The DF protocol, used at the relay, involves estimating the information symbols and then re-encoding the estimated symbols using the tensor C ( R ) C M S × P R × R 1 × × R Q before transmission to the destination. After transmission through the channel H ( R D ) C M D × M S of the signals coded at the relay, the tensor structure (153) is repeated at the destination for the tensor X ( D ) C M D × P R × N 1 × × N Q , given by:
X ( D ) = C ( R ) × 1 H ( R D ) × 3 S ^ ( 1 ) × 4 ⋯ × Q + 2 S ^ ( Q ) ,
where S ^ ( q ) denotes the estimate of S ( q ) obtained via the decoding process at the relay.
Due to the DF protocol, the equations describing the signals received at the relay and the destination are similar, and they are written in scalar form as follows:
x m R , p S , n 1 , . . . , n Q ( R ) = ∑ m S = 1 M S ∑ r 1 = 1 R 1 ⋯ ∑ r Q = 1 R Q h m R , m S ( S R ) c m S , p S , r 1 , . . . , r Q ( S ) ∏ q = 1 Q s n q , r q ( q )
x m D , p R , n 1 , . . . , n Q ( D ) = ∑ m S = 1 M S ∑ r 1 = 1 R 1 ⋯ ∑ r Q = 1 R Q h m D , m S ( R D ) c m S , p R , r 1 , . . . , r Q ( R ) ∏ q = 1 Q s ^ n q , r q ( q ) .

5.5.2. SKRST-MSMKR Coding

For the SKRST-MSMKR coding, the symbol matrices S ( q ) C N q × M S are encoded at the source using the coding matrix C ( S ) C P S × M S , giving a ( Q + 2 ) -order tensor U ( S ) C M S × P S × N 1 × × N Q for the encoded signals, defined by means of the following matrix unfolding:
U P S N 1 ⋯ N Q × M S ( S ) = C ( S ) ⋄ S ( 1 ) ⋄ ⋯ ⋄ S ( Q ) ⇔ U ( S ) = I M S × 1 I M S × 2 C ( S ) × 3 S ( 1 ) ⋯ × Q + 2 S ( Q ) ,
which can be compactly written as U P S N × M S ( S ) = C ( S ) ⋄ S , with S = ⋄ q = 1 Q S ( q ) ∈ C N × M S and N = ∏ q = 1 Q N q . The coded signals to be transmitted by the m S th antenna, during the p S th time slot, are given by:
u m S , p S , n 1 , . . . , n Q ( S ) = c p S , m S ( S ) ∏ q = 1 Q s n q , m S ( q ) .
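The chained Khatri–Rao structure can be verified in the same way as for the SKRST coding, now with Q = 2 symbol matrices sharing the same number of columns M S . A short NumPy sketch (illustrative sizes and names):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product."""
    return np.einsum('ij,kj->ikj', A, B).reshape(-1, A.shape[1])

rng = np.random.default_rng(3)
P_S, M_S, N1, N2 = 3, 2, 4, 5          # Q = 2 symbol matrices
C_S = rng.standard_normal((P_S, M_S))
S1 = rng.standard_normal((N1, M_S))
S2 = rng.standard_normal((N2, M_S))

# unfolding U_{P_S N1 N2 x M_S} = C_S kr S1 kr S2
U = khatri_rao(khatri_rao(C_S, S1), S2)
# scalar form: u[mS, pS, n1, n2] = c[pS, mS] * s1[n1, mS] * s2[n2, mS]
U_tensor = U.reshape(P_S, N1, N2, M_S)
```

Each column m S of the unfolding carries the products of one coding coefficient with one symbol from each matrix, which is the mutual symbol spreading described above.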
After transmission through the channel H ( S R ) , the signals received at the relay form a ( Q + 2 ) -order tensor X ( R ) C M R × P S × N 1 × × N Q given by:
X ( R ) = U ( S ) × 1 H ( S R ) ⇔ X M R × P S N 1 ⋯ N Q ( R ) = H ( S R ) U M S × P S N 1 ⋯ N Q ( S ) .
Replacing U M S × P S N 1 N Q ( S ) with its expression (157) gives:
X M R × P S N 1 ⋯ N Q ( R ) = H ( S R ) [ C ( S ) ⋄ S ( 1 ) ⋄ ⋯ ⋄ S ( Q ) ] T .
The tensor X ( R ) satisfies the CPD- ( Q + 2 ) model ⟦ H ( S R ) , C ( S ) , S ( 1 ) , . . . , S ( Q ) ; M S ⟧ . With the DF protocol, the symbol matrices are estimated at the relay and then re-encoded with the coding matrix C ( R ) ∈ C P R × M S before transmission to the destination. The tensor X ( D ) ∈ C M D × P R × N 1 × ⋯ × N Q of signals received at the destination satisfies the CPD- ( Q + 2 ) model ⟦ H ( R D ) , C ( R ) , S ^ ( 1 ) , . . . , S ^ ( Q ) ; M S ⟧ , given by:
X M D × P R N 1 ⋯ N Q ( D ) = H ( R D ) [ C ( R ) ⋄ S ^ ( 1 ) ⋄ ⋯ ⋄ S ^ ( Q ) ] T .
Note that this equation implies that M T = M S . The signals received at the relay and the destination are given in scalar format as follows:
x m R , p S , n 1 , . . . , n Q ( R ) = ∑ m S = 1 M S h m R , m S ( S R ) c p S , m S ( S ) ∏ q = 1 Q s n q , m S ( q )
x m D , p R , n 1 , . . . , n Q ( D ) = ∑ m S = 1 M S h m D , m S ( R D ) c p R , m S ( R ) ∏ q = 1 Q s ^ n q , m S ( q ) .
It is worth mentioning that STST and SKRST systems with a DF protocol can easily be deduced from the STST-MSMKron and SKRST-MSMKR systems by considering only one symbol matrix, i.e., Q = 1 . This configuration will be considered in the experimental study in Section 7.
Table 25 summarizes the tensors of signals encoded at the source and of signals received at the relay and the destination for each two-hop relay system considered in this overview, unifying their tensor representation and mentioning the tensor model of X ( D ) .
Remark 10. 
Analyzing the nested models in Table 25 leads to the following conclusions:
  • In the case of tensor-based codings, the nested models of signals received at the relay and the destination result from the nesting of two TD-(2,4) and two TD-(2,3) models for the TST and STST systems, respectively, and two GTD-(2,5) and two GTD-(2,4) models for the TSTF and STSTF systems, respectively, with coding tensors as core tensors.
  • In the case of matrix-based codings, the nested models are cascades of three and two CPD-3 models for the DKRSTF and SKRST systems, respectively, with a coding matrix as factor of each CPD-3 model.
  • In the case of combined codings, with the DF protocol, the signals received at relay and destination form ( Q + 2 ) -order tensor models which satisfy a CPD model with a coding matrix as a factor for the SKRST-MSMKR system, and a TD model with a coding tensor as a core tensor for the STST-MSMKron system.
Capitalizing on these tensor models and assuming the coding matrices/tensors are known at relay and destination allow closed-form, semi-blind receivers to be devised for all the considered relay systems, as shown in the next section. Moreover, knowledge of coding matrices/tensors guarantees the uniqueness of the tensor models. In Section 7, extensive Monte Carlo simulation results will be presented to compare the performance of the different relay systems and associated semi-blind receivers.

6. Semi-Blind Receivers

In the following, we assume that the coding matrices and tensors used at the source and relay are known at the destination. In Section 4, two families of algorithms were presented to estimate unknown parameters in various NCPD and NTD models, namely ALS and closed-form algorithms. In this section, these algorithms are used to design the following semi-blind receivers for relay systems modeled by means of nested tensor models:
-
KronF receiver for the TSTF system (Table 21, for a NGTD-7 model);
-
KronF receiver for the TST system (Table 20, for a NTD-6 model);
-
KronF receiver for the STSTF system (Remark 7, for a NGTD-5 model);
-
KronF receiver for the STST system (Table 19, for a NTD-4 model);
-
ALS receiver for the STST system (Equations (82)–(84), for a NTD-4 model);
-
KRF receiver for the SKRST system (Table 18, for a NCPD-4 model);
-
ALS receiver for the SKRST system (Equations (67), (68) and (70), for a NCPD-4 model).
In Table 26, we provide the correspondences between the tensors X ( D ) of signals received at the destination in systems TSTF/TST/STSTF/STST/SKRST with the generic nested tensor models described in Section 3.
Note that the DKRSTF, STST-MSMKron, and SKRST-MSMKR systems, which satisfy specific tensor models, will be considered separately in Section 6.4, Section 6.5 and Section 6.6, respectively. To facilitate the understanding of the receivers presented in a unified way, we first detail in Table 28 the development of the KronF receiver for the TSTF system, represented by an NGTD-7 model, which is the most general one. An analysis of the parametric complexity of the proposed receivers is presented in Appendix D.

6.1. KronF Receiver for the TSTF System

In Table 21, two closed-form algorithms derived from two different ways of combining the matrix unfoldings (49)–(52) of the NGTD-7 model are proposed. Each algorithm is composed of two steps conditioning the order of estimation of the unknown factors ( A ( 1 ) , U , A ( 2 ) ) . In the context of a relay system, the closed-form algorithm 1 allows for first estimating the matrix factor A ( 2 ) , corresponding to the symbol matrix S , while algorithm 2 first estimates the tensor factor A ( 1 ) which corresponds to the channel tensor H ( R D ) . Note that these two closed-form algorithms can be used in parallel in order to eliminate propagation errors in the estimation of these two factors. In the next subsection, such a strategy will be exploited to devise closed-form receivers for the STST and SKRST systems. Now, let us detail the two-step KronF receiver for the TSTF system, derived from the coupled GTD-(2,5) models (112) and (110).
Using the correspondences in Table 26, Equation (52) gives the following block-diagonal matrix unfolding of X ( D ) deduced from the GTD-(2,5) model (112):
bdiag f X N M D P R J R × P S J S ( D ) ( f ) = [ S ⊗ bdiag f H M D P R J R × M S ( S R D ) ( f ) ] bdiag f C R M S × P S J S ( S ) ( f ) .
The LS estimate of the KronP in this equation is given by:
S ⊗ bdiag f H M D P R J R × M S ( S R D ) ( f ) = bdiag f X N M D P R J R × P S J S ( D ) ( f ) [ bdiag f C R M S × P S J S ( S ) ( f ) ] † .
The factors S and H ( S R D ) are then estimated by applying the KronF method.
After reshaping the estimate bdiag f H ^ M D P R J R × M S ( S R D ) ( f ) into bdiag f H ^ M D M S × P R J R ( S R D ) ( f ) , in a second step, we use the following unfolding deduced from the GTD-(2,5) model (110) of H ( S R D ) to estimate the individual channels:
bdiag f H M D M S × P R J R ( S R D ) ( f ) = bdiag f [ H M D × M T ( R D ) ( f ) ⊗ H M S × M R ( S R ) ( f ) ] bdiag f C M T M R × P R J R ( R ) ( f ) ,
which corresponds to the unfolding (49), leading to the following LS estimate of the KronP:
bdiag f [ H M D × M T ( R D ) ( f ) ⊗ H M S × M R ( S R ) ( f ) ] = bdiag f H ^ M D M S × P R J R ( S R D ) ( f ) [ bdiag f C M T M R × P R J R ( R ) ( f ) ] † .
The KronF method is applied once again to obtain estimates of the individual channel tensors H ( S R ) and H ( R D ) .
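The KronF step used in both stages — recovering the two factors of a Kronecker product from its LS estimate — can be implemented as a rank-1 SVD of a rearranged matrix (the nearest Kronecker product approach of Van Loan and Pitsianis). A minimal NumPy sketch with illustrative shapes; this is one possible implementation, not the authors' code:

```python
import numpy as np

def kronf(X, shape_A, shape_B):
    """Estimate A and B (up to a scalar) from X = A kron B via the rank-1
    SVD of a rearranged matrix (nearest Kronecker product)."""
    (m, n), (p, q) = shape_A, shape_B
    # block (i, j) of X equals A[i, j] * B; stacking vec(block) row-wise
    # turns X into the rank-1 matrix vec(A) vec(B)^T
    R = X.reshape(m, p, n, q).transpose(0, 2, 1, 3).reshape(m * n, p * q)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    A_hat = (np.sqrt(s[0]) * U[:, 0]).reshape(m, n)
    B_hat = (np.sqrt(s[0]) * Vt[0, :]).reshape(p, q)
    return A_hat, B_hat

rng = np.random.default_rng(4)
A = rng.standard_normal((2, 3))   # e.g., a symbol matrix
B = rng.standard_normal((4, 2))   # e.g., an effective channel unfolding
A_hat, B_hat = kronf(np.kron(A, B), A.shape, B.shape)
```

The factors are recovered up to a reciprocal scalar pair ( λ , 1 / λ ), which is exactly the scaling ambiguity removed in Section 6.8.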
KronF receivers for other tensor-based coding systems can easily be derived using the same approach, with simplified unfoldings. For instance, in the case of the TST system, where frequency diversity is not considered, the simplification involves fixing F = 1 . Similarly, for the STSTF system which does not consider chip diversity, the simplification results from choosing J S = J R = 1 .
For each two-step KronF receiver, the procedure consists of (i) jointly estimating ( S , H ( S R D ) ) using the expression of the received signal tensor X ( D ) in terms of the effective channel and (ii) estimating the individual channels using the estimated effective channel. Table 28 presents the unfoldings used for each system to derive KronF receivers.
Remark 11. 
The matrix Moore–Penrose pseudo-inverses to be calculated with the proposed closed-form receivers can be simplified when choosing orthonormal matrices. For instance, let us consider the case of a column-orthonormal matrix, A C M × N , i.e., A H A = I N , which implies r ( A ) = N and M N . Its left-inverse is then given by A = A H . Similarly, if A is row-orthonormal, i.e., A A H = I M , implying r ( A ) = M and N M , its right-inverse is also given by A = A H . In the case of the pseudo-inversion of A T , we have ( A T ) = A * if A T is column- or row-orthonormal.
In Table 28, this property is used to simplify the calculation of right-inverses of matrix unfoldings of coding tensors, chosen row-orthonormal by construction.
The same simplification is valid for the left-inverse of the KRP and KronP of two column-orthonormal matrices. Indeed, for A ∈ C M × N , B ∈ C P × N and C ∈ C P × Q , we have:
( A ⋄ B ) H ( A ⋄ B ) = ( A H A ) ∗ ( B H B ) = I N ∗ I N = I N
( A ⊗ C ) H ( A ⊗ C ) = ( A H A ) ⊗ ( C H C ) = I N ⊗ I Q = I N Q ,
which implies ( A ⋄ B ) † = ( A ⋄ B ) H and ( A ⊗ C ) † = ( A ⊗ C ) H . This property is used to simplify the calculation of the left-inverse of the KRP ( A ( S ) ⋄ W ( S ) ) of coding matrices in the KRF receiver of the DKRSTF system.
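These identities are easy to verify numerically; in the sketch below (illustrative sizes, real-valued factors so that the conjugate transpose reduces to the transpose), column-orthonormal matrices are generated with a QR decomposition:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product."""
    return np.einsum('ij,kj->ikj', A, B).reshape(-1, A.shape[1])

rng = np.random.default_rng(5)
A, _ = np.linalg.qr(rng.standard_normal((6, 3)))  # A^H A = I_3
B, _ = np.linalg.qr(rng.standard_normal((5, 3)))  # B^H B = I_3
C, _ = np.linalg.qr(rng.standard_normal((5, 2)))  # C^H C = I_2

krp = khatri_rao(A, B)   # its left-inverse is simply its conjugate transpose
kronp = np.kron(A, C)    # likewise for the Kronecker product
```

Choosing orthonormal coding matrices therefore replaces each pseudo-inversion in the closed-form receivers with a plain (conjugate) transposition.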

6.2. KronF Receiver for the STST System and KRF Receiver for the SKRST System

Closed-form receivers for the STST and SKRST systems are derived from Table 19 and Table 18, respectively, by exploiting the correspondences in Table 26, with ( H ( S R D ) , X ( R ) ) playing the role of ( X ( 1 ) , X ( 2 ) ) . This gives the two closed-form receivers summarized in Table 27.
Closed-form receiver 1 will be taken as the standard algorithm, allowing for the estimation of the symbol matrix in the first step and thus avoiding error propagation in the information symbol recovery. However, since H ^ ( S R ) and H ^ ( R D ) are calculated in the second step from the estimate H ^ M D M S × P R ( S R D ) , error propagation is introduced into the channel estimates.
To eliminate this error propagation in the estimation of H ( R D ) , an alternative solution consists of using both closed-form algorithms in parallel.
The channel H ( S R ) can then be estimated either from the estimate H ^ M D P R × M S ( S R D ) obtained with Algorithm 1, or the estimate X ^ P S N × M R ( R ) obtained with Algorithm 2. For SKRST, using mode-3 and mode-1 unfoldings of the CPD-3 models (150) of H ( S R D ) and (145) of X ( R ) , respectively, we derive the following two solutions to estimate H ( S R ) :
H ^ ( S R ) = [ H ^ ( R D ) ⋄ C ( R ) ] † H ^ M D P R × M S ( S R D ) and H ^ ( S R ) T = [ C ( S ) ⋄ S ^ ] † X ^ P S N × M R ( R ) .
Similarly, for STST, from unfoldings of the TD-(2,3) models (121) and (122), we obtain:
H ^ ( S R ) = [ ( H ^ ( R D ) ⊗ I P R ) C M T P R × M R ( R ) ] † H ^ M D P R × M S ( S R D ) and H ^ ( S R ) T = [ ( I P S ⊗ S ^ ) C P S R × M S ( S ) ] † X ^ P S N × M R ( R ) .
In Section 7, these alternative solutions will be compared with the closed-form receivers 1.
Necessary identifiability conditions for the algorithms presented in Table 27 are linked with the uniqueness of right inverses of the coding matrices/unfoldings, which gives:
P S ≥ M S , P R ≥ M R
for the KRF receiver of the SKRST system, and
P S ≥ M S R , P R ≥ M R M T
for the KronF receiver of the STST system.
For the alternative solutions proposed above, the additional steps (170) and (171) to estimate H ( S R ) imply the following supplementary identifiability conditions: M D P R ≥ M R and P S N ≥ M S . Note that these conditions are always satisfied when (172) and (173) are satisfied.

6.3. ALS Receivers for the STST and SKRST Systems

To derive ALS-based receivers, we use the unfoldings (82)–(84) for the STST system and (67), (68) and (70) for the SKRST system, with the correspondences in Table 26, to estimate the symbol matrix and the channels in an alternate and iterative way. Equations for the ALS-based receivers are presented in Table 28.

6.4. KRF Receiver for the DKRSTF System

Recall the CPD-3 models (139), (140) and (123), with the definition (136), used to design the KRF-based receiver for the DKRSTF system:
X c ( D ) = I M S × 1 H M D P R × M S ( S R D ) × 2 C ( S ) × 3 V F N × M S ( S )
H ( S R D ) = I M R × 1 B × 2 C ( R ) × 3 ( H ( S R ) ) T
V ( S ) = I R × 1 W ( S ) × 2 A ( S ) × 3 S
B = H ( R D ) W ( R ) ,
where X c ( D ) C M D P R × P S × F N is a contracted form of X ( D ) C M D × P R × P S × F × N , and H M D P R × M S ( S R D ) and V F N × M S ( S ) are matrix unfoldings of the tensors H ( S R D ) and V ( S ) . From the CPD models (174) and (175), we obtain the following tall mode-2 unfoldings:
X F N M D P R × P S ( D ) = [ V F N × M S ( S ) ⋄ H M D P R × M S ( S R D ) ] C ( S ) T
H M D M S × P R ( S R D ) = [ B ⋄ H ( S R ) T ] C ( R ) T .
The above unfoldings are alternately exploited in a two-step KRF-based algorithm. In the first step, the LS estimate of the KRP in (178) is calculated, and its factors are estimated using the KRF method:
V F N × M S ( S ) ⋄ H M D P R × M S ( S R D ) = X F N M D P R × P S ( D ) [ C ( S ) T ] † ⟶ KRF { V ^ F N × M S ( S ) , H ^ M D P R × M S ( S R D ) } .
In the second step, the estimate H ^ M D P R × M S ( S R D ) is reshaped into H ^ M D M S × P R ( S R D ) , and then the LS estimate of the KRP in (179) is calculated. The KRF method is applied again to estimate its factors:
B ⋄ H ( S R ) T = H ^ M D M S × P R ( S R D ) [ C ( R ) T ] † ⟶ KRF { B ^ , H ^ ( S R ) } .
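The KRF step itself — recovering the two factors of a Khatri–Rao product, each column up to a scalar — can be implemented with one rank-1 SVD per column, since each column of A ⋄ B is a vectorized outer product. A NumPy sketch with illustrative shapes (one possible implementation, not the authors' code):

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product."""
    return np.einsum('ij,kj->ikj', A, B).reshape(-1, A.shape[1])

def krf(M, p, q):
    """Recover A (p x N) and B (q x N), each column up to a scalar,
    from M = A kr B using one rank-1 SVD per column."""
    N = M.shape[1]
    A_hat = np.zeros((p, N))
    B_hat = np.zeros((q, N))
    for j in range(N):
        # column j is A[:, j] kron B[:, j]; reshaped, it is the rank-1 matrix a_j b_j^T
        U, s, Vt = np.linalg.svd(M[:, j].reshape(p, q))
        A_hat[:, j] = np.sqrt(s[0]) * U[:, 0]
        B_hat[:, j] = np.sqrt(s[0]) * Vt[0, :]
    return A_hat, B_hat

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((5, 3))
A_hat, B_hat = krf(khatri_rao(A, B), 4, 5)
```

The per-column scalar indeterminacies are the diagonal ambiguity matrices removed in Section 6.8 using one known row.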
Finally, after reshaping V ^ F N × M S ( S ) into V ^ F M S × N ( S ) , LS estimates of S and H ( R D ) are obtained using the following unfolding deduced from the CPD-3 model (176):
V F M S × N ( S ) = [ A ( S ) ⋄ W ( S ) ] S T ,
and the definition (177) of B , which gives:
S ^ T = [ A ( S ) ⋄ W ( S ) ] † V ^ F M S × N ( S ) and H ^ ( R D ) = B ^ [ W ( R ) ] † .
Necessary identifiability conditions are linked to the uniqueness of LS estimates in (180)–(183), i.e., full column-rank conditions for C ( S ) , C ( R ) , A ( S ) W ( S ) , and ( W ( R ) ) T , which gives:
P S ≥ M S , P R ≥ M R , F M S ≥ R , M R ≥ M T .

6.5. KronF Receiver for the STST-MSMKron System

Since the DF protocol is employed for this system, the signals received at the relay are processed to estimate the symbol matrices before re-encoding and forwarding to the destination. From the TD-( Q + 2 ) model (153), we deduce the following tall mode-2 unfolding of the tensor X ( R ) C M R × P S × N 1 × × N Q of signals received at the relay:
X N 1 ⋯ N Q M R × P S ( R ) = [ ( ⊗ q = 1 Q S ( q ) ) ⊗ H ( S R ) ] C R 1 ⋯ R Q M S × P S ( S ) .
Analogously, from the TD model (154), we deduce the following tall mode-2 unfolding of the tensor X ( D ) C M D × P R × N 1 × × N Q :
X N 1 ⋯ N Q M D × P R ( D ) = [ ( ⊗ q = 1 Q S ^ ( q ) ) ⊗ H ( R D ) ] C R 1 ⋯ R Q M S × P R ( R ) .
Equations (185) and (186) are exploited by a KronF-based receiver, at the relay and the destination, to estimate the symbol matrices S ^ ( q ) and S ^ ^ ( q ) , q Q , simultaneously with the channel matrices H ^ ( S R ) and H ^ ( R D ) , using the THOSVD algorithm:
( ⊗ q = 1 Q S ( q ) ) ⊗ H ( S R ) = X N 1 ⋯ N Q M R × P S ( R ) [ C R 1 ⋯ R Q M S × P S ( S ) ] † ⟶ KronF { S ^ ( 1 ) , … , S ^ ( Q ) , H ^ ( S R ) } ,
( ⊗ q = 1 Q S ^ ( q ) ) ⊗ H ( R D ) = X N 1 ⋯ N Q M D × P R ( D ) [ C R 1 ⋯ R Q M S × P R ( R ) ] † ⟶ KronF { S ^ ^ ( 1 ) , … , S ^ ^ ( Q ) , H ^ ( R D ) } .
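One simple way to factor such a multiple Kronecker product (here with Q = 2) is to peel off one factor at a time with successive rank-1 SVDs of rearranged matrices; the THOSVD algorithm mentioned above plays this role in the paper. A sketch under these assumptions, with illustrative shapes and names:

```python
import numpy as np

def nkp_split(X, shape_A):
    """Split X = A kron B into A and B (up to a scalar), given only A's shape."""
    m, n = shape_A
    p, q = X.shape[0] // m, X.shape[1] // n
    # rearrange so that X becomes the rank-1 matrix vec(A) vec(B)^T
    R = X.reshape(m, p, n, q).transpose(0, 2, 1, 3).reshape(m * n, p * q)
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return ((np.sqrt(s[0]) * U[:, 0]).reshape(m, n),
            (np.sqrt(s[0]) * Vt[0, :]).reshape(p, q))

rng = np.random.default_rng(7)
S1 = rng.standard_normal((2, 2))           # first symbol matrix
S2 = rng.standard_normal((3, 2))           # second symbol matrix (Q = 2)
H = rng.standard_normal((4, 3))            # channel factor
X = np.kron(np.kron(S1, S2), H)            # S1 kron S2 kron H

S1_hat, rest = nkp_split(X, S1.shape)      # peel off S1
S2_hat, H_hat = nkp_split(rest, S2.shape)  # then split S2 kron H
```

Each peeling step leaves a reciprocal scalar pair between the extracted factor and the remainder, consistent with the ambiguity relations of Section 6.8.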
Note that the symbol matrices estimated at the relay and the destination are denoted S ^ ( q ) and S ^ ^ ( q ) , respectively. It is worth mentioning that, with the DF protocol, the destination node estimates only the RD channel, while the SR channel is estimated at the relay.

6.6. KRF Receiver for the SKRST-MSMKR System

Similarly to the STST-MSMKron system, the SKRST-MSMKR one employs the DF protocol, and the proposed KRF receiver follows the same steps as previously described for the KronF receiver. Thus, from the CPD- ( Q + 2 ) models (160) and (161), we deduce the following tall mode-2 unfoldings of the tensors X ( R ) and X ( D ) :
X N 1 ⋯ N Q M R × P S ( R ) = [ ( ⋄ q = 1 Q S ( q ) ) ⋄ H ( S R ) ] C ( S ) T ,
X N 1 ⋯ N Q M D × P R ( D ) = [ ( ⋄ q = 1 Q S ^ ( q ) ) ⋄ H ( R D ) ] C ( R ) T .
with C ( S ) C P S × M S and C ( R ) C P R × M S . Note that these equations imply M S = M T , which means that the source and the relay must have the same number of transmit antennas.
Equations (189) and (190) are exploited by a KRF-based receiver, at the relay and the destination, to estimate the symbol matrices S ^ ( q ) and S ^ ^ ( q ) , q Q , with the channel matrices H ^ ( S R ) and H ^ ( R D ) , using the THOSVD algorithm:
( ⋄ q = 1 Q S ( q ) ) ⋄ H ( S R ) = X N 1 ⋯ N Q M R × P S ( R ) [ C ( S ) T ] † ⟶ KRF { S ^ ( 1 ) , … , S ^ ( Q ) , H ^ ( S R ) } ,
( ⋄ q = 1 Q S ^ ( q ) ) ⋄ H ( R D ) = X N 1 ⋯ N Q M D × P R ( D ) [ C ( R ) T ] † ⟶ KRF { S ^ ^ ( 1 ) , … , S ^ ^ ( Q ) , H ^ ( R D ) } .
As for the STST-MSMKron system, the destination node estimates only the RD channel, while the SR channel is estimated at the relay.
Remark 12. 
The necessary identifiability conditions for the KronF and KRF receivers (187)–(188) and (191)–(192) are linked to the uniqueness of LS estimates in these equations, which respectively gives:
P S ≥ M S ∏ q = 1 Q R q , P R ≥ M T ∏ q = 1 Q R q ,
and
P S ≥ M S , P R ≥ M S .
From the conditions (193) and (194), we conclude that the SKRST-MSMKR system is less constraining for choosing the time-spreading lengths than the STST-MSMKron system.
The unfoldings used to derive the proposed semi-blind receivers for all considered relay systems, as well as the factors estimated at each step, are summarized in Table 28. The correspondences with equations and algorithms associated with the generic nested tensor models are also given.

6.7. Zero-Forcing Receivers for the STST and SKRST Systems

In Section 7, the proposed ALS and closed-form receivers of the STST and SKRST systems will be compared to zero-forcing (ZF) receivers which assume a perfect knowledge of communication channels at the destination node. These receivers can be directly derived from the unfoldings of the received signals tensor used to estimate the symbol matrix with the ALS receivers.
Thus, by exploiting the correspondences in Table 26, the ZF receiver for the STST system is deduced from (82) as follows:
S ^ T = { [ I P S ⊗ ( H ( R D ) ⊗ I P R ) C M T P R × M R ( R ) H ( S R ) ] C P S M S × R ( S ) } † X P S M D P R × N ( D ) .
Similarly, the ZF receiver for the SKRST system, deduced from (68), is given by:
S ^ T = [ C ( S ) ⋄ ( ( H ( R D ) ⋄ C ( R ) ) H ( S R ) ) ] † X P S M D P R × N ( D ) .
The SER obtained with ZF receivers serves as a reference to evaluate the proposed semi-blind receivers. ZF receivers are also used to analyze the impact of the choice of some design parameters on the SER performance in the ideal situation where the channels are perfectly known.
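In the noiseless case, the ZF receiver for the SKRST system reduces to one pseudo-inverse applied to an unfolding of the received tensor. The NumPy sketch below uses illustrative dimensions, and its row ordering for the unfolding X P S M D P R × N is our choice, made consistent with the scalar model (147); it is not the authors' code:

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product."""
    return np.einsum('ij,kj->ikj', A, B).reshape(-1, A.shape[1])

rng = np.random.default_rng(8)
M_D, M_R, M_S, P_R, P_S, N = 4, 3, 2, 3, 3, 6
H_SR = rng.standard_normal((M_R, M_S))
H_RD = rng.standard_normal((M_D, M_R))
C_S = rng.standard_normal((P_S, M_S))
C_R = rng.standard_normal((P_R, M_R))
S = rng.standard_normal((N, M_S))

# noiseless received signals, scalar model of Eq. (147),
# unfolded with rows ordered as (pS, mD, pR)
X = np.einsum('dm,pm,ms,qs,ns->qdpn',
              H_RD, C_R, H_SR, C_S, S).reshape(P_S * M_D * P_R, N)

# ZF receiver: pseudo-invert the known effective mixing matrix
G = khatri_rao(H_RD, C_R) @ H_SR       # effective channel unfolding, (M_D P_R) x M_S
S_hat = (np.linalg.pinv(khatri_rao(C_S, G)) @ X).T
```

With perfectly known channels and no noise, the symbols are recovered exactly, which is why the ZF SER serves as a performance reference for the semi-blind receivers.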

6.8. Ambiguity Relations, Identifiability Conditions, and Transmission Rates

Factors estimated using KRF and KronF methods are obtained up to scalar ambiguities, as discussed in Appendix B and Appendix C. To eliminate these ambiguities, some a priori knowledge is required.
For KronF receivers, the knowledge of only one element of one of the matrices involved in each KronP is enough. Thus, for the KronF receiver of the TSTF system, based on KronPs in (165) and (167), we assume the knowledge of the elements s 1 , 1 and h 1 , f , 1 ( R D ) of S and H · f · ( R D ) , for f = 1 , … , F . This a priori knowledge is used to deduce the scaling factors allowing to eliminate the scaling ambiguities as follows:
λ S = s 1 , 1 / s ^ 1 , 1 ; λ H · f · ( R D ) = h 1 , f , 1 ( R D ) / h ^ 1 , f , 1 ( R D ) , for f = 1 , … , F , S ^ ← λ S S ^ ; H ^ F M D P R J R × F M S ( S R D ) ← λ S − 1 H ^ F M D P R J R × F M S ( S R D ) ; H ^ · f · ( R D ) ← λ H · f · ( R D ) H ^ · f · ( R D ) , H ^ · f · ( S R ) ← [ λ H · f · ( R D ) ] − 1 H ^ · f · ( S R ) ,
with H F M D P R J R × F M S ( S R D ) = bdiag f H M D P R J R × M S ( S R D ) ( f ) . The same procedure is applied to the receivers of the TST, STSTF, and STST systems, noting that the TST and STST systems do not exploit frequency diversity, and therefore, F = 1 in these cases.
For KRF receivers, a priori knowledge of one row of one of the matrices involved in each KRP is required. For instance, in the case of the DKRSTF system based on KRPs in (180) and (181), the first rows of V F N × M S ( S ) and B can be calculated using the first rows of S and H ( R D ) , respectively, from the unfolding (123) of V ( S ) and the definition (177) of B , as follows:
V F N × M S ( S ) 1 , · = ( a 1 , · ( S ) ∗ s 1 , · ) W ( S ) T and b 1 , · = h 1 , · ( R D ) W ( R ) .
The ambiguity relations are then given by the following:
Λ V ( S ) = diag ( [ V F N × M S ( S ) ] 1 , · ) [ diag ( [ V ^ F N × M S ( S ) ] 1 , · ) ] − 1 ; Λ B = diag ( b 1 , · ) [ diag ( b ^ 1 , · ) ] − 1 , V ^ F N × M S ( S ) ← V ^ F N × M S ( S ) Λ V ( S ) ; H ^ M D P R × M S ( S R D ) ← H ^ M D P R × M S ( S R D ) Λ V ( S ) − 1 , B ^ ← B ^ Λ B ; H ^ ( S R ) T ← H ^ ( S R ) T Λ B − 1 .
For the KRF receiver of the SKRST system, we have the following ambiguity relations:
Λ S = diag ( s 1 , · ) [ diag ( s ^ 1 , · ) ] − 1 ; Λ H ( R D ) = diag ( h 1 , · ( R D ) ) [ diag ( h ^ 1 , · ( R D ) ) ] − 1 , S ^ ← S ^ Λ S ; H ^ M D P R × M S ( S R D ) ← H ^ M D P R × M S ( S R D ) Λ S − 1 , H ^ ( R D ) ← H ^ ( R D ) Λ H ( R D ) ; H ^ ( S R ) T ← H ^ ( S R ) T Λ H ( R D ) − 1 .
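The scaling-ambiguity removal above amounts to a diagonal rescaling computed from one known row. A toy NumPy sketch (the diagonal scaling Λ is synthetic, and all sizes are illustrative):

```python
import numpy as np

rng = np.random.default_rng(9)
N, M = 5, 3
S = rng.standard_normal((N, M))        # true symbol matrix
H = rng.standard_normal((4, M))        # true channel

# a KRF-type estimate carries an unknown diagonal column scaling Lambda
lam = rng.uniform(0.5, 2.0, M)
S_hat = S * lam                        # S Lambda
H_hat = H / lam                        # H Lambda^{-1}

# knowing the first row of S yields Lambda and removes the ambiguity
Lam = np.diag(S[0, :]) @ np.linalg.inv(np.diag(S_hat[0, :]))
S_corr = S_hat @ Lam
H_corr = H_hat @ np.linalg.inv(Lam)
```

Since the two scalings are reciprocal, correcting one factor with Λ and the other with Λ − 1 restores both matrices exactly in the noiseless case.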
For combined codings based on multiple KronPs and KRPs, the closed-form receivers are described by Equations (187) and (188) for the STST-MSMKron system and (191)–(192) for the SKRST-MSMKR one. Due to the DF protocol, each estimation step at the relay and destination requires a priori knowledge of one element or one row of each symbol matrix to remove ambiguities. In this case, the channels H ( S R ) and H ( R D ) are estimated blindly, that is to say without a priori knowledge on H ( R D ) , unlike the AF protocol.
Finally, for the ALS receivers of the SKRST and STST systems, the procedure to eliminate the ambiguities results from the discussion in Section 3.3.1 and Section 3.3.2 on the uniqueness of the NCPD-4 and NTD-4 models. The fact that the coding matrices/tensors are assumed to be perfectly known at the destination ensures the uniqueness of the tensor models and avoids permutation ambiguity in estimated factor matrices. Only scaling ambiguities have to be removed. By exploiting the ambiguity relations (32) and (42) and the correspondences in Table 26, we deduce the following ambiguity relations for the ALS receivers of the SKRST and STST systems:
NCPD-4:
$$\boldsymbol{\Lambda}_{\mathbf{S}} = \mathrm{diag}(\mathbf{s}_{1,\cdot})\big[\mathrm{diag}(\hat{\mathbf{s}}_{1,\cdot})\big]^{-1} \ \text{ and } \ \boldsymbol{\Lambda}_{\mathbf{H}^{(RD)}} = \mathrm{diag}\big(\mathbf{h}^{(RD)}_{1,\cdot}\big)\big[\mathrm{diag}\big(\hat{\mathbf{h}}^{(RD)}_{1,\cdot}\big)\big]^{-1},$$
$$\hat{\mathbf{S}} \leftarrow \hat{\mathbf{S}}\,\boldsymbol{\Lambda}_{\mathbf{S}};\quad \hat{\mathbf{H}}^{(RD)} \leftarrow \hat{\mathbf{H}}^{(RD)}\,\boldsymbol{\Lambda}_{\mathbf{H}^{(RD)}};\quad \hat{\mathbf{H}}^{(SR)} \leftarrow \boldsymbol{\Lambda}_{\mathbf{H}^{(RD)}}^{-1}\,\hat{\mathbf{H}}^{(SR)}\,\boldsymbol{\Lambda}_{\mathbf{S}}^{-1},$$
NTD-4:
$$\lambda_{\mathbf{S}} = s_{1,1}/\hat{s}_{1,1} \ \text{ and } \ \lambda_{\mathbf{H}^{(RD)}} = h^{(RD)}_{1,1}/\hat{h}^{(RD)}_{1,1},$$
$$\hat{\mathbf{S}} \leftarrow \lambda_{\mathbf{S}}\,\hat{\mathbf{S}};\quad \hat{\mathbf{H}}^{(RD)} \leftarrow \lambda_{\mathbf{H}^{(RD)}}\,\hat{\mathbf{H}}^{(RD)};\quad \hat{\mathbf{H}}^{(SR)} \leftarrow \lambda_{\mathbf{S}}^{-1}\,\lambda_{\mathbf{H}^{(RD)}}^{-1}\,\hat{\mathbf{H}}^{(SR)},$$
where $(\hat{\mathbf{S}}, \hat{\mathbf{H}}^{(RD)}, \hat{\mathbf{H}}^{(SR)})$ are the estimates at convergence. In practice, the a priori information on the RD channel, necessary to remove ambiguities, can be obtained by applying a supervised procedure, that is to say, a pilot symbol or a pilot sequence sent via the relay to the destination in the cases of KronF and KRF receivers, respectively. Several examples of this procedure can be found in the literature in the context of relay systems [40,41,42,43,55,58,60,100].
In Section 4, necessary conditions have been established for identifiability with ALS and closed-form estimation algorithms. By exploiting the correspondences in Table 26, identifiability conditions are derived in terms of design parameters for the proposed semi-blind receivers. For the KRF and KronF receivers of the DKRSTF, STST-MSMKron, and SKRST-MSMKR systems, identifiability conditions are given in (184), (193) and (194).
Table 29 summarizes the identifiability conditions for each proposed semi-blind receiver, as well as the transmission rate for each relay system. An analysis of the results in this table allows us to draw some preliminary conclusions with a view to choosing the design parameters so as to obtain the best trade-off between different performance criteria:
  • ALS-based receivers are less constraining than closed-form ones. The matrices to be inverted in the ALS steps induce greater flexibility in the choice of design parameters compared to closed-form receivers which impose more restrictive conditions on these parameters.
  • Matrix-based coding schemes induce fewer restrictions than tensor codings, i.e., softer constraints for the choice of design parameters.
  • Exploiting chip diversity in the encoding process leads to identifiability conditions that are less constraining without degrading transmission rate.

7. Simulation Results

In this section, we present Monte Carlo simulation results to illustrate the performance of the proposed MIMO relay systems and associated semi-blind receivers.
For each system, information symbols are randomly drawn from a unit energy 4-QAM constellation. Channel coefficients are simulated as independent identically distributed (i.i.d.) complex random variables, following a Gaussian distribution with zero-mean and unit variance. The coding matrices/tensors are generated in such a way that each matrix or matrix unfolding to be inverted in the closed-form receiver algorithms is an orthonormal truncated discrete Fourier transform (DFT) matrix in order to simplify the computation of pseudo-inverses, as discussed in Remark 11.
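As a side illustration of this simplification, the sketch below (NumPy, hypothetical dimensions) builds a truncated DFT matrix with orthonormal columns, whose pseudo-inverse reduces to its conjugate transpose:

```python
import numpy as np

P, M = 8, 4  # hypothetical spreading length and number of antennas (P >= M)

# Unitary P x P DFT matrix, then column truncation to P x M
F = np.fft.fft(np.eye(P)) / np.sqrt(P)
C = F[:, :M]

# Orthonormal columns: C^H C = I, so the pseudo-inverse is simply C^H
assert np.allclose(C.conj().T @ C, np.eye(M))
assert np.allclose(np.linalg.pinv(C), C.conj().T)
```

With such codings, each pseudo-inversion in the closed-form receivers is replaced by a conjugate transposition, which reduces the computational cost.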
All receiving antennas are assumed to be subject to an additive white Gaussian noise (AWGN), so the noisy received signal tensors are simulated as follows:
$$\mathcal{X}^{(R)}_n = \mathcal{X}^{(R)} + \sqrt{N_0}\,\mathcal{N}^{(R)},$$
$$\mathcal{X}^{(D)}_n = \mathcal{X}^{(D)} + \sqrt{N_0}\,\mathcal{N}^{(D)},$$
where the noise-free tensors $\mathcal{X}^{(R)}$ and $\mathcal{X}^{(D)}$ are defined in Section 5 and Section 6, $\mathcal{N}^{(R)}$ and $\mathcal{N}^{(D)}$ represent unit-energy noise tensors at the relay and the destination, respectively, and $N_0$ is the noise spectral density. At each Monte Carlo run, $N_0$ is calculated according to the desired signal-to-noise ratio (SNR) value, varying between −10 and 20 dB, and determined as follows:
$$\mathrm{SNR} = 10\log_{10}\frac{\big\|\mathcal{X}^{(D)}\big\|_F^2}{N_0}.$$
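This noise-injection procedure can be sketched as follows in NumPy (hypothetical tensor dimensions; we assume the unit-energy noise tensor is scaled by the square root of $N_0$ so that the SNR definition is met exactly):

```python
import numpy as np

rng = np.random.default_rng(1)
shape = (4, 4, 8, 2)  # hypothetical dimensions of the received-signal tensor
X = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

# Unit-energy noise tensor (||N||_F = 1)
N = rng.standard_normal(shape) + 1j * rng.standard_normal(shape)
N /= np.linalg.norm(N)

# Choose N0 from the target SNR (in dB), then add the scaled noise
snr_db = 10.0
N0 = np.linalg.norm(X) ** 2 / 10 ** (snr_db / 10)
X_noisy = X + np.sqrt(N0) * N

# The realized SNR matches the target by construction
snr_check = 10 * np.log10(np.linalg.norm(X) ** 2 / np.linalg.norm(X_noisy - X) ** 2)
assert np.isclose(snr_check, snr_db)
```

Normalizing the noise tensor before scaling makes the realized noise energy exactly $N_0$ at every run, rather than only on average.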
The criteria used to evaluate the proposed receivers are the SER and the NMSEs of the estimated SR and RD channels and of the received signal tensor $\mathcal{X}^{(D)}$, reconstructed using the parameters estimated at each Monte Carlo run. The corresponding curves are plotted versus the SNR, and the design parameter values are indicated at the top of each figure.
The SER is computed after the projection of the estimated symbols onto the symbol alphabet. The NMSE of estimated channels and reconstructed signals is defined as follows:
$$\mathrm{NMSE}(\mathbf{H}) = 10\log_{10}\Bigg(\frac{1}{MC}\sum_{mc=1}^{MC}\frac{\big\|\mathbf{H}_{mc}-\hat{\mathbf{H}}_{mc}\big\|_F^2}{\big\|\mathbf{H}_{mc}\big\|_F^2}\Bigg),\qquad \mathrm{NMSE}(\mathcal{X}^{(D)}) = 10\log_{10}\Bigg(\frac{1}{MC}\sum_{mc=1}^{MC}\frac{\big\|\mathcal{X}^{(D)}_{mc}-\hat{\mathcal{X}}^{(D)}_{mc}\big\|_F^2}{\big\|\mathcal{X}^{(D)}_{mc}\big\|_F^2}\Bigg),$$
where $MC$ denotes the number of Monte Carlo runs, and $\mathbf{H}_{mc}$ and $\hat{\mathbf{H}}_{mc}$ represent the channel $\mathbf{H}$ simulated and estimated at the $mc$-th Monte Carlo run, respectively; $\mathcal{X}^{(D)}_{mc}$ and $\hat{\mathcal{X}}^{(D)}_{mc}$ are defined similarly. The performance criteria are averaged over $10^4$ Monte Carlo runs for various system configurations. In addition to the SER and NMSE curves, the computational complexity of each receiver is also assessed through its computation time.
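The NMSE computation can be sketched as follows (NumPy; the run count and channel dimensions are hypothetical, chosen smaller than in the paper for brevity):

```python
import numpy as np

def nmse_db(truth, estimates):
    """NMSE in dB, averaged over Monte Carlo runs as in the formula above."""
    ratios = [np.linalg.norm(h - h_hat) ** 2 / np.linalg.norm(h) ** 2
              for h, h_hat in zip(truth, estimates)]
    return 10 * np.log10(np.mean(ratios))

rng = np.random.default_rng(2)
MC = 100  # hypothetical number of runs (the paper uses 10^4)
H_true = [rng.standard_normal((4, 4)) for _ in range(MC)]
H_est = [h + 0.1 * rng.standard_normal((4, 4)) for h in H_true]  # ~1% error energy
val = nmse_db(H_true, H_est)  # roughly -20 dB for this perturbation level
```

Averaging the per-run energy ratios before taking the logarithm, as in the formula, weights every run equally regardless of its channel norm.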
To perform a comprehensive evaluation of the considered systems, we follow the methodology summarized in Figure 19, consisting of four main comparisons concerning receiver algorithms, design parameters, relaying protocols, and coding schemes. The STST and SKRST codings were selected as reference systems, except for in the last set of simulations, where all coding schemes are compared.

7.1. Comparison of Receiver Algorithms

The first simulations aimed to compare the performance of the closed-form (KronF/KRF) and ALS algorithms used in the semi-blind receivers presented in Table 28 for the STST and SKRST systems. A comparison was also conducted with the corresponding ZF receivers.
In the case of ALS receivers, convergence is declared when the difference between two successive normalized reconstruction errors becomes smaller than a predefined threshold $\eta = 10^{-5}$, i.e., $|\epsilon_t - \epsilon_{t-1}| \le \eta$, with:
$$\epsilon_t = \frac{\big\|\mathcal{X}^{(D)}_n-\hat{\mathcal{X}}^{(D)}_t\big\|_F^2}{\big\|\mathcal{X}^{(D)}_n\big\|_F^2}.$$
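The stopping rule can be sketched as follows (plain Python; the error sequence is synthetic, standing in for the reconstruction errors produced by the ALS iterations):

```python
def converged(errors, eta=1e-5):
    """Return the first iteration t with |eps_t - eps_{t-1}| <= eta, else the last one."""
    for t in range(1, len(errors)):
        if abs(errors[t] - errors[t - 1]) <= eta:
            return t
    return len(errors) - 1

# Synthetic, geometrically decreasing reconstruction errors
eps = [0.5 * 0.1 ** t for t in range(10)]
t_stop = converged(eps)  # stops once successive errors differ by at most 1e-5
```

Because the test compares successive errors rather than the error itself, the iteration stops when the ALS updates no longer improve the fit, even if the residual noise floor is above the threshold.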
Figure 20 shows the SER versus SNR for the STST and SKRST systems. As expected, the best SER is obtained with ZF receivers due to a perfect knowledge of the channels at the destination. Slightly better performance is observed when ALS receivers are employed, compared to closed-form receivers. However, this improvement strongly depends on the choice of the threshold for convergence. A lower threshold improves the performance but at the cost of a higher computation time.
From the results in Figure 20, we conclude that the STST system outperforms the SKRST system in terms of SER. For example, for a SER of $10^{-3}$, there is a gap of about 8 dB between the STST and SKRST performances. This is due to the use of tensor codings that provide increased diversity compared to matrix codings. Analyzing the signals encoded at the source, in Table 25, we conclude that, for STST, each symbol $s_{n,r}$ is repeated $M_S P_S$ times, while, for SKRST, each symbol $s_{n,m_S}$ is repeated only $P_S$ times. On the other hand, it is worth noting that the STST system requires a higher computation time, with nearly the same number of iterations for convergence, compared to the SKRST system, as illustrated in Figure 21a,b.
Figure 21a compares the computation times (in seconds) versus SNR, and Figure 21b shows the normalized reconstruction error ϵ t  versus the number of iterations for four different SNR values. As expected, closed-form and ZF receivers have constant computation times versus SNR, the shortest being obtained with ZF receivers since they only involve the estimation of symbols. Due to the knowledge of coding matrices and tensors, the three-step ALS receivers converge in very few iterations (three iterations for high SNR and up to six iterations for low SNR). Nevertheless, closed-form receivers require much less calculation than ALS-based ones, the cheapest being the KRF receiver for SKRST.
Figure 22 shows the NMSE of the estimated channels $\mathbf{H}^{(SR)}$ and $\mathbf{H}^{(RD)}$ versus SNR for the ALS and KronF/KRF receivers of the STST and SKRST systems. For both channels, the performance of ALS receivers is very close to that of closed-form receivers. We note that the alternative method proposed in Section 6.2, denoted as KRF2 and KronF2 in Figure 22, which combines both closed-form algorithms in parallel, allows for a significant improvement of the estimate of $\mathbf{H}^{(RD)}$.
From this first set of simulation results, we can conclude that ALS and closed-form receivers lead to a gap of approximately 3 dB for a SER of $10^{-2}$, compared to ZF receivers. Additionally, ALS receivers slightly outperform closed-form receivers, which are penalized by error propagation, at the cost of a higher computation time.

7.2. Impact of Design Parameters and Relay Protocol

The following simulations aimed to evaluate the impact of certain design parameters on the SER performance of STST and SKRST systems using ZF receivers. More precisely, we evaluate the impact of time-spreading lengths and numbers of antennas while keeping the other design parameters fixed. The impact of the number Q of symbol matrices in combined codings STST-MSMKron and SKRST-MSMKR is also evaluated.

7.2.1. Impact of Time-Spreading Lengths: $P_S$ and $P_R$

Figure 23 shows the SER versus SNR for different values of the time-spreading lengths at the source and relay nodes. We compare the following configurations: $(P_S, P_R) = (4,4)$, $(4,8)$, and $(8,4)$. From the simulation results, we deduce that an increase in time-spreading lengths, whether at the source or the relay, leads to a gain in diversity and, therefore, to an improvement in the SER. A comparison of the configurations $(P_S, P_R) = (4,8)$ and $(8,4)$ shows that the SER improves more by increasing $P_S$ rather than $P_R$. This is because $P_S$ modifies the system diversity at both the relay and destination, while $P_R$ acts only at the destination. On the other hand, from Table 29, we conclude that an increase in $P_S$ degrades the transmission rate more than an increase in $P_R$.
The above conclusions regarding the impact of time-spreading lengths on SER performance are valid for both the STST and SKRST systems, with, however, a greater impact for STST, whose SER vanishes for an SNR greater than 5 dB.

7.2.2. Impact of Numbers of Antennas: $M_S$, $M_R$, $M_T$, and $M_D$

Figure 24 shows the SER versus SNR for different configurations of the numbers of antennas at the source ($M_S$), the relay ($M_R$, $M_T$), and the destination ($M_D$). Due to the constraint imposed on the SKRST system, we assume that $M_R = M_T$ for the following configurations: $(M_S, M_R, M_T, M_D) = (2,2,2,2)$, $(4,2,2,2)$, $(2,4,4,2)$, and $(2,2,2,4)$, compared using the ZF receivers of the STST and SKRST systems.
The $(2,2,2,2)$ configuration serves as a reference to evaluate the impact of an increase in the number of antennas at the different nodes.
With the SKRST system (Figure 24a), increasing the number $M_S$ of antennas at the source leads to a degradation of the SER for all SNR values. This is due to the dimension of the symbol matrix depending on $M_S$ and, therefore, to an increase in the number of symbols to be estimated when $M_S$ is increased without changing the diversity of the system. With the $(2,4,4,2)$ and $(2,2,2,4)$ configurations, we observe some improvement in performance compared to the $(2,2,2,2)$ configuration. In particular, an increase in the number $M_R$ of receive antennas at the relay allows a more significant improvement in the SER than an increase in the number $M_D$ of receive antennas at the destination, because $M_R$ is directly linked to the spatial diversity provided via the coding matrix $\mathbf{C}^{(R)}$ at the relay, which is not the case for $M_D$.
For the STST system (Figure 24b), any increase in the numbers of antennas improves the SER. The $(2,4,4,2)$ configuration provides the most significant improvement because $M_R$ and $M_T$ are both involved in the dimensions of the coding tensor $\mathcal{C}^{(R)}$ at the relay. Comparing the $(4,2,2,2)$ and $(2,2,2,4)$ configurations, we observe that both give nearly the same SER, with a slight advantage when increasing the number $M_S$ of antennas at the source.
In Figure 25, we compare the DKRSTF, STSTF, and TSTF systems for the following configurations: $(M_T, M_R) = (2,2)$ and $(2,4)$. As discussed in Section 5.4.1, the configuration $M_R > M_T$ is associated with a virtual array at the destination for the DKRSTF system, due to space coding at the relay. Note that all three systems exploit frequency diversity, with frequency-selective channels for the STSTF and TSTF systems, while flat-fading channels are assumed for the DKRSTF system. As expected, from the results in Figure 25, we conclude that increasing the number $M_R$ of receive antennas at the relay allows for an improvement of the SER, with the best improvement obtained using DKRSTF. For example, for a SER of $10^{-2}$, the DKRSTF system achieves a performance gain of approximately 8 dB between the two configurations, while the STSTF and TSTF systems provide a maximum gain of 2 dB. Note also that the $(M_T, M_R) = (2,4)$ configuration with DKRSTF gives the same SER as the $(M_T, M_R) = (2,2)$ configuration with STSTF, because the doubling of $M_R$ compensates for the lack of frequency coding at the relay in the case of DKRSTF, while $F = 2$ with STSTF. See the equivalence relationships (142) between the two codings at the relay. The TSTF system provides the best SER, thanks to a greater diversity provided via tensor coding.

7.2.3. Impact of the Number Q of Symbol Matrices in Combined Codings

We now evaluate the impact of the number $Q$ of symbol matrices in the MSMKron and MSMKR codings by varying the numbers $N_q$ of data streams for $q = 1, \ldots, Q$, so as to have the same number of symbols to estimate for each value of $Q$.
Figure 26 shows the SER versus SNR obtained with the STST-MSMKron and SKRST-MSMKR systems for the following configurations: $(N_q, Q) \in \{(6,1), (3,2), (2,3)\}$, $q = 1, \ldots, Q$. From the simulation results, we conclude that increasing $Q$ improves the SER, with the best performance obtained for $Q = 3$, leading to an SNR gain of 3 dB with STST-MSMKron and 4 dB with SKRST-MSMKR, for a SER of $10^{-2}$, compared to the case of $Q = 1$. This improvement is due to the fact that each symbol is repeated $N_q^{Q-1}$ times with the SKRST-MSMKR coding and $(N_q R_q)^{Q-1}$ times with the STST-MSMKron coding, which induces a greater repetition of each transmitted symbol when $Q$ is increased, thanks to the mutual space-time spreading of information symbols provided via the multiple KRPs and KronPs of symbol matrices. The best SER is obtained with the STST-MSMKron coding.

7.2.4. Impact of Relaying Protocol: AF and DF

In this section, we compare the impact of the AF and DF protocols for the STST and SKRST systems. With the DF protocol, the KronF and KRF receivers are directly deduced from the ones proposed for the combined codings by fixing Q = 1 . See Section 6.5 and Section 6.6.
Figure 27 and Figure 28 respectively depict the SER and NMSE of estimated channels versus SNR. From Figure 27, we observe a clear advantage of using the DF protocol at the relay to improve the SER at the destination. Indeed, estimating the symbols at the relay before their re-encoding and then their transmission to the destination makes it possible to avoid the amplification of the noise contained in the signals received at the relay and directly coded in the case of the AF protocol. Moreover, it is obvious that, whatever protocol is used, the STST system achieves a better SER than the SKRST system. This corroborates the greater efficiency of tensor coding compared to matrix coding.
From Figure 28a,b, we observe that, for both protocols, the NMSEs of the estimated channels obtained with the STST and SKRST systems are nearly identical, although the SKRST system uses more a priori information on the RD channel (a row instead of a single element of $\mathbf{H}^{(RD)}$ with STST). This additional information compensates for the consideration of less diversity with SKRST than with STST.
Moreover, regarding the DF protocol, we observe that the SR and RD channels, which play a symmetrical role in terms of estimation, present the same NMSE. Note that this estimation is then perfectly blind in the sense that the a priori information needed to remove ambiguities only concerns the transmitted symbols.
As expected, and for the same reason as for the SER, from Figure 28a, we conclude that the SR channel is better estimated with the DF protocol than with AF, while, from Figure 28b, we draw the conclusion that the NMSE of the RD channel is smaller with AF than with DF, thanks to a priori information on the RD channel used with AF.

7.3. Comparison of Coding Schemes

In this section, we carry out a comparison of all the relay systems considered in this paper, in terms of SER (Figure 29), the NMSE of estimated channels (Figure 30), the NMSE of reconstructed signals, and the computation time (Figure 31), versus SNR. The design parameters are chosen such that the transmission rate is approximately the same for all simulated systems.
In Figure 29, we observe the best performance when tensor-based codings (STST-MSMKron, TSTF, TST, STSTF, and STST) are used. The best SER is obtained with the combined STST-MSMKron coding, thanks to the multiple Kronecker product of symbol matrices that induces mutual diversity in the transmitted signals and to the use of the DF protocol.
For codings with the AF protocol, a classification can be made according to a continuous improvement in SER performance: (1) STST, (2) STSTF, (3) TST, and (4) TSTF, which corresponds to an increasing diversity associated with an increasing order (from four to seven) of the tensor $\mathcal{X}^{(D)}$ of signals received at the destination.
Comparing the SERs obtained with the TST and STSTF codings allows us to conclude that exploiting chip diversity at both the source and relay nodes ($J_S$ and $J_R$ dimensions of $\mathcal{X}^{(D)}$) is more beneficial than exploiting frequency diversity, which introduces only one dimension ($F$) to $\mathcal{X}^{(D)}$.
Regarding matrix-based codings, we draw a similar conclusion for the improvement in SER when increasing signal diversity, leading to the following classification (from worst to best): (1) SKRST, (2) DKRSTF, and (3) SKRST-MSMKR.
Concerning channel estimation, Figure 30a and Figure 30b respectively plot the NMSE of the estimated SR and RD channel matrices/tensors for all the systems considered.
To facilitate the comparative analysis of simulation results in these figures, Table 30 provides the numerical values of the NMSE of estimated channels for an SNR of 0 dB. This table also presents the values of the NMSE of reconstructed signals, shown in Figure 31a.
Note that the SR channel is estimated without a priori knowledge, unlike the RD channel for which a coefficient or a row is assumed to be known a priori to remove ambiguities, which explains a better estimation of the RD channel, except with the STST-MSMKron and SKRST-MSMKR systems for which the use of the DF protocol avoids any a priori knowledge on the channels.
As expected, the best NMSE results are achieved with codings that provide greater diversity. For instance, when comparing the TSTF and TST systems with their simplified versions, STSTF and STST, it is evident that TSTF and TST deliver superior performance.
For the same reason, tensor-based coding systems outperform matrix-based coding systems. It is also important to highlight the significant improvement in RD channel estimation observed with KRF-based receivers (for SKRST and DKRSTF systems) compared to KronF-based receivers (for STST and STSTF systems). This improvement is due to the a priori knowledge of an entire RD channel row required to remove ambiguities with KRF-based receivers, whereas KronF-based receivers only require a priori knowledge of a single channel coefficient, as detailed in Equations (200) and (197), respectively.
Finally, Figure 31 compares the performance of all relay systems in terms of the NMSE of reconstructed received signals and computation time. It is worth noting that the reconstruction of the signals received at the destination using symbol and channel estimates provides a meaningful measure to compare the effectiveness of the proposed semi-blind receivers.
In Figure 31a, we observe that the best performance is obtained with the TSTF system, with a SNR gain of about 2 dB compared to the TST system which gives the second best performance, for any SNR value. This better performance is achieved at the cost of a higher computation time, as shown in Figure 31b. In general, as already mentioned, we observe that the lowest performance is obtained with matrix codings that offer less diversity than tensor codings, inducing a lower computational complexity and, therefore, a lower computation time.
Comparing the results in Figure 31b and Table A1 in Appendix D makes the difference in computational cost even more evident. For instance, when comparing the STST and SKRST systems, which both generate fourth-order tensors of received signals, we observe that the complexity of the STST/KronF receiver is $O(NM^2P_R) + O(M^4)$, while for SKRST/KRF, it is $O(NM^2P) + O(M^3)$, which corroborates the higher computational cost of STST/KronF, as shown in Figure 31b. A similar conclusion can be drawn from the comparison of the STSTF/KronF and DKRSTF/KRF receivers. For the STST-MSMKron/KronF receiver, the complexity increases with a multiplicative factor $R^Q$, which induces an exponential growth of the complexity as the number $Q$ of symbol matrices in the coding scheme increases. This highlights the trade-off between computational cost and SER performance, which improves as $Q$ increases.
It should be noted that the computational costs depend more on the number of antennas than on the other design parameters, with, in particular, $M^4$ and $M^5$ terms in the complexities of the KronF and ALS algorithms, respectively. This can become problematic in the case of massive MIMO systems involving a large number of antennas. For such systems, optimizing the computational cost will be crucial to avoid possible real-time difficulties.
Comparing the results in Figure 31a and Table 30 with those in Figure 29, we observe that the NMSE of reconstructed signals follows the same improvement as the SER when the system diversity is increased with the AF protocol, which is not the case for the combined codings, due to a degradation in channel estimation with the DF protocol. Regarding robustness to noise, Table 31 provides the SNR thresholds from which the systems achieve the desired performance in terms of the SER or the NMSE of the reconstructed signals, deduced, respectively, from Figure 29 and Figure 31a. Two different values of SER and NMSE are considered. For example, for a fixed SER of $10^{-2}$, the STST-MSMKron system is the most robust to noise, achieving the desired SER performance from an SNR level of −5 dB. On the other hand, for the NMSE of the reconstructed signals set to −10 dB, the TSTF system is the most robust, with the required NMSE performance obtained from an SNR level of −2 dB. Table 31 highlights the systems with the best (green) and least (red) robustness to noise relative to the SNR levels required to achieve the desired performance. This analysis provides key information for selecting a system based on the desired balance between performance and noise tolerance. Note that the absence of an SNR threshold in Table 31 means that the corresponding performance is achieved for any SNR value greater than the largest value for which the SER is non-zero. For example, with the TST/KronF receiver, for any SNR value greater than 0 dB, the SER is zero.
To complement the previous comparison and help the user achieve the best trade-off between performance and complexity, taking into account the constraints imposed by the necessary identifiability conditions (NIC) and the required a priori knowledge (AK), Table 32 summarizes the characteristics of each system and the performance of associated semi-blind receivers, with the following criteria: (i) diversities, (ii) assumption of the channels (flat fading (FF) versus frequency selective fading (FSF)), and (iii) performance in terms of the NIC, AK, SER, NMSE of estimated channels, and computation time (CT). Performance is compared using positive signs (+) for advantages and negative signs (−) for disadvantages.
From Table 32, we can draw the following conclusions:
  • Diversity: All coding schemes used at both the source and the relay include space-time (ST) coding. TST coding incorporates additional chip diversity, while STSTF and DKRSTF take into account frequency diversity. TSTF is the most comprehensive coding, simultaneously exploiting the space-time-chip-frequency diversities $(M, P, J, F)$, resulting in the best performance.
  • Channels: Systems based on TSTF and STSTF codings consider frequency-selective fading channels, i.e., third-order channel tensors, while all other systems assume flat fading channels, i.e., channel matrices. It is worth noting that, despite a larger number of channel coefficients to be estimated, the best SER is obtained with TSTF coding due to the exploitation of all diversities, at the cost of a higher computation time.
  • Necessary identifiability conditions: These conditions are related to the uniqueness of the pseudo-inverses of coding matrices or matrix unfoldings of coding tensors, depending on both the coding and the receiver type (ALS/KronF/KRF). ALS receivers are the least restrictive, i.e., the most flexible in the choice of design parameters. TSTF and TST are less restrictive than their simplified versions (STSTF and STST), thanks to the incorporation of chip diversity. Matrix codings entail fewer constraints than tensor codings. Note that frequency diversity has no impact on the identifiability conditions.
  • A priori knowledge: With matrix codings (SKRST and DKRSTF) leading to CP models for received signals, associated semi-blind receivers use the KRF algorithm, which requires knowledge of an entire row of symbol and RD-channel matrices to remove ambiguities. On the other hand, systems based on tensor codings are modeled using Tucker or generalized Tucker models for which the semi-blind receivers use the KronF algorithm, which requires knowledge of only one element of the symbol and RD-channel matrices. With the DF protocol, only a few pilot symbols are needed, with receivers performing blind channel estimation in the sense that no a priori channel information is needed. In particular, the STST-MSMKron system requires knowledge of only one pilot symbol in each symbol matrix.
  • SER and NMSE of estimated channels: These performances have already been commented in detail previously. It is worth highlighting that the TSTF and STST-MSMKron systems provide the best performance in terms of symbol estimation. Regarding channel estimation, the performance of the TSTF and TST systems are particularly remarkable.
  • Computation time: As expected, iterative receivers based on the ALS algorithm require the highest computation times due to their iterative nature, which leads to a refinement in the estimation of unknown parameters. In general, the computation time reflects the amount of diversity taken into account. Higher diversity leads to higher-order tensors, which induce higher computation times. Matrix coding systems perform the best in terms of computational complexity.
In summary, the numerical simulations performed reveal a clear hierarchy in the performances of relay systems, depending on diversity management (via the choice of the coding scheme), the relaying protocol, and the channel characteristics, which induce a more or less complex receiver and guide the choice of coding in order to satisfy a desired trade-off for a given application. Systems incorporating frequency and/or chip diversity present superior overall performance, with less restrictive identifiability conditions when using chip diversity and less a priori knowledge with KronF-type receivers. Matrix-based codings are more suitable for low-complexity requirements, but for better performance, tensor codings such as TST, TSTF, or STST-MSMKron are more appropriate, at the cost of higher computational complexity. Moreover, these three coding schemes offer the best balance between SER and NMSE, in addition to exhibiting greater robustness to noise for different SNR levels.

8. Conclusions and Perspectives

In this paper, we have first provided an in-depth overview of tensor models, with a detailed description of nested tensor decompositions which are very useful to represent cooperative communication systems. Initially proposed for fourth-order tensors, nested decompositions have been generalized to higher-order tensors using graph-based representations and a novel interpretation in terms of cascading tensor models. Some new nested models have been introduced to represent relay systems using tensor codings. Several parameter estimation methods have been described, with a particular focus on closed-form algorithms based on KRF and KronF methods when some factors of nested models are a priori known, as is the case with coding matrices/tensors in the context of relay systems.
In a second part, a thorough survey of cooperative communication systems, and more particularly of one-way two-hop MIMO relay systems, was carried out using various coding strategies, with the aim of showing how these strategies impact the tensor model of signals received at the destination node. Capitalizing on this tensor model, two classes of semi-blind receivers were proposed to jointly estimate the transmitted information symbols and the individual communication channels: iterative receivers based on ALS and closed-form ones using KRF and KronF methods, which consist of rank-one matrix or tensor approximations. Extensive Monte Carlo simulation results were presented to analyze and compare the impact of the coding strategy, of the choice of design parameters, of the relay protocol and of the type of receiver on the system performance in terms of SER, channel estimate NMSE, and computation time. Simulation results show that the best SER performance is achieved with the most general TSTF tensor coding at the price of higher computational complexity.
Several extensions of this work can be considered, such as multi-user and multi-relay scenarios, the two-way relaying case, and power and resource allocation, as well as three-dimensional (3D) polarized channels that combine dual-polarized antenna arrays with double-directional channels, and time-varying mmWave channels characterized by an inherent sparse and low-rank structure that allows for the employment of compressed sensing methods. In terms of cooperative communications, combinations of relay-, IRS-, and UAV-aided communication systems are particularly interesting to study for future 6G wireless networks. We believe the tensor codings and nested tensor models highlighted in this paper for two-hop relay systems can be very useful in designing such new combined assisted communication networks.
The development of ISAC systems opens the way to new research problems in which sensing and cooperative communication systems are combined in a general framework allowing for estimates of target physical parameters, e.g., angles of arrival/departure (AoA/AoD), time delays, and Doppler shifts, at a sensing base station, and of communication channels and information symbols at a user equipment. The nested tensor models studied in this paper are also very attractive for modeling such ISAC systems.

Funding

This work was supported in part by the CAPES-COFECUB bilateral research project number Ma 985-23. The work of Danilo S. Rocha was supported in part by CAPES—Finance Code 001.

Conflicts of Interest

The authors declare no conflicts of interest. The funders had no role in the design of the study; in the collection, analyses, or interpretation of data; in the writing of the manuscript; or in the decision to publish the results.

Appendix A. List of Acronyms

Acronym: Definition [References]

Tensor models:
TD: Tucker decomposition [7]
TTD: Tensor train decomposition [44]
NTD: Nested TD [41]
CNTD: Coupled NTD [42]
DCNTD: Doubly coupled NTD [43]
GTD: Generalized TD [38]
NGTD: Nested GTD
CPD: Canonical polyadic decomposition [34]
NCPD: Nested CPD [39,40]
CCPD: Coupled CPD [101,102]
CNCPD: Coupled NCPD
DCNCPD: Doubly coupled NCPD [43]
PARALIND: PARAFAC with linearly dependent loadings [103]
CONFAC: Constrained factor decomposition [35,36]
BTD: Block-term decomposition [104]
PARAFAC: Parallel factors decomposition [8,47]
PARATUCK: PARAFAC-Tucker decomposition [51,52,53]
CNTD-CPD: Coupled nested TD-CPD [97]

Codings:
STST: Simplified tensor space-time [41]
STSTF: Simplified tensor space-time-frequency
TST: Tensor space-time [37]
TSTF: Tensor space-time frequency [38]
SKRST: Simplified Khatri–Rao space-time [40,53]
DKRSTF: Double Khatri–Rao space-time frequency [39]
MSMKR: Multiple symbol matrices Khatri–Rao (product) [98]
MSMKron: Multiple symbol matrices Kronecker (product) [97]

Algorithms:
ALS: Alternating least squares
KRF: Khatri–Rao factorization
KronF: Kronecker factorization
ZF: Zero-forcing

Other acronyms:
AF: Amplify-and-forward
DF: Decode-and-forward
SISO: Single-input single-output
MISO: Multiple-input single-output
MIMO: Multiple-input multiple-output
CDMA: Code division multiple access
OFDM: Orthogonal frequency-division multiplexing
IRS: Intelligent reflecting surface
UAV: Unmanned aerial vehicle
SER: Symbol error rate
NMSE: Normalized mean square error

Appendix B. KRF Method

In this Appendix, we present the Khatri–Rao factorization method, denoted as KRF, to estimate the matrix factors of a Khatri–Rao product, abbreviated as KRP, based on the minimization of an LS criterion, first proposed in [49].
Let us consider the case of the noisy KRP $\mathbf{Y} = \mathbf{A} \diamond \mathbf{B} + \mathbf{E}$ of two matrices $\mathbf{A} = [\mathbf{a}_1, \ldots, \mathbf{a}_R] \in \mathbb{K}^{I \times R}$ and $\mathbf{B} = [\mathbf{b}_1, \ldots, \mathbf{b}_R] \in \mathbb{K}^{J \times R}$, where $\mathbf{E}$ is an additive noise matrix due to modeling error and measurement noise, with the following LS criterion to be minimized:
$$\min_{\mathbf{A}, \mathbf{B}} \left\| \mathbf{Y} - \mathbf{A} \diamond \mathbf{B} \right\|_F^2. \quad \text{(A1)}$$
By vectorizing $\mathbf{Y}$, this criterion can be rewritten as follows:
$$\min_{\mathbf{A}, \mathbf{B}} \left\| \mathrm{vec}(\mathbf{Y}) - \mathrm{vec}(\mathbf{A} \diamond \mathbf{B}) \right\|_2^2 = \min_{\mathbf{a}_r, \mathbf{b}_r,\; r = 1, \ldots, R} \; \sum_{r=1}^{R} \left\| \mathbf{y}_r - \mathbf{a}_r \otimes \mathbf{b}_r \right\|_2^2. \quad \text{(A2)}$$
Since each term in this sum can be minimized separately, the column vectors $\mathbf{a}_r \in \mathbb{K}^{I}$ and $\mathbf{b}_r \in \mathbb{K}^{J}$ are estimated by minimizing $\min_{\mathbf{a}_r, \mathbf{b}_r} \| \mathbf{y}_r - \mathbf{a}_r \otimes \mathbf{b}_r \|_2^2$.
The idea behind the KRF method is to form a rank-one matrix by unvectorizing each KRP $\mathbf{y}_r$, for $r = 1, \ldots, R$, as
$$\mathbf{Y}_r \triangleq \mathrm{unvec}(\mathbf{y}_r) = \mathrm{unvec}(\mathbf{a}_r \otimes \mathbf{b}_r) = \mathbf{b}_r \mathbf{a}_r^T = \mathbf{b}_r \circ \mathbf{a}_r \in \mathbb{K}^{J \times I}, \quad \text{(A3)}$$
and to calculate the rank-one approximation of this matrix using the reduced SVD algorithm. The vectors $(\mathbf{a}_r, \mathbf{b}_r)$ are determined via a simple identification of the formed rank-one matrix with the left and right singular vectors associated with the largest singular value. The LS criterion is then replaced with a rank-one matrix approximation:
$$\min_{\mathbf{a}_r, \mathbf{b}_r} \left\| \mathbf{Y}_r - \mathbf{b}_r \mathbf{a}_r^T \right\|_F^2 = \min_{\mathbf{a}_r, \mathbf{b}_r} \left\| \mathbf{Y}_r - \mathbf{b}_r \circ \mathbf{a}_r \right\|_F^2, \quad r = 1, \ldots, R, \quad \text{(A4)}$$
where $\circ$ denotes the outer product. The vectors $\mathbf{a}_r$ and $\mathbf{b}_r$ are estimated by computing the rank-one reduced SVD of $\mathbf{Y}_r$:
$$\mathbf{Y}_r = \sigma_r^{(1)} \mathbf{u}_r^{(1)} \big( \mathbf{v}_r^{(1)} \big)^H, \quad \text{(A5)}$$
where $\sigma_r^{(1)}$ denotes the largest singular value, and $\mathbf{u}_r^{(1)}$ and $\mathbf{v}_r^{(1)}$ are the left and right singular vectors associated with $\sigma_r^{(1)}$, respectively. Identifying the reduced SVD (A5) with $\mathbf{b}_r \mathbf{a}_r^T$ leads to the following estimates [49]:
$$\hat{\mathbf{a}}_r = \sqrt{\sigma_r^{(1)}}\, \mathbf{v}_r^{(1)*}, \qquad \hat{\mathbf{b}}_r = \sqrt{\sigma_r^{(1)}}\, \mathbf{u}_r^{(1)}. \quad \text{(A6)}$$
Note:
  • The $R$ columns of $\mathbf{A}$ and $\mathbf{B}$ can be estimated in parallel by computing the rank-one approximation of $R$ matrices;
  • Each vector $\mathbf{a}_r$ and $\mathbf{b}_r$, $r = 1, \ldots, R$, is only estimated up to a scaling factor, since $(\lambda_r \mathbf{a}_r) \otimes (\frac{1}{\lambda_r} \mathbf{b}_r) = \mathbf{a}_r \otimes \mathbf{b}_r$ for every nonzero $\lambda_r \in \mathbb{K}$. To eliminate this scaling ambiguity, we need to know one component of one of these two vectors. For example, if we assume that the first component $(\mathbf{a}_r)_1$ is equal to 1, then $\lambda_r = \frac{1}{(\hat{\mathbf{a}}_r)_1}$, and the ambiguity is removed as follows:
    $$\hat{\hat{\mathbf{a}}}_r = \frac{1}{(\hat{\mathbf{a}}_r)_1}\, \hat{\mathbf{a}}_r, \qquad \hat{\hat{\mathbf{b}}}_r = (\hat{\mathbf{a}}_r)_1\, \hat{\mathbf{b}}_r, \quad \text{(A7)}$$
    where $\hat{\mathbf{a}}_r$ and $\hat{\mathbf{b}}_r$ are defined in (A6). We can conclude that the factors $\mathbf{A}$ and $\mathbf{B}$ of a Khatri–Rao product are estimated without scaling ambiguity if we have a priori knowledge of one row of $\mathbf{A}$ or $\mathbf{B}$. This follows from the fact that $\mathbf{A} \boldsymbol{\Lambda} \diamond \mathbf{B} \boldsymbol{\Lambda}^{-1} = \mathbf{A} \diamond \mathbf{B}$ for any non-singular diagonal matrix $\boldsymbol{\Lambda} = \mathrm{diag}(\lambda_1, \ldots, \lambda_R) \in \mathbb{K}^{R \times R}$.
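As a concrete illustration, the two-factor KRF procedure described above can be sketched in a few lines of NumPy. This is our own minimal sketch, not code from the paper; the function and variable names (`khatri_rao`, `krf`) are illustrative, and the scaling ambiguity is removed by assuming, as in the text, that the first row of the factor being normalized is known (here, all ones).

```python
import numpy as np

def khatri_rao(A, B):
    """Column-wise Kronecker (Khatri-Rao) product of A (I x R) and B (J x R)."""
    I, R = A.shape
    J = B.shape[0]
    return (A[:, None, :] * B[None, :, :]).reshape(I * J, R)

def krf(Y, I, J):
    """Estimate A (I x R) and B (J x R) from Y ~ khatri_rao(A, B).

    Each column y_r is unvectorized into a J x I rank-one matrix
    Y_r = b_r a_r^T, whose dominant SVD triplet yields (a_r, b_r)
    up to a column-wise scaling ambiguity.
    """
    R = Y.shape[1]
    A_hat = np.zeros((I, R), dtype=Y.dtype)
    B_hat = np.zeros((J, R), dtype=Y.dtype)
    for r in range(R):
        Yr = Y[:, r].reshape(I, J).T      # unvec: Y_r = b_r a_r^T, of size J x I
        U, s, Vh = np.linalg.svd(Yr, full_matrices=False)
        # Vh[0] is the conjugated dominant right singular vector, as in (A6)
        A_hat[:, r] = np.sqrt(s[0]) * Vh[0]
        B_hat[:, r] = np.sqrt(s[0]) * U[:, 0]
    # remove the scaling ambiguity, assuming the first row of A is known (all ones)
    lam = A_hat[0, :]
    return A_hat / lam, B_hat * lam
```

On noiseless data, the normalization forces the per-column scaling factor to 1, so both factors are recovered exactly; with noise, the same code returns the rank-one LS estimates.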
We now present a generalization of the KRF method to the case of a multiple KRP [33]:
min A ( n ) , n N Y n = 1 N A ( n ) F 2 ,
with A ( n ) = [ a 1 ( n ) , , a R ( n ) ] K I n × R , n N . This generalization is based on the rank-one approximation of R tensors of order N constructed from the columns of the factors A ( n ) . Let us consider the column y r = n = 1 N a r ( n ) K I 1 I N , for r R , and the associated rank-one tensor X r = n = 1 N a r ( n ) K I 1 × × I N defined by the transformation y r X r such as:
( X r ) i 1 , , i N = ( y r ) i 1 i N ¯ ,
where
i 1 i N ¯ i N + n = 1 N 1 ( i n 1 ) k = n + 1 N I k .
As for the case of the KRP of two vectors, the LS criterion (A8) is replaced with:
min a r ( n ) , n N y r n = 1 N a r ( n ) F 2 = min a r ( n ) , n N X r n = 1 N a r ( n ) F 2 , r R .
The columns a r ( n ) are then determined in calculating the rank-one approximation of the tensor X r by means of the truncated higher-order SVD (THOSVD) algorithm [48]. Like for a Khatri–Rao product of two vectors, the R columns of the factors A ( n ) can be estimated in parallel.
Noting that n = 1 N λ n a r ( n ) = n = 1 N a r ( n ) for any λ n such that n = 1 N λ n = 1 , one concludes that each estimated column vector a r ( n ) is unique up to a scalar ambiguity. To eliminate these ambiguities, one component of N 1 columns among the N vectors a r ( n ) , n N , must be known.
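The multiple-KRP factorization can be sketched as follows for the noiseless case. This is our own illustrative code: each column is reshaped (row-major, matching the index mapping above) into an $N$th-order tensor, each unit-norm factor is taken as the dominant left singular vector of the corresponding mode-$n$ unfolding (as in the truncated HOSVD), and the product-of-scalings ambiguity is resolved by folding the scalar weight into the first factor.

```python
import numpy as np

def rank1_tensor_approx(X):
    """Rank-one approximation of an N-th order tensor X.

    Each unit-norm factor u_n is the dominant left singular vector of
    the mode-n unfolding; the scalar weight is obtained by contracting
    X with all the (conjugated) factors.
    """
    factors = []
    for n in range(X.ndim):
        Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)   # mode-n unfolding
        U, _, _ = np.linalg.svd(Xn, full_matrices=False)
        factors.append(U[:, 0])
    lam = X                                                 # lam = <X, u_1 o ... o u_N>
    for u in factors:
        lam = np.tensordot(lam, u.conj(), axes=([0], [0]))
    return lam, factors

def multi_krf(Y, dims):
    """Estimate the columns a_r^(n) of a multiple Khatri-Rao product.

    Each column y_r of Y is reshaped into an N-th order tensor and
    approximated by a rank-one tensor; each estimated column is only
    recovered up to the scalar ambiguities discussed in the text.
    """
    cols = []
    for r in range(Y.shape[1]):
        Xr = Y[:, r].reshape(dims)          # row-major reshape of y_r
        lam, us = rank1_tensor_approx(Xr)
        us[0] = lam * us[0]                 # fold the weight into the first factor
        cols.append(us)
    # regroup per factor: A^(n) = [a_1^(n), ..., a_R^(n)]
    return [np.stack([c[n] for c in cols], axis=1) for n in range(len(dims))]
```

On exact rank-one columns, the reconstructed multiple KRP matches the data, while each individual factor column is collinear with the true one, exactly the scalar ambiguity described above.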

Appendix C. KronF Method

We now consider the Kronecker factorization method, denoted as KronF, to estimate the matrix factors of a multiple Kronecker product, denoted as KronP, by minimizing the following LS cost function:
$$\min_{\mathbf{A}^{(n)},\; n = 1, \ldots, N} \left\| \mathbf{Y} - \otimes_{n=1}^{N} \mathbf{A}^{(n)} \right\|_F^2, \quad \text{(A12)}$$
with $\mathbf{A}^{(n)} \in \mathbb{K}^{I_n \times J_n}$, $n = 1, \ldots, N$.
Inspired by the approach proposed in [105] to solve the problem of approximating a given matrix by a simple Kronecker product using an SVD-based rank-one matrix approximation, a generalization to a multiple Kronecker product is obtained by constructing a rank-one tensor from $\mathbf{Y}$, defined as the outer product of the vectorized forms $\mathrm{vec}(\mathbf{A}^{(n)}) \in \mathbb{K}^{J_n I_n}$ of the matrices $\mathbf{A}^{(n)}$:
$$\mathcal{X} = \circ_{n=1}^{N} \mathrm{vec}(\mathbf{A}^{(n)}) \in \mathbb{K}^{J_1 I_1 \times J_2 I_2 \times \cdots \times J_N I_N}. \quad \text{(A13)}$$
The problem of estimating the factors $\mathbf{A}^{(n)}$ can then be solved by minimizing the following LS criterion [33]:
$$\min_{\mathbf{A}^{(1)}, \ldots, \mathbf{A}^{(N)}} \left\| \mathbf{Y} - \otimes_{n=1}^{N} \mathbf{A}^{(n)} \right\|_F^2 = \min_{\mathbf{A}^{(1)}, \ldots, \mathbf{A}^{(N)}} \left\| \mathcal{X} - \circ_{n=1}^{N} \mathrm{vec}(\mathbf{A}^{(n)}) \right\|_F^2. \quad \text{(A14)}$$
This minimization amounts to finding the best rank-one approximation of the tensor $\mathcal{X}$ using the THOSVD method. This approximation gives an estimate of the vectorized forms $\mathrm{vec}(\mathbf{A}^{(n)})$, from which an estimate of the factors $\mathbf{A}^{(n)}$ is easily deduced via a simple unvectorization operation. Since, for $\tilde{\mathbf{A}}^{(n)} = \lambda_n \mathbf{A}^{(n)}$, $n = 1, \ldots, N$, we have $\circ_{n=1}^{N} \mathrm{vec}(\tilde{\mathbf{A}}^{(n)}) = \circ_{n=1}^{N} \mathrm{vec}(\mathbf{A}^{(n)})$ if $\prod_{n=1}^{N} \lambda_n = 1$, we deduce that each estimate of $\mathrm{vec}(\mathbf{A}^{(n)})$ is subject to a scalar scaling ambiguity $\lambda_n$, requiring knowledge of one component to eliminate it.
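For the simple case $N = 2$, this construction reduces to the rearrangement of [105]: the entries of $\mathbf{Y}$ are permuted into a matrix whose dominant SVD triplet yields the two factors. The sketch below is our own illustration (real-valued, with NumPy's row-major vec, which only permutes entries relative to the column-major convention); `kronf2` is an illustrative name, not an API from the paper.

```python
import numpy as np

def kronf2(Y, shape_A, shape_B):
    """Estimate A and B from Y ~ kron(A, B) (Van Loan-Pitsianis rearrangement).

    Y is rearranged into the rank-one matrix vec(A) vec(B)^T (row-major
    vec), whose dominant SVD triplet yields A and B up to a reciprocal
    scaling ambiguity.
    """
    (I1, J1), (I2, J2) = shape_A, shape_B
    # Rm[(i1, j1), (i2, j2)] = Y[i1*I2 + i2, j1*J2 + j2] = A[i1, j1] * B[i2, j2]
    Rm = (Y.reshape(I1, I2, J1, J2)
            .transpose(0, 2, 1, 3)
            .reshape(I1 * J1, I2 * J2))
    U, s, Vh = np.linalg.svd(Rm, full_matrices=False)
    A_hat = np.sqrt(s[0]) * U[:, 0].reshape(I1, J1)
    B_hat = np.sqrt(s[0]) * Vh[0].reshape(I2, J2)
    return A_hat, B_hat
```

For $N > 2$ the same rearrangement produces the $N$th-order rank-one tensor of (A13), and the SVD step is replaced by the THOSVD-based rank-one tensor approximation.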

Appendix D. Complexity of the Receiver Algorithms

The computational complexity of the proposed semi-blind receivers is evaluated using Big-O notation, considering the most computationally expensive operations. For KronF/KRF-based methods, the dominant cost comes from SVD-based rank-1 approximations, while for ALS methods, it is associated with pseudo-inverse computations. The computational complexities of the proposed receivers are summarized in Table A1, assuming $P_S = P_R = P$, $J_S = J_R = J$, $M_S = M_R = M_T = M_D = M$, $N_1 = \cdots = N_Q = N$, and $R_1 = \cdots = R_Q = R$.
It is important to note that, for ALS receivers, the complexity is evaluated for a single iteration. The total computational cost is consequently dependent on the number of iterations required for convergence.
In this analysis, we consider that the computational complexity of the KronF and KRF algorithms, when applied to an $I_1 \times I_2$ matrix, is $O(I_1 I_2)$, corresponding to the computation of a rank-1 SVD. For the ALS algorithms, the complexity of the pseudo-inverse calculation is $O(I_1 I_2^2)$ when $I_1 \geq I_2$, or $O(I_1^2 I_2)$ when $I_1 \leq I_2$.
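As a side note on the rank-1 cost quoted above: the dominant singular triplet of an $I_1 \times I_2$ matrix can be computed by power iteration, at a cost of two matrix-vector products, i.e., $O(I_1 I_2)$, per iteration, so no full SVD is needed. The following is a minimal illustrative sketch (our code, not from the paper):

```python
import numpy as np

def rank1_approx(Y, n_iter=50, seed=0):
    """Dominant singular triplet (sigma, u, v) of Y via power iteration.

    Each iteration costs two matrix-vector products, i.e., O(I1*I2),
    versus O(I1*I2*min(I1, I2)) for a full SVD.
    """
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(Y.shape[1])
    v /= np.linalg.norm(v)
    for _ in range(n_iter):
        u = Y @ v                    # left iterate
        u /= np.linalg.norm(u)
        v = Y.conj().T @ u           # right iterate
        sigma = np.linalg.norm(v)
        v /= sigma
    return sigma, u, v
```

Convergence is geometric in the ratio of the two largest singular values, which is very favorable here since the matrices being approximated are (noisy) rank-one.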
Table A1. Complexity of the receivers.

System/Receiver | Matrix | Dimension | Complexity

Tensor-based codings - AF protocol
TSTF/KronF | $\mathbf{S} \otimes \mathrm{bdiag}_f[\mathbf{H}^{(SRD)}_{M_D P_R J_R \times M_S}(f)]$ | $F(N M_D P_R J_R \times R M_S)$ | $O(F N M^2 P J R)$
 | $\mathrm{bdiag}_f[\mathbf{H}^{(RD)}_{M_D \times M_T}(f) \otimes \mathbf{H}^{(SR)}_{M_S \times M_R}(f)]$ | $F(M_D M_S \times M_T M_R)$ | $O(F M^4)$
TST/KronF | $\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R J_R \times M_S}$ | $N M_D P_R J_R \times R M_S$ | $O(N M^2 P J R)$
 | $\mathbf{H}^{(RD)} \otimes \mathbf{H}^{(SR)T}$ | $M_D M_S \times M_T M_R$ | $O(M^4)$
STSTF/KronF | $\mathbf{S} \otimes \mathrm{bdiag}_f[\mathbf{H}^{(SRD)}_{M_D P_R \times M_S}(f)]$ | $F(N M_D P_R \times R M_S)$ | $O(F N M^2 P R)$
 | $\mathrm{bdiag}_f[\mathbf{H}^{(RD)}_{M_D \times M_T}(f) \otimes \mathbf{H}^{(SR)}_{M_S \times M_R}(f)]$ | $F(M_D M_S \times M_T M_R)$ | $O(F M^4)$
STST/KronF | $\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R \times M_S}$ | $N M_D P_R \times R M_S$ | $O(N M^2 P R)$
 | $\mathbf{H}^{(RD)} \otimes \mathbf{H}^{(SR)T}$ | $M_D M_S \times M_T M_R$ | $O(M^4)$
STST/ALS | $\big( \mathbf{I}_{P_S} \otimes (\hat{\mathbf{H}}_{t-1}^{(RD)} \otimes \mathbf{I}_{P_R}) \mathbf{C}^{(R)}_{M_T P_R \times M_R} \hat{\mathbf{H}}_{t-1}^{(SR)} \big) \mathbf{C}^{(S)}_{P_S M_S \times R}$ | $P_S M_D P_R \times R$ | $O(M P^2 R^2)$
 | $\big( \mathbf{I}_{P_R} \otimes (\mathbf{I}_{P_S} \otimes \hat{\mathbf{S}}_{t}) \mathbf{C}^{(S)}_{P_S R \times M_S} \hat{\mathbf{H}}_{t-1}^{(SR)T} \big) \mathbf{C}^{(R)}_{P_R M_R \times M_T}$ | $P_R P_S N \times M_T$ | $O(N M^2 P^2)$
 | $(\mathbf{I}_{P_S} \otimes \hat{\mathbf{S}}_{t}) (\hat{\mathbf{H}}_{t}^{(RD)} \otimes \mathbf{I}_{J}) \, \mathbf{C}^{(S)}_{P_S R \times M_S} \, \mathbf{C}^{(R)}_{M_T P_R \times M_R}$ | $P_S N M_D P_R \times M_R M_S$ | $O(N M^5 P^2)$
Matrix-based codings - AF protocol
DKRSTF/KRF | $\mathbf{V}^{(S)}_{F N \times M_S} \diamond \mathbf{H}^{(SRD)}_{M_D P_R \times M_S}$ | $F N M_D P_R \times M_S$ | $O(F N M^2 P)$
 | $\mathbf{B} \diamond \mathbf{H}^{(SR)T}$ | $M_D M_S \times M_R$ | $O(M^3)$
SKRST/KRF | $\mathbf{S} \diamond \mathbf{H}^{(SRD)}_{M_D P_R \times M_S}$ | $N M_D P_R \times M_S$ | $O(N M^2 P)$
 | $\mathbf{H}^{(RD)} \diamond \mathbf{H}^{(SR)T}$ | $M_D M_S \times M_R$ | $O(M^3)$
SKRST/ALS | $\mathbf{C}^{(R)} \diamond \big[ (\mathbf{C}^{(S)} \diamond \hat{\mathbf{S}}_{t-1}) \hat{\mathbf{H}}_{t-1}^{(SR)T} \big]$ | $P_R P_S N \times M_R$ | $O(N M^2 P^2)$
 | $\mathbf{C}^{(S)} \diamond \big[ (\hat{\mathbf{H}}_{t}^{(RD)} \diamond \mathbf{C}^{(R)}) \hat{\mathbf{H}}_{t-1}^{(SR)} \big]$ | $P_S M_D P_R \times M_S$ | $O(M^3 P^2)$
 | $(\mathbf{C}^{(S)} \diamond \hat{\mathbf{S}}_{t}) \otimes (\hat{\mathbf{H}}_{t}^{(RD)} \diamond \mathbf{C}^{(R)})$ | $P_S N M_D P_R \times M_S M_R$ | $O(N M^5 P^2)$
Combined codings - DF protocol
STST-MSMKron/KronF | $\big( \otimes_{q=1}^{Q} \mathbf{S}^{(q)} \big) \otimes \mathbf{H}^{(SR)}$ | $N_1 \cdots N_Q M_R \times R_1 \cdots R_Q M_S$ | $O(N^Q M^2 R^Q)$
 | $\big( \otimes_{q=1}^{Q} \hat{\mathbf{S}}^{(q)} \big) \otimes \mathbf{H}^{(RD)}$ | $N_1 \cdots N_Q M_D \times R_1 \cdots R_Q M_S$ | $O(N^Q M^2 R^Q)$
SKRST-MSMKR/KRF | $\big( \diamond_{q=1}^{Q} \mathbf{S}^{(q)} \big) \diamond \mathbf{H}^{(SR)}$ | $N_1 \cdots N_Q M_R \times M_S$ | $O(N^Q M^2)$
 | $\big( \diamond_{q=1}^{Q} \hat{\mathbf{S}}^{(q)} \big) \diamond \mathbf{H}^{(RD)}$ | $N_1 \cdots N_Q M_D \times M_S$ | $O(N^Q M^2)$

References

  1. Boulogeorgos, A.A.A.; Alexiou, A. Performance analysis of reconfigurable intelligent surface-assisted wireless systems and comparison with relaying. IEEE Access 2020, 8, 94463–94483. [Google Scholar] [CrossRef]
  2. Di Renzo, M.; Ntontin, K.; Song, J.; Danufane, F.; Qian, X.; Lazarakis, F.; De Rosny, J.; Phan-Huy, D.T.; Simeone, O.; Zhang, R.; et al. Reconfigurable intelligent surfaces vs. relaying: Differences, similarities, and performance comparison. IEEE Open J. Commun. Soc. 2020, 1, 798–807. [Google Scholar] [CrossRef]
  3. Bjornson, E.; Ozdogan, O.; Larsson, E.G. Intelligent reflecting surface versus decode-and-forward: How large surfaces are needed to beat relaying. IEEE Wireless Commun. Lett. 2020, 9, 244–248. [Google Scholar] [CrossRef]
  4. Ding, Q.; Yang, J.; Luo, Y.; Luo, C. Intelligent reflecting surfaces vs. full-duplex relays: A comparison in the air. IEEE Commun. Lett. 2024, 28, 397–401. [Google Scholar] [CrossRef]
  5. Zheng, B.; Zhang, R. IRS meets relaying: Joint resource allocation and passive beamforming optimization. IEEE Wirel. Commun. Lett. 2021, 10, 2080–2084. [Google Scholar] [CrossRef]
  6. Yildirim, I.; Kilinc, F.; Basar, E.; Alexandropoulos, G.C. Hybrid RIS-empowered reflection and decode-and-forward relaying for coverage extension. IEEE Commun. Lett. 2021, 25, 1692–1696. [Google Scholar] [CrossRef]
  7. Tucker, L.R. Some mathematical notes on three-mode factor analysis. Psychometrika 1966, 31, 279–311. [Google Scholar] [CrossRef]
  8. Harshman, R.A. Foundations of the PARAFAC procedure: Models and conditions for an “explanatory” multimodal factor analysis. UCLA Work. Pap. Phon. 1970, 16, 1–84. [Google Scholar]
  9. Comon, P.; Cardoso, J.F. Eigenvalue decomposition of a cumulant tensor with applications. In Proceedings of the SPIE Conference on Advanced Signal Processing Algorithms, Architectures, and Implementations, San Diego, CA, USA, 8–13 July 1990; pp. 361–372. [Google Scholar]
  10. Bro, R. Parafac. Tutorial and applications. Chemom. Intell. Lab. Syst. 1997, 38, 149–171. [Google Scholar] [CrossRef]
  11. de Lathauwer, L. Signal Processing Based on Multilinear Algebra. Ph.D. Thesis, KUL, Leuven, Belgium, 1997. [Google Scholar]
  12. Sidiropoulos, N.D.; Giannakis, G.B.; Bro, R. Blind PARAFAC receivers for DS-CDMA systems. IEEE Trans. Signal Process. 2000, 48, 810–823. [Google Scholar] [CrossRef]
  13. Vasilescu, M.A.O.; Terzopoulos, D. Multilinear analysis of image ensembles: TensorFaces. In Proceedings of the European Conference on Computer Vision (ECCV’02), Copenhagen, Denmark, 28–31 May 2002; pp. 447–460. [Google Scholar]
  14. Acar, E.; Aykut-Bingol, C.; Bingol, H.; Bro, R.; Yener, B. Multiway analysis of epilepsy tensors. Bioinformatics 2007, 23, i10–i18. [Google Scholar] [CrossRef]
  15. Cong, F.; Lin, Q.H.; Kuang, L.D.; Gong, X.F.; Astikainen, P.; Ristaniemi, T. Tensor decomposition of EEG signals: A brief review. J. Neurosci. Methods 2015, 248, 59–69. [Google Scholar] [CrossRef]
  16. Padhy, S.; Goovaerts, G.; Boussé, M.; De Lathauwer, L.; Van Huffel, S. The Power of Tensor-Based Approaches in Cardiac Applications, Chapter in Biomedical Signal Processing. Advances in Theory, Algorithms and Applications; Naik, G., Ed.; Springer: Berlin/Heidelberg, Germany, 2019; pp. 291–323. [Google Scholar]
  17. Makantasis, K.; Doulamis, A.; Nikitakis, A. Tensor-based classification models for hyperspectral data analysis. IEEE Trans. Geosci. Remote Sens. 2018, 56, 6884–6898. [Google Scholar] [CrossRef]
  18. Chen, X.; Wang, Z.; Wang, K.; Jia, H.; Han, Z.; Tang, Y. Multi-dimensional low-rank with weighted Schatten p-norm minimization for hyperspectral anomaly detection. Remote Sens. 2024, 16, 74. [Google Scholar] [CrossRef]
  19. Tan, H.; Feng, G.; Feng, J.; Wang, W.; Zhang, Y.J.; Li, F. A tensor-based method for missing traffic data completion. Transp. Res. Part Emerg. Technol. 2013, 28, 15–27. [Google Scholar] [CrossRef]
  20. Lyu, C.; Lu, Q.L.; Wu, X.; Antoniou, C. Tucker factorization-based tensor completion for robust traffic data imputation. Transp. Res. Part Emerg. Technol. 2024, 160, 104502. [Google Scholar] [CrossRef]
  21. Chen, P.; Li, F.; Wei, D.; Lu, C. Spatiotemporal traffic data completion with truncated minimax-concave penalty. Transp. Res. Part Emerg. Technol. 2024, 164, 104657. [Google Scholar] [CrossRef]
  22. Bobadilla, J.; Ortega, F.; Hernando, A.; Gutiérrez, A. Recommender systems survey. Knowl.-Based Syst. 2013, 46, 109–132. [Google Scholar] [CrossRef]
  23. Frolov, E.; Oseledets, I. Tensor methods and recommender systems. WIREs Data Min. Knowl. Discov. 2017, 7, e1201. [Google Scholar] [CrossRef]
  24. Favier, G.; Kibangou, A. Tensor-based approaches for nonlinear and multilinear systems modeling and identification. Algorithms 2023, 16, 443. [Google Scholar] [CrossRef]
  25. Morup, M. Applications of tensor (multiway array) factorizations and decompositions in data mining. Wiley Interdiscip. Rev. Data Min. Knowl. Discov. 2011, 1, 24–40. [Google Scholar] [CrossRef]
  26. Lahat, D.; Adah, T.; Jutten, C. Multimodal data fusion: An overview of methods, challenges and prospects. Proc. IEEE 2015, 103, 1449–1477. [Google Scholar] [CrossRef]
  27. Papalexakis, E.; Faloutsos, C.; Sidiropoulos, N. Tensors for data mining and data fusion: Models, applications, and scalable algorithms. SIAM Trans. Intell. Syst. Technol. 2016, 8, 16.1–16.44. [Google Scholar] [CrossRef]
  28. Chatzichristos, C.; Van Eyndhoven, S.; Kofidis, E.; Van Huffel, S. Coupled tensor decompositions for data fusion. In Tensors for Data Processing; Liu, Y., Ed.; Academic Press: Cambridge, MA, USA, 2022; Chapter 10; pp. 341–370. [Google Scholar]
  29. Novikov, A.; Podoprikhin, D.; Osokin, A.; Vetrov, D. Tensorizing neural networks. In Proceedings of the 28th International Conference on Neural Information Processing Systems, Montreal, QC, Canada, 7–12 December 2015; Volume 1, pp. 442–450. [Google Scholar]
  30. Cichocki, A.; Phan, A.H.; Zhao, Q.; Lee, N.; Oseledets, I.; Sugiyama, M.; Mandic, D.P. Tensor networks for dimensionality reduction and large-scale optimization: Part 2-Applications and future perspectives. Found. Trends Mach. Learn. 2017, 9, 431–673. [Google Scholar] [CrossRef]
  31. Cichocki, A.; Mandic, D.; de Lathauwer, L.; Zhou, G.; Zhao, Q.; Caiafa, C.; Phan, A. Tensor decompositions for signal processing applications: From two-way to multiway component analysis. IEEE Signal Process. Mag. 2015, 32, 145–163. [Google Scholar] [CrossRef]
  32. Sidiropoulos, N.; de Lathauwer, L.; Fu, X.; Huang, K.; Papalexakis, E.; Faloutsos, C. Tensor decomposition for signal processing and machine learning. IEEE Trans. Signal Process. 2017, 65, 3551–3582. [Google Scholar] [CrossRef]
  33. Favier, G. Matrix and Tensor Decompositions in Signal Processing; Wiley: Hoboken, NJ, USA, 2021; Volume 2. [Google Scholar]
  34. Hitchcock, F.L. The expression of a tensor or a polyadic as a sum of products. J. Math. Phys. 1927, 6, 164–189. [Google Scholar] [CrossRef]
  35. de Almeida, A.L.; Favier, G.; Mota, J.C.M. A constrained factor decomposition with application to MIMO antenna systems. IEEE Trans. Signal Process. 2008, 56, 2429–2442. [Google Scholar] [CrossRef]
  36. Favier, G.; de Almeida, A.L.F. Overview of constrained PARAFAC models. EURASIP J. Adv. Signal Process. 2014, 2014, 142. [Google Scholar] [CrossRef]
  37. Favier, G.; da Costa, M.; de Almeida, A.; Romano, J. Tensor space-time (TST) coding for MIMO wireless communication systems. Signal Process. 2012, 92, 1079–1092. [Google Scholar] [CrossRef]
  38. Favier, G.; de Almeida, A.L.F. Tensor space-time-frequency coding with semi-blind receivers for MIMO wireless communication systems. IEEE Trans. Signal Process. 2014, 62, 5987–6002. [Google Scholar] [CrossRef]
  39. de Almeida, A.L.; Favier, G. Double Khatri–Rao space-time-frequency coding using semi-blind PARAFAC based receiver. IEEE Signal Process. Lett. 2013, 20, 471–474. [Google Scholar] [CrossRef]
  40. Ximenes, L.R.; Favier, G.; de Almeida, A.L. Semi-blind receivers for non-regenerative cooperative MIMO communications based on nested PARAFAC modeling. IEEE Trans. Signal Process. 2015, 63, 4985–4998. [Google Scholar] [CrossRef]
  41. Favier, G.; Fernandes, C.A.R.; de Almeida, A.L. Nested Tucker tensor decomposition with application to MIMO relay systems using tensor space–time coding (TSTC). Signal Process. 2016, 128, 318–331. [Google Scholar] [CrossRef]
  42. Rocha, D.S.; Fernandes, C.A.R.; Favier, G. MIMO multi-relay systems with tensor space-time coding based on coupled nested Tucker decomposition. Digit. Signal Process. 2019, 89, 170–185. [Google Scholar] [CrossRef]
  43. Rocha, D.S.; Fernandes, C.A.R.; Favier, G. Doubly coupled nested tensor decompositions with application to multirelay multicarrier MIMO communication networks. Digit. Signal Process. 2023, 140, 104143. [Google Scholar] [CrossRef]
  44. Oseledets, I.V. Tensor-train decomposition. SIAM J. Sci. Comput. 2011, 33, 2295–2317. [Google Scholar] [CrossRef]
  45. Favier, G.; Rocha, D.S. Overview of tensor-based cooperative MIMO communication systems—Part 1: Tensor modeling. Entropy 2023, 25, 1181. [Google Scholar] [CrossRef]
  46. Pollock, D.S.G. On Kronecker Products, Tensor Products and Matrix Differential Calculus; Discussion Papers in Economics 11/34; Division of Economics, School of Business, University of Leicester: Leicester, UK, 2011. [Google Scholar]
  47. Carroll, J.D.; Chang, J. Analysis of individual differences in multidimensional scaling via an N-way generalization of Eckart-Young decomposition. Psychometrika 1970, 35, 283–319. [Google Scholar] [CrossRef]
  48. de Lathauwer, L.; de Moor, B.; Vandewalle, J. A multilinear singular value decomposition. SIAM J. Matrix Anal. Appl. 2000, 21, 1253–1278. [Google Scholar] [CrossRef]
  49. Kibangou, A.Y.; Favier, G. Non-iterative solution for PARAFAC with a Toeplitz matrix factor. In Proceedings of the European Signal Processing Conference (EUSIPCO), Glasgow, Scotland, 24–28 August 2009. [Google Scholar]
  50. Kruskal, J.B. Three-way arrays: Rank and uniqueness of trilinear decompositions, with application to arithmetic complexity and statistics. Linear Algebra Appl. 1977, 18, 95–138. [Google Scholar] [CrossRef]
  51. Harshman, R.A.; Lundy, M.E. Uniqueness proof for a family of models sharing features of Tucker’s three-mode factor analysis and PARAFAC/CANDECOMP. Psychometrika 1996, 61, 133–154. [Google Scholar] [CrossRef]
  52. Kibangou, A.Y.; Favier, G. Blind joint identification and equalization of Wiener-Hammerstein communication channels using PARATUCK-2 tensor decomposition. In Proceedings of the European Signal Processing Conference (EUSIPCO), Poznan, Poland, 3–7 September 2007. [Google Scholar]
  53. Ximenes, L.R.; Favier, G.; de Almeida, A.L.; Silva, Y.C. PARAFAC-PARATUCK semi-blind receivers for two-hop cooperative MIMO relay systems. IEEE Trans. Signal Process. 2014, 62, 3604–3615. [Google Scholar] [CrossRef]
  54. Ximenes, L.R.; Favier, G.; de Almeida, A.L. Closed-form semi-blind receiver for MIMO relay systems using double Khatri–Rao space-time coding. IEEE Signal Process. Lett. 2016, 23, 316–320. [Google Scholar] [CrossRef]
  55. Freitas, W., Jr.; Favier, G.; de Almeida, A.L. Generalized Khatri-Rao and Kronecker space-time coding for MIMO relay systems with closed-form semi-blind receivers. Signal Process. 2018, 151, 19–31. [Google Scholar] [CrossRef]
  56. Roemer, F.; Haardt, M. Tensor-based channel estimation and iterative refinements for two-way relaying with multiple antennas and spatial reuse. IEEE Trans. Signal Process. 2010, 58, 5720–5735. [Google Scholar] [CrossRef]
  57. Freitas, W., Jr.; Favier, G.; de Almeida, A.L. Tensor-based joint channel and symbol estimation for two-way MIMO relaying systems. IEEE Signal Process. Lett. 2019, 26, 227–231. [Google Scholar] [CrossRef]
  58. Freitas, W.d.C.; Favier, G.; de Almeida, A.L. Sequential closed-form semiblind receiver for space-time coded multihop relaying systems. IEEE Signal Process. Lett. 2017, 24, 1773–1777. [Google Scholar] [CrossRef]
  59. Han, X.; Ying, J.; Liu, A.; Ma, L. A nested tensor-based receiver employing triple constellation precoding for three-hop cooperative communication systems. Digit. Signal Process. 2023, 133, 103862. [Google Scholar] [CrossRef]
  60. Rong, Y.; Khandaker, R.; Xiang, Y. Channel estimation of dual-hop MIMO relay system via parallel factor analysis. IEEE Trans. Wirel. Commun. 2012, 11, 2224–2233. [Google Scholar] [CrossRef]
  61. Sokal, B.; de Almeida, A.L.; Haardt, M. Semi-blind receivers for MIMO multi-relaying systems via rank-one tensor approximations. Signal Process. 2020, 166, 107254. [Google Scholar] [CrossRef]
  62. Cavalcante, Í.V.; de Almeida, A.L.; Haardt, M. Joint channel estimation for three-hop MIMO relaying systems. IEEE Signal Process. Lett. 2015, 22, 2430–2434. [Google Scholar] [CrossRef]
  63. Du, J.; Han, M.; Jin, L.; Hua, Y.; Li, X. Semi-blind receivers for multi-user massive MIMO relay systems based on block Tucker2-PARAFAC tensor model. IEEE Access 2020, 8, 32170–32186. [Google Scholar] [CrossRef]
  64. Lin, Y.; Matthaiou, M.; You, X. Tensor-based channel estimation for millimeter wave MIMO-OFDM with dual-wideband effects. IEEE Trans. Commun. 2020, 68, 4218–4232. [Google Scholar] [CrossRef]
  65. Lin, Y.; Jin, S.; Matthaiou, M.; You, X. Tensor-based channel estimation for hybrid IRS-assisted MIMO-OFDM. IEEE Trans. Wirel. Commun. 2021, 20, 3770–3784. [Google Scholar] [CrossRef]
  66. Lin, H.; Zhang, G.; Mo, W.; Lan, T.; Zhang, Z.; Ye, S. PARAFAC-Based Channel Estimation for Relay Assisted mmWave Massive MIMO Systems. In Proceedings of the 7th International Conference on Computer and Communications (ICCC), Chengdu, China, 10–13 December 2021; pp. 1869–1874. [Google Scholar]
  67. Du, J.; Han, M.; Chen, Y.; Jin, L.; Gao, F. Tensor-based joint channel estimation and symbol detection for time-varying mmWave massive MIMO systems. IEEE Trans. Signal Process. 2021, 69, 6251–6266. [Google Scholar] [CrossRef]
  68. Wei, L.; Huang, C.; Alexandropoulos, G.C.; Yuen, C. Parallel factor decomposition channel estimation in RIS-assisted multi-user MISO communication. arXiv 2020, arXiv:2001.09413. [Google Scholar]
  69. Wei, L.; Huang, C.; Alexandropoulos, G.C.; Yang, Z.; Yuen, C.; Zhang, Z. Joint channel estimation and signal recovery in RIS-assisted multi-user MISO communications. In Proceedings of the 2021 IEEE Wireless Communications and Networking Conference (WCNC), Nanjing, China, 29 March–1 April 2021; pp. 1–6. [Google Scholar]
  70. Du, J.; Luo, X.; Jin, L.; Gao, F. Robust tensor-based algorithm for UAV-assisted IoT communication systems via nested PARAFAC analysis. IEEE Trans. Signal Process. 2022, 70, 5117–5132. [Google Scholar] [CrossRef]
  71. Li, M.; Luo, X.; Jia, W.; Wang, S. Channel estimation and symbol detection for UAV-RIS assisted IoT systems via tensor decomposition. IEEE Access 2024, 12, 84020–84032. [Google Scholar] [CrossRef]
  72. Du, J.; Ye, S.; Jin, L.; Li, X.; Ngo, H.Q.; Dobre, O. Tensor-based joint channel estimation for multi-way massive MIMO hybrid relay systems. IEEE Trans. Veh. Technol. 2022, 71, 9571–9585. [Google Scholar] [CrossRef]
  73. Rocha, D.S.; Fernandes, C.A.R.; Favier, G. Space-Time-Frequency (STF) MIMO Relaying System with Receiver Based on Coupled Tensor Decompositions. In Proceedings of the 2018 52nd Asilomar Conference on Signals, Systems, and Computers, Pacific Grove, CA, USA, 28–31 October 2018; pp. 328–332. [Google Scholar]
  74. Wang, Q.; Zhang, L.; Li, B.; Zhu, Y. Space-time-frequency coding for MIMO relay system based on tensor decomposition. Radioelectron. Commun. Syst. 2020, 63, 77–87. [Google Scholar] [CrossRef]
  75. Basar, E.; Di Renzo, M.; De Rosny, J.; Debbah, M.; Alouini, M.S.; Zhang, A.R. Wireless communications through reconfigurable intelligent surfaces. IEEE Access 2019, 7, 116753–116773. [Google Scholar] [CrossRef]
  76. Wu, Q.; Zhang, R. Towards smart and reconfigurable environment: Intelligent reflecting surface aided wireless network. IEEE Commun. Mag. 2020, 58, 106–112. [Google Scholar] [CrossRef]
  77. Di Renzo, M.; Zappone, A.; Debbah, M.; Alouini, M.S.; Yuen, C.; De Rosny, J.; Tretyakov, S. Smart radio environments empowered by reconfigurable intelligent surfaces: How it works, state of research, and road ahead. IEEE J. Sel. Areas Commun. 2020, 38, 2450–2525. [Google Scholar] [CrossRef]
  78. Sur, S.N.; Bera, R. Intelligent reflecting surface assisted MIMO communication system: A review. Phys. Commun. 2021, 47, 101386. [Google Scholar] [CrossRef]
  79. Wu, Q.; Zhang, S.; Zheng, B.; You, C.; Zhang, R. Intelligent reflecting surface aided wireless communications: A tutorial. IEEE Trans. Commun. 2021, 69, 3313–3351. [Google Scholar] [CrossRef]
  80. Kaur, R.; Bansal, B.; Majhi, S.; Jain, S.; Huang, C.; Yuen, C. A survey on reconfigurable intelligent surface for physical layer security of next-generation wireless communications. IEEE Open J. Veh. Technol. 2024, 5, 172–199. [Google Scholar] [CrossRef]
  81. He, Z.; Yuan, X. Cascaded channel estimation for large intelligent metasurface assisted massive MIMO. IEEE Wirel. Commun. Lett. 2019, 9, 210–214. [Google Scholar] [CrossRef]
  82. Kang, J.M. Intelligent reflecting surface: Joint optimal training sequence and reflection pattern. IEEE Commun. Lett. 2020, 24, 1784–1788. [Google Scholar] [CrossRef]
  83. You, C.; Zheng, B.; Zhang, R. Channel estimation and passive beamforming for intelligent reflecting surface: Discrete phase shift and progressive refinement. IEEE J. Sel. Areas Commun. 2020, 38, 2604–2620. [Google Scholar] [CrossRef]
  84. Guo, H.; Lau, V.K.N. Cascaded channel estimation for intelligent reflecting surface assisted multiuser MISO systems. arXiv 2021, arXiv:2108.09002v1. [Google Scholar] [CrossRef]
  85. Noh, S.; Yu, H.; Sung, Y. Training signal design for sparse channel estimation in intelligent reflecting surface-assisted millimeter-wave communication. IEEE Trans. Wirel. Commun. 2022, 21, 2399–2413. [Google Scholar] [CrossRef]
  86. Zheng, B.; You, C.; Mei, W.; Zhang, R. A survey on channel estimation and practical passive beamforming design for intelligent reflecting surface-aided wireless communications. IEEE Commun. Surv. Tutor. 2022, 24, 1035–1071. [Google Scholar] [CrossRef]
  87. Dai, M.; Huang, N.; Wu, Y.; Gao, J.; Su, Z. UAV-assisted wireless networks. IEEE Internet Things J. 2023, 10, 4117–4147. [Google Scholar] [CrossRef]
  88. Shakhatreh, H.; Sawalmeh, A.; Al-Fuqaha, A.; Dou, Z.; Almaita, E.; Khalil, I.; Othman, N.S.; Khreishah, A.; Guizani, M. Unmanned aerial vehicles: A survey on civil applications and key research challenges. IEEE Access 2019, 7, 48572–48634. [Google Scholar] [CrossRef]
  89. Zhu, X.; Jiang, C. Integrated satellite-terrestrial networks toward 6G: Architectures, applications, and challenges. IEEE Internet Things J. 2022, 9, 437–461. [Google Scholar] [CrossRef]
  90. Peng, D.; He, D.; Li, Y.; Wang, Z. Integrating terrestrial and satellite multibeam systems toward 6G: Techniques and challenges for interference mitigation. IEEE Wirel. Commun. 2022, 29, 24–31. [Google Scholar] [CrossRef]
  91. Singh, S.K.; Agrawal, K.; Singh, K.; Li, C.P.; Ding, Z. NOMA enhanced hybrid RIS-UAV-assisted full-duplex communication system with imperfect SIC and CSI. IEEE Trans. Commun. 2022, 70, 7609–7627. [Google Scholar] [CrossRef]
  92. Pogaku, A.C.; Do, D.T.; Lu, B.M.; Nguyen, N.D. UAV-assisted RIS for future wireless communications: A survey on optimization and performance analysis. IEEE Access 2022, 10, 16320–16336. [Google Scholar] [CrossRef]
  93. Kim, K.; Kim, J.; Joung, J. A survey on system configurations of integrated sensing and communication (ISAC) systems. In Proceedings of the 2022 13th International Conference on Information and Communication Technology Convergence (ICTC), Jeju Island, Republic of Korea, 19–21 October 2022.
  94. Liu, F.; Cui, Y.; Masouros, C.; Xu, J.; Han, T.X.; Eldar, Y.C.; Buzzi, S. Integrated sensing and communications: Towards dual-functional wireless networks for 6G and beyond. arXiv 2021, arXiv:2108.07165.
  95. Zhang, R.; Cheng, L.; Wang, S.; Lou, Y.; Gao, Y.; Wu, W.; Ng, D.W.K. Integrated sensing and communication with massive MIMO: A unified tensor approach for channel and target parameter estimation. arXiv 2024, arXiv:2401.01738v1.
  96. Zhang, J.A.; Rahman, M.L.; Wu, K.; Huang, X.; Guo, Y.J.; Chen, S.; Yuan, J. Enabling joint communication and radar sensing in mobile networks—A survey. arXiv 2021, arXiv:2006.07559v4.
  97. Couras, M.; de Pinho, P.; Favier, G.; Zarzoso, V.; de Almeida, A.; da Costa, J. Semi-blind receivers based on a coupled nested Tucker-PARAFAC model for dual-polarized MIMO systems using combined TST and MSMKron codings. Digit. Signal Process. 2023, 137, 104043.
  98. Randriambelonoro, S.V.N.; Favier, G.; Boyer, R. Semi-blind joint symbols and multipath parameters estimation of MIMO systems using KRST/MKRSM coding. Digit. Signal Process. 2021, 109, 102908.
  99. Sidiropoulos, N.; Budampati, R. Khatri-Rao space-time codes. IEEE Trans. Signal Process. 2002, 50, 2396–2407.
  100. Han, X.; de Almeida, A.L.; Yang, Z. Channel estimation for MIMO multi-relay systems using a tensor approach. EURASIP J. Adv. Signal Process. 2014, 2014, 163.
  101. Sørensen, M.; De Lathauwer, L. Coupled tensor decompositions for applications in array signal processing. In Proceedings of the 5th IEEE International Workshop on Computational Advances in Multi-Sensor Adaptive Processing (CAMSAP), Saint Martin, French West Indies, 15–18 December 2013; pp. 228–231.
  102. Sørensen, M.; De Lathauwer, L. Coupled canonical polyadic decompositions and (coupled) decompositions in multilinear rank-(Lr,n, Lr,n, 1) terms—Part I: Uniqueness. SIAM J. Matrix Anal. Appl. 2015, 36, 496–522.
  103. Bro, R.; Harshman, R.A.; Sidiropoulos, N.D.; Lundy, M.E. Modeling multi-way data with linearly dependent loadings. J. Chemom. 2009, 23, 324–340.
  104. De Lathauwer, L. Decompositions of a higher-order tensor in block terms—Part II: Definitions and uniqueness. SIAM J. Matrix Anal. Appl. 2008, 30, 1033–1066.
  105. Van Loan, C.F.; Pitsianis, N. Approximation with Kronecker products. In Linear Algebra for Large Scale and Real-Time Applications; Moonen, M.S., Golub, G.H., De Moor, B.L.R., Eds.; Kluwer Academic Publishers: Dordrecht, The Netherlands, 1993; pp. 293–314.
Figure 1. Organization of the paper.
Figure 2. Nested tensor decompositions based on TD and CPD.
Figure 3. TTD of a Pth-order tensor, X K I ̲ P .
Figure 4. Graph of the TTD-4 model for a fourth-order tensor X K I ̲ 4 .
Figure 5. Graph of the GTTD-(2,4,4,2) model for a sixth-order tensor X K I ̲ 6 .
Figure 6. NCPD-4 model as (a) a nesting of two CPD-3 models and (b) a cascade of two CPD-3 models.
Figure 7. NTD-4 model as (a) a particular TTD and (b) a cascade of two TD-(2,3) models.
Figure 8. Graph of the NTD-4 model for a fourth-order tensor X K I ̲ 4 .
Figure 9. Graph of the NTD-6 model for a sixth-order tensor X K I ̲ 6 .
Figure 10. Graph of the NGTD-7 model for a seventh-order tensor X K I ̲ 7 .
Figure 11. Graph of the NGTD-5 model for a fifth-order tensor X K I ̲ 5 .
Figure 12. Two families of TD- and CPD-based decompositions.
Figure 13. Classification of relay systems according to the coding scheme and tensor model.
Figure 14. One-way, two-hop cooperative system.
Figure 15. Tucker train model of a two-hop relay system using TSTF codings.
Figure 16. Tucker train model of a two-hop relay system using TST codings.
Figure 17. NCPD-5 model for the DKRSTF system as a cascade of three CPD-3 models.
Figure 18. NCPD-4 model for the SKRST system.
Figure 19. Plan of simulations for performance comparison.
Figure 20. SER comparison with different receivers for STST and SKRST.
Figure 21. Comparison of (a) computation time for ZF, KronF/KRF, and ALS receivers and (b) number of iterations for convergence of ALS receivers for STST and SKRST.
Figure 22. NMSE of estimated channels with the KronF/KRF and ALS receivers for STST and SKRST: (a) H ^ ( S R ) and (b) H ^ ( R D ) .
Figure 23. Impact of time-spreading lengths with ZF receivers of STST and SKRST.
Figure 24. Impact of numbers of antennas with ZF receivers of (a) SKRST and (b) STST.
Figure 25. SER comparison for the DKRSTF, STSTF, and TSTF systems with M R M T .
Figure 26. Impact of the number Q of symbol matrices in combined codings with ZF receivers.
Figure 27. Impact of AF/DF protocols on SER performance of STST and SKRST.
Figure 28. Impact of AF/DF protocols on NMSE of estimated channels for STST and SKRST: (a) H ^ ( S R ) and (b) H ^ ( R D ) .
Figure 29. SER comparison for all considered relay systems.
Figure 30. NMSE of estimated channels for all considered relay systems: (a) H ^ ( S R ) and (b) H ^ ( R D ) .
Figure 31. Comparison of considered relay systems in terms of (a) NMSE of reconstructed received signals and (b) computation time.
Table 1. Notation.
Symbols | Definitions
K = R or C Set of real or complex numbers
N { 1 , , N } Set of first N integers
i ̲ N { i 1 , , i N } Set of N indices
I ̲ N I 1 × × I N Size of an Nth-order tensor
a, a , A , A Scalar, column vector, matrix, tensor  
a i ̲ N = a i 1 , i 2 , , i N or [ A ] i 1 , i 2 , , i N ( i 1 , i 2 , , i N ) -th element of A K I ̲ N  
A T , A * , A H Transpose, complex conjugate, Hermitian transpose of A  
A Moore–Penrose pseudo-inverse of A  
A i , ( A , j ) i-th (j-th) row (column) of A K I × J  
I I N Identity matrix of size I N × I N  
e n ( N ) nth canonical basis vector of the Euclidean space R N  
vec ( · ) Vectorization operator  
diag ( · ) Diagonalization operator that forms a diagonal matrix from its vector argument  
D i ( A ) = diag ( A i , ) Diagonal matrix whose diagonal entries are the elements of the i-th row of A K I × J
bdiag i ( A i , , ) Block-diagonal matrix whose diagonal blocks are horizontal slices * of A K I × J × K
· , · Inner product  
· 2 , · F Euclidean and Frobenius norms  
Outer product  
Hadamard product  
Khatri–Rao product  
Kronecker product  
× n Mode-n product  
× p n Modes- ( p , n ) product  
* See Table 4 for the definition of horizontal slices.
Table 2. Notation for sets of indices and dimensions.
i ̲ P { i 1 , , i P }   ;   j ̲ N { j 1 , , j N }
I ̲ P { I 1 , , I P }   ;   J ̲ N { J 1 , , J N }
I ̲ P I 1 × × I P   ;   J ̲ N J 1 × × J N
Table 3. Vector and matrix Kronecker products and matrix product using the index convention.
u K I , v K J
u v = ( u i e i ( I ) ) ( v j e j ( J ) ) = u i v j e i j K I J
u v T = ( u i e i ( I ) ) v j ( e j ( J ) ) T = u i v j e i j K I × J
A K I × J , B K J × K , C K K × M
A B = i = 1 I k = 1 K ( j = 1 J a i j b j k ) e i k = a i j b j k e i k K I × K
A C = ( a i j e i j ) ( c k m e k m ) = a i j c k m e i k j m K I K × J M
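The index-convention identities of Table 3 can be checked numerically. The following NumPy sketch (dimensions chosen arbitrarily for illustration) verifies the entry formulas for u ⊗ v, u v^T, and A ⊗ C:

```python
import numpy as np

# Minimal numerical check of the Kronecker-product identities of Table 3,
# with small random vectors/matrices (illustrative dimensions only).
rng = np.random.default_rng(0)
I, J, K, M = 2, 3, 4, 5

u = rng.standard_normal(I)
v = rng.standard_normal(J)

# u ⊗ v is a length-IJ vector: entry (i*J + j) equals u_i v_j.
uv = np.kron(u, v)
assert uv.shape == (I * J,)
assert np.isclose(uv[1 * J + 2], u[1] * v[2])

# u v^T is the I x J outer product with (i, j) entry u_i v_j.
outer = np.outer(u, v)
assert np.allclose(outer, u[:, None] * v[None, :])

A = rng.standard_normal((I, J))
C = rng.standard_normal((K, M))

# A ⊗ C is IK x JM with block (i, j) equal to a_ij * C.
AC = np.kron(A, C)
assert AC.shape == (I * K, J * M)
i, j, k, m = 1, 2, 3, 4
assert np.isclose(AC[i * K + k, j * M + m], A[i, j] * C[k, m])
```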
Table 4. Vector and matrix slices for a third-order tensor, X K I × J × K .
Slices | Definitions | Dimensions
Vectors (fibers)
column x , j , k I
row x i , , k J
tube x i , j , K
Matrices (matrix slices)
frontal X , , k I × J
lateral X , j , K × I
horizontal X i , , J × K
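The fiber and slice definitions of Table 4 map directly onto NumPy axis indexing. A short sketch (illustrative sizes) checking the dimensions listed in the table:

```python
import numpy as np

# Fibers and slices of a third-order tensor X of size I x J x K (Table 4),
# expressed with NumPy indexing; sizes are illustrative.
I, J, K = 3, 4, 5
X = np.arange(I * J * K, dtype=float).reshape(I, J, K)

# Fibers (vectors): fix two indices, vary the third.
col_fiber = X[:, 1, 2]      # x_{., j, k}, length I
row_fiber = X[0, :, 2]      # x_{i, ., k}, length J
tube_fiber = X[0, 1, :]     # x_{i, j, .}, length K
assert col_fiber.shape == (I,)
assert row_fiber.shape == (J,)
assert tube_fiber.shape == (K,)

# Matrix slices: fix one index.
frontal = X[:, :, 2]        # X_{. . k}, I x J
lateral = X[:, 1, :].T      # X_{. j .}, K x I (transposed to match Table 4)
horizontal = X[0, :, :]     # X_{i . .}, J x K
assert frontal.shape == (I, J)
assert lateral.shape == (K, I)
assert horizontal.shape == (J, K)
```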
Table 5. Definitions of some basic operations.
Vectors/Matrices/Tensors | Operations | Definitions
u ( p ) K I p , p P X = p = 1 P u ( p ) x i ̲ P = p = 1 P u i p ( p )
X K I ̲ P , A K J × I p Y = X × p A y i 1 , , i p 1 , j , i p + 1 , , i P = i p = 1 I p a j , i p x i ̲ P = a j , i p x i ̲ P
X K I ̲ P , A ( p ) K J p × I p , p P Y = X × p = 1 P A ( p ) X × 1 A ( 1 ) × 2 × P A ( P ) y j ̲ P = i 1 = 1 I 1 i P = 1 I P p = 1 P a j p , i p ( p ) x i ̲ P = p = 1 P a j p , i p ( p ) x i ̲ P
X K I ̲ P , Y K J ̲ N Z = X × p n Y z i 1 , , i p 1 , i p + 1 , , i P , j 1 , , j n 1 , j n + 1 , , j N =
with   I p = J n = K k = 1 K a i 1 , , i p 1 , k , i p + 1 , , i P b j 1 , , j n 1 , k , j n + 1 , , j N
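As an illustration of the mode-p product defined in Table 5, the sketch below implements Y = X ×_p A with np.tensordot (the helper name mode_n_product is ours, not from the paper) and checks one entry against the scalar definition:

```python
import numpy as np

# Mode-p product Y = X ×_p A (Table 5): A (J x I_p) acts on mode p of X,
# contracting the index i_p. Illustrative sketch only.
def mode_n_product(X, A, p):
    # tensordot contracts A's second axis with X's p-th axis and puts the
    # new axis first; move it back to position p.
    Y = np.tensordot(A, X, axes=(1, p))
    return np.moveaxis(Y, 0, p)

rng = np.random.default_rng(1)
X = rng.standard_normal((3, 4, 5))
A = rng.standard_normal((6, 4))
Y = mode_n_product(X, A, 1)
assert Y.shape == (3, 6, 5)

# Entry-wise check of y_{i1, j, i3} = sum_{i2} a_{j, i2} x_{i1, i2, i3}.
assert np.isclose(Y[2, 3, 4], sum(A[3, i2] * X[2, i2, 4] for i2 in range(4)))
```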
Table 6. TD of an Nth-order and third-order tensor.
Nth-Order Tensor | Third-Order Tensor
Tensors X K I ̲ N X K I × J × K
Core tensors G K R ̲ N G K P × Q × S
Matrix factors A ( n ) K I n × R n A K I × P , B K J × Q , C K K × S
Scalar writing x i ̲ N = r 1 = 1 R 1 r N = 1 R N g r 1 , , r N n = 1 N a i n , r n ( n ) x i , j , k = p = 1 P q = 1 Q s = 1 S g p q s a i p b j q c k s
With mode-n products X = G × n = 1 N A ( n ) X = G × 1 A × 2 B × 3 C
With outer products X = r 1 = 1 R 1 r N = 1 R N g r 1 , , r N n = 1 N A . r n ( n ) X = p = 1 P q = 1 Q s = 1 S g p q s A . p B . q C . s
X I J × K = ( A B ) G P Q × S C T
Matrix unfoldings X S 1 ; S 2 = n S 1 A ( n ) G S 1 ; S 2 n S 2 A ( n ) T X J K × I = ( B C ) G Q S × P A T
X K I × J = ( C A ) G S P × Q B T
Table 7. CPD of an Nth-order and third-order tensor.
Nth-Order Tensor | Third-Order Tensor
Tensors X K I ̲ N X K I × J × K
Core tensors I N , R R R × R × . . . × R I 3 , R R R × R × R
Matrix factors A ( n ) K I n × R A K I × R , B K J × R , C K K × R
Scalar writing x i ̲ N = r R n = 1 N a i n , r ( n ) x i , j , k = r = 1 R a i r b j r c k r
With mode-n products X = I N , R × n = 1 N A ( n ) X = I 3 , R × 1 A × 2 B × 3 C
With outer products X = r = 1 R n = 1 N A . r ( n ) X = r = 1 R A . r B . r C . r
X I J × K = ( A B ) C T
Matrix unfoldings X S 1 ; S 2 = n S 1 A ( n ) n S 2 A ( n ) T X J K × I = ( B C ) A T
X K I × J = ( C A ) B T
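The CPD unfolding X_{JK×I} = (B ⋄ C) A^T of Table 7 can be verified numerically. In the sketch below, khatri_rao is a minimal column-wise Kronecker helper of our own, and the dimensions are illustrative:

```python
import numpy as np

def khatri_rao(B, C):
    # Column-wise Kronecker product: (B ⋄ C)[:, r] = kron(B[:, r], C[:, r]).
    J, R = B.shape
    K, _ = C.shape
    return (B[:, None, :] * C[None, :, :]).reshape(J * K, R)

# Build a rank-R CPD tensor x_{ijk} = sum_r a_{ir} b_{jr} c_{kr} and check
# the unfolding X_{JK x I} = (B ⋄ C) A^T from Table 7.
rng = np.random.default_rng(2)
I, J, K, R = 3, 4, 5, 2
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
C = rng.standard_normal((K, R))

X = np.einsum('ir,jr,kr->ijk', A, B, C)

# Row index of the unfolding combines (j, k) with j as the major index.
X_JK_I = X.transpose(1, 2, 0).reshape(J * K, I)
assert np.allclose(X_JK_I, khatri_rao(B, C) @ A.T)
```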
Table 8. Tucker- ( N 1 , N ) decomposition and special cases for N = 3 .
X K I ̲ N
TD- ( N 1 , N ) model,  N N 1
A ( n ) K I n × R n ,  for  n N 1 A ( n ) = I I n ,  for  n = N 1 + 1 , , N G K R 1 × × R N 1 × I N 1 + 1 × × I N
x i ̲ N = r 1 = 1 R 1 r N 1 = 1 R N 1 g r 1 , , r N 1 , i N 1 + 1 , , i N n = 1 N 1 a i n , r n ( n )
X = G × 1 A ( 1 ) × 2 × N 1 A ( N 1 ) × N 1 + 1 I I N 1 + 1 × N I I N = G × n = 1 N 1 A ( n )
Special cases for N = 3
X K I × J × K
Tucker ( 2 , 3 ) model
G K P × Q × K A K I × P , B K J × Q , C = I K
x i j k = p = 1 P q = 1 Q g p q k a i p b j q X = G × 1 A × 2 B
Tucker ( 1 , 3 ) model
A K I × P , B = I J , C = I K
x i j k = p = 1 P g p j k a i p X = G × 1 A
Table 9. Generalized Tucker models.
GTD-(2,4) model
X K I ̲ 4
G K R 1 × R 2 × I 3 × I 4 , A K I 1 × R 1 × I 3 , B K I 2 × R 2
x i ̲ 4 = r 1 = 1 R 1 r 2 = 1 R 2 g r 1 , r 2 , i 3 , i 4 a i 1 , r 1 , i 3 b i 2 , r 2 , X = G × 1 2 A × 2 B
Table 10. TTD for a Pth-order tensor, X K I ̲ P .
X K I ̲ P
G ( 1 ) K I 1 × R 1 ; G ( P ) K R P 1 × I P ; G ( p ) K R p 1 × I p × R p , p { 2 , , P 1 }
x i ̲ P = r ̲ P 1 = 1 ̲ R ̲ P 1 p = 1 P g r p 1 , i p , r p ( p ) with g r 0 , i 1 , r 1 ( 1 ) = g i 1 , r 1 ( 1 )     and     g r P 1 , i P , r P ( P ) = g r P 1 , i P ( P )
x i ̲ P = g i 1 , ( 1 ) G , i 2 , ( 2 ) G , i 3 , ( 3 ) G , i P 1 , ( P 1 ) g , i P ( P )
X = G ( 1 ) × 2 1 G ( 2 ) × 3 1 G ( 3 ) × 4 1 × P 1 1 G ( P 1 ) × P 1 G ( P )
X = r 1 = 1 R 1 r P 1 = 1 R P 1 g , r 1 ( 1 ) g r 1 , , r 2 ( 2 ) g r P 2 , , r P 1 ( P 1 ) g r P 1 , ( P ) T
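The TTD of Table 10 is easy to check for P = 4: contracting the train of cores with einsum must match the scalar sum over r1, r2, r3. A minimal sketch with illustrative dimensions and ranks:

```python
import numpy as np

# TTD of a fourth-order tensor (Table 10): cores G1 (I1 x R1),
# G2 (R1 x I2 x R2), G3 (R2 x I3 x R3), G4 (R3 x I4); sizes illustrative.
rng = np.random.default_rng(3)
I1, I2, I3, I4 = 2, 3, 4, 2
R1, R2, R3 = 2, 3, 2
G1 = rng.standard_normal((I1, R1))
G2 = rng.standard_normal((R1, I2, R2))
G3 = rng.standard_normal((R2, I3, R3))
G4 = rng.standard_normal((R3, I4))

# Contract along the train bond indices r1, r2, r3.
X = np.einsum('ia,ajb,bkc,cl->ijkl', G1, G2, G3, G4)
assert X.shape == (I1, I2, I3, I4)

# Scalar check: x_{i1 i2 i3 i4} = g^(1)_{i1,.} G^(2)_{.,i2,.} G^(3)_{.,i3,.} g^(4)_{.,i4}
i1, i2, i3, i4 = 1, 2, 3, 0
assert np.isclose(X[i1, i2, i3, i4],
                  G1[i1] @ G2[:, i2, :] @ G3[:, i3, :] @ G4[:, i4])
```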
Table 11. Parametric complexity of CPD, TD, and TTD.
Models | Notations | Element x i ̲ N
CPD A ( 1 ) , , A ( N ) ; R r = 1 R n = 1 N a i n , r ( n )
TD G ; A ( 1 ) , , A ( N ) ; R 1 , , R N r 1 = 1 R 1 r N = 1 R N g r 1 , , r N n = 1 N a i n , r n ( n )
TTD G ( 1 ) , G ( 2 ) , , G ( N 1 ) , G ( N ) ; R 1 , , R N 1 r 1 = 1 R 1 r N 1 = 1 R N 1 n = 1 N g r n 1 , i n , r n ( n ) , r 0 = r N
Models | Parameters | Complexity
CPD A ( n ) K I n × R , n N O ( N I R )
TD G K R ̲ N ; A ( n ) K I n × R n , n N O ( N I R + R N )
G ( 1 ) K I 1 × R 1 , G ( N ) K R N 1 × I N
TTD G ( n ) K R n 1 × I n × R n , n { 2 , 3 , , N 1 } O 2 I R + ( N 2 ) I R 2
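To make the complexity orders of Table 11 concrete, here is a small sketch counting stored parameters for an order-N tensor with all dimensions equal to I and all ranks equal to R (the values N = 6, I = 10, R = 4 are illustrative):

```python
# Parameter counts behind the complexity orders of Table 11, for an order-N
# tensor with all dimensions I and all ranks R (illustrative values).
N, I, R = 6, 10, 4

cpd_params = N * I * R                          # N factors of size I x R
td_params = N * I * R + R ** N                  # N factors + dense core
ttd_params = 2 * I * R + (N - 2) * I * R ** 2   # two boundary + N-2 interior cores

# The dense Tucker core R^N dominates TD, while TTD stays linear in N.
print(f"CPD: {cpd_params}, TD: {td_params}, TTD: {ttd_params}")
# prints: CPD: 240, TD: 4336, TTD: 720
```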
Table 12. TTD, NCPD, and NTD models for a fourth-order tensor.
X K I ̲ 4
TTD-4 model
G ( 1 ) K I 1 × R 1 ; G ( 4 ) K R 3 × I 4 ; G ( p ) K R p 1 × I p × R p , p { 2 , 3 }
x i ̲ 4 = r 1 = 1 R 1 r 2 = 1 R 2 r 3 = 1 R 3 g i 1 , r 1 ( 1 ) g r 1 , i 2 , r 2 ( 2 ) g r 2 , i 3 , r 3 ( 3 ) g r 3 , i 4 ( 4 )
NCPD-4 model
A ( 1 ) K I 1 × R 1 ; B ( 1 ) K I 2 × R 1 ; G K R 1 × R 2 ; A ( 2 ) K I 3 × R 2 ; B ( 2 ) K I 4 × R 2
x i ̲ 4 = r 1 = 1 R 1 r 2 = 1 R 2 a i 1 , r 1 ( 1 ) b i 2 , r 1 ( 1 ) g r 1 , r 2 a i 3 , r 2 ( 2 ) b i 4 , r 2 ( 2 )
NTD-4 model
A ( 1 ) K I 1 × R 1 ; G ( 1 ) K R 1 × I 2 × R 2 ; U K R 2 × R 3 ; G ( 2 ) K R 3 × I 3 × R 4 ; A ( 2 ) K I 4 × R 4
x i ̲ 4 = r 1 = 1 R 1 r 2 = 1 R 2 r 3 = 1 R 3 r 4 = 1 R 4 a i 1 , r 1 ( 1 ) g r 1 , i 2 , r 2 ( 1 ) u r 2 , r 3 g r 3 , i 3 , r 4 ( 2 ) a i 4 , r 4 ( 2 )
Table 13. Matrix unfoldings of CPD models.
CPD model of X ( 1 ) = A ( 1 ) , B ( 1 ) , G T ; R 1 K I 1 × I 2 × R 2
X I 1 I 2 × R 2 ( 1 ) = ( A ( 1 ) B ( 1 ) ) G
X I 1 R 2 × I 2 ( 1 ) = ( A ( 1 ) G T ) ( B ( 1 ) ) T
CPD model of X ( 2 ) = G , A ( 2 ) , B ( 2 ) ; R 2 K R 1 × I 3 × I 4
X I 3 I 4 × R 1 ( 2 ) = ( A ( 2 ) B ( 2 ) ) G T
X I 4 R 1 × I 3 ( 2 ) = ( B ( 2 ) G ) ( A ( 2 ) ) T
CPD model of X c 1 = A ( 1 ) , B ( 1 ) , X I 3 I 4 × R 1 ( 2 ) ; R 1 K I 1 × I 2 × I 3 I 4
X I 1 I 3 I 4 × I 2 = ( A ( 1 ) X I 3 I 4 × R 1 ( 2 ) ) ( B ( 1 ) ) T
X I 2 I 3 I 4 × I 1 = B ( 1 ) X I 3 I 4 × R 1 ( 2 ) A ( 1 ) T
X I 1 I 2 × I 3 I 4 = ( A ( 1 ) B ( 1 ) ) X R 1 × I 3 I 4 ( 2 )
CPD model of X c 2 = X I 1 I 2 × R 2 ( 1 ) , A ( 2 ) , B ( 2 ) ; R 2 K I 1 I 2 × I 3 × I 4
X I 3 I 1 I 2 × I 4 = A ( 2 ) X I 1 I 2 × R 2 ( 1 ) B ( 2 ) T
X I 4 I 1 I 2 × I 3 = ( B ( 2 ) X I 1 I 2 × R 2 ( 1 ) ) ( A ( 2 ) ) T
X I 1 I 2 × I 3 I 4 = X I 1 I 2 × R 2 ( 1 ) ( A ( 2 ) B ( 2 ) ) T
Table 14. Matrix unfoldings of the NCPD-4 model.
NCPD-4 model of X = A ( 1 ) , B ( 1 ) , G , A ( 2 ) , B ( 2 ) ; R 1 , R 2 K I ̲ 4
X I 1 I 3 I 4 × I 2 = ( A ( 1 ) X I 3 I 4 × R 1 ( 2 ) ) ( B ( 1 ) ) T = A ( 1 ) ( A ( 2 ) B ( 2 ) ) G T ( B ( 1 ) ) T
 
X I 2 I 3 I 4 × I 1 = ( B ( 1 ) X I 3 I 4 × R 1 ( 2 ) ) ( A ( 1 ) ) T = B ( 1 ) ( A ( 2 ) B ( 2 ) ) G T ( A ( 1 ) ) T
 
X I 3 I 1 I 2 × I 4 = ( A ( 2 ) X I 1 I 2 × R 2 ( 1 ) ) ( B ( 2 ) ) T = A ( 2 ) ( A ( 1 ) B ( 1 ) ) G ( B ( 2 ) ) T
 
X I 4 I 1 I 2 × I 3 = ( B ( 2 ) X I 1 I 2 × R 2 ( 1 ) ) ( A ( 2 ) ) T = B ( 2 ) ( A ( 1 ) B ( 1 ) ) G ( A ( 2 ) ) T
 
X I 1 I 2 × I 3 I 4 = ( A ( 1 ) B ( 1 ) ) X R 1 × I 3 I 4 ( 2 ) = X I 1 I 2 × R 2 ( 1 ) ( A ( 2 ) B ( 2 ) ) T = A ( 1 ) B ( 1 ) G A ( 2 ) B ( 2 ) T
Table 15. Matrix unfoldings of TD models.
TD model of X ( 1 ) = G ( 1 ) ; A ( 1 ) , I I 2 , U T ; R 1 , R 2 K I 1 × I 2 × R 3
X I 1 I 2 × R 3 ( 1 ) = ( A ( 1 ) I I 2 ) G R 1 I 2 × R 2 ( 1 ) U
X I 1 R 3 × I 2 ( 1 ) = ( A ( 1 ) U T ) G R 1 R 2 × I 2 ( 1 )
TD model of X ( 2 ) = G ( 2 ) ; U , I I 3 , A ( 2 ) ; R 3 , R 4 K R 2 × I 3 × I 4
X I 3 I 4 × R 2 ( 2 ) = ( I I 3 A ( 2 ) ) G I 3 R 4 × R 3 ( 2 ) U T
X I 4 R 2 × I 3 ( 2 ) = ( A ( 2 ) U ) G R 4 R 3 × I 3 ( 2 )
TD model of X c 1 = G ( 1 ) ; A ( 1 ) , I I 2 , X I 3 I 4 × R 2 ( 2 ) ; R 1 , R 2 K I 1 × I 2 × I 3 I 4
X I 2 I 3 I 4 × I 1 = [ I I 2 X I 3 I 4 × R 2 ( 2 ) ] G I 2 R 2 × R 1 ( 1 ) ( A ( 1 ) ) T
X I 1 I 3 I 4 × I 2 = ( A ( 1 ) X I 3 I 4 × R 2 ( 2 ) ) G R 1 R 2 × I 2 ( 1 )
X I 1 I 2 × I 3 I 4 = ( A ( 1 ) I I 2 ) G R 1 I 2 × R 2 ( 1 ) X R 2 × I 3 I 4 ( 2 ) = ( A ( 1 ) I I 2 ) G R 1 I 2 × R 2 ( 1 ) U G R 3 × I 3 R 4 ( 2 ) ( I I 3 A ( 2 ) ) T
TD model of X c 2 = G ( 2 ) ; X I 1 I 2 × R 3 ( 1 ) , I I 3 , A ( 2 ) ; R 3 , R 4 K I 1 I 2 × I 3 × I 4
X I 3 I 1 I 2 × I 4 = [ I I 3 X I 1 I 2 × R 3 ( 1 ) ] G I 3 R 3 × R 4 ( 2 ) ( A ( 2 ) ) T
X I 4 I 1 I 2 × I 3 = ( A ( 2 ) X I 1 I 2 × R 3 ( 1 ) ) G R 4 R 3 × I 3 ( 2 )
X I 1 I 2 × I 3 I 4 = X I 1 I 2 × R 3 ( 1 ) G R 3 × I 3 R 4 ( 2 ) ( I I 3 A ( 2 ) ) T = ( A ( 1 ) I I 2 ) G R 1 I 2 × R 2 ( 1 ) U G R 3 × I 3 R 4 ( 2 ) ( I I 3 A ( 2 ) ) T
Table 16. Unfoldings of the NTD-4 model.
NTD-4 model of X = A ( 1 ) , G ( 1 ) , U , G ( 2 ) , A ( 2 ) ; R 1 , R 2 , R 3 , R 4 K I ̲ 4  
A ( 1 ) K I 1 × R 1 ; G ( 1 ) K R 1 × I 2 × R 2 ; U K R 2 × R 3 ; G ( 2 ) K R 3 × I 3 × R 4 ; A ( 2 ) K I 4 × R 4
X I 3 I 1 I 2 × I 4 = [ I I 3 ( A ( 1 ) I I 2 ) G R 1 I 2 × R 2 ( 1 ) U ] G I 3 R 3 × R 4 ( 2 ) ( A ( 2 ) ) T
X I 4 I 1 I 2 × I 3 = [ A ( 2 ) ( A ( 1 ) I I 2 ) G R 1 I 2 × R 2 ( 1 ) U ] G R 4 R 3 × I 3 ( 2 )
X I 2 I 3 I 4 × I 1 = [ I I 2 ( I I 3 A ( 2 ) ) G I 3 R 4 × R 3 ( 2 ) U T ] G I 2 R 2 × R 1 ( 1 ) ( A ( 1 ) ) T
X I 1 I 3 I 4 × I 2 = [ A ( 1 ) ( I I 3 A ( 2 ) ) G I 3 R 4 × R 3 ( 2 ) U T ] G R 1 R 2 × I 2 ( 1 )
x I 3 I 4 I 1 I 2 = [ ( I I 3 A ( 2 ) ) G I 3 R 4 × R 3 ( 2 ) ( A ( 1 ) I I 2 ) G R 1 I 2 × R 2 ( 1 ) ] vec ( U )
Table 17. NTD-6 and NGTD-7 models.
X K I ̲ 6
NTD-6 model  
A ( 1 ) K I 1 × R 1 ; G ( 1 ) K R 1 × I 2 × I 3 × R 2 ; U K R 2 × R 3 ; G ( 2 ) K R 3 × I 4 × I 5 × R 4 ; A ( 2 ) K I 6 × R 4
x i ̲ 6 = r 1 = 1 R 1 r 2 = 1 R 2 r 3 = 1 R 3 r 4 = 1 R 4 a i 1 , r 1 ( 1 ) g r 1 , i 2 , i 3 , r 2 ( 1 ) u r 2 , r 3 g r 3 , i 4 , i 5 , r 4 ( 2 ) a i 6 , r 4 ( 2 )
X K I ̲ 7
NGTD-7 model  
A ( 1 ) K I 1 × I 2 × R 1 ; G ( 1 ) K R 1 × I 2 × I 3 × I 4 × R 2 ; U K R 2 × I 2 × R 3 ; G ( 2 ) K R 3 × I 2 × I 5 × I 6 × R 4 ; A ( 2 ) K I 7 × R 4
x i ̲ 7 = r 1 = 1 R 1 r 2 = 1 R 2 r 3 = 1 R 3 r 4 = 1 R 4 a i 1 , i 2 , r 1 ( 1 ) g r 1 , i 2 , i 3 , i 4 , r 2 ( 1 ) u r 2 , i 2 , r 3 g r 3 , i 2 , i 5 , i 6 , r 4 ( 2 ) a i 7 , r 4 ( 2 )
Table 18. Closed-form algorithms to estimate the parameters of a NCPD-4 model.
Models | Closed-Form Algorithm 1 | Closed-Form Algorithm 2
NCPD-4 B ( 2 ) X I 1 I 2 × R 2 ( 1 ) = X I 4 I 1 I 2 × I 3 [ ( A ( 2 ) ) T ] A ( 1 ) X I 3 I 4 × R 1 ( 2 ) = X I 1 I 3 I 4 × I 2 [ ( B ( 1 ) ) T ]
KRF ( B ^ ( 2 ) , X ^ I 1 I 2 × R 2 ( 1 ) ) KRF ( A ^ ( 1 ) , X ^ I 3 I 4 × R 1 ( 2 ) )
Reshaping : X ^ I 1 I 2 × R 2 ( 1 ) X ^ I 1 R 2 × I 2 ( 1 ) Reshaping : X ^ I 3 I 4 × R 1 ( 2 ) X ^ I 4 R 1 × I 3 ( 2 )
A ( 1 ) G T = X ^ I 1 R 2 × I 2 ( 1 ) [ ( B ( 1 ) ) T ] B ( 2 ) G = X ^ I 4 R 1 × I 3 ( 2 ) [ ( A ( 2 ) ) T ]
KRF ( A ^ ( 1 ) , G ^ ) KRF ( B ^ ( 2 ) , G ^ )
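The KRF steps of Table 18 rest on the fact that each column of a Khatri-Rao product A ⋄ B, reshaped as an I × J matrix, is rank-1. A minimal sketch of this rank-1-per-column factorization (the krf helper is our own; noiseless case, so the estimates are exact up to one scalar per column):

```python
import numpy as np

# KRF: given Z = A ⋄ B (A: I x R, B: J x R), each column Z[:, r] reshaped to
# I x J equals the rank-1 matrix A[:, r] B[:, r]^T, so a truncated SVD
# recovers both columns up to the usual scalar ambiguity.
def krf(Z, I, J):
    R = Z.shape[1]
    A = np.zeros((I, R))
    B = np.zeros((J, R))
    for r in range(R):
        M = Z[:, r].reshape(I, J)          # = a_r b_r^T
        U, s, Vt = np.linalg.svd(M)
        A[:, r] = np.sqrt(s[0]) * U[:, 0]
        B[:, r] = np.sqrt(s[0]) * Vt[0, :]
    return A, B

rng = np.random.default_rng(4)
I, J, R = 4, 5, 3
A = rng.standard_normal((I, R))
B = rng.standard_normal((J, R))
Z = (A[:, None, :] * B[None, :, :]).reshape(I * J, R)   # A ⋄ B

A_hat, B_hat = krf(Z, I, J)
# The Khatri-Rao product of the estimates reproduces Z exactly.
Z_hat = (A_hat[:, None, :] * B_hat[None, :, :]).reshape(I * J, R)
assert np.allclose(Z, Z_hat)
```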
Table 19. Closed-form algorithms to estimate the parameters of the NTD-4 model.
Models | Closed-Form Algorithm 1 | Closed-Form Algorithm 2
NTD-4 A ( 2 ) X I 1 I 2 × R 3 ( 1 ) = X I 4 I 1 I 2 × I 3 [ G R 4 R 3 × I 3 ( 2 ) ] A ( 1 ) X I 3 I 4 × R 2 ( 2 ) = X I 1 I 3 I 4 × I 2 [ G R 1 R 2 × I 2 ( 1 ) ]
KronF ( A ^ ( 2 ) , X ^ I 1 I 2 × R 3 ( 1 ) ) KronF ( A ^ ( 1 ) , X ^ I 3 I 4 × R 2 ( 2 ) )
Reshaping : X ^ I 1 I 2 × R 3 ( 1 ) X ^ I 1 R 3 × I 2 ( 1 ) Reshaping : X ^ I 3 I 4 × R 2 ( 2 ) X ^ I 4 R 2 × I 3 ( 2 )
A ( 1 ) U T = X ^ I 1 R 3 × I 2 ( 1 ) ( G R 1 R 2 × I 2 ( 1 ) ) A ( 2 ) U = X ^ I 4 R 2 × I 3 ( 2 ) ( G R 4 R 3 × I 3 ( 2 ) )
KronF ( A ^ ( 1 ) , U ^ ) KronF ( A ^ ( 2 ) , U ^ )
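Similarly, the KronF steps of Table 19 can be realized with the Van Loan-Pitsianis rearrangement of [105], which maps a Kronecker product X = A ⊗ B to a rank-1 matrix vec(A) vec(B)^T. A hedged sketch (the kronf helper is our own; noiseless case):

```python
import numpy as np

# KronF via the Van Loan-Pitsianis rearrangement: if X = A ⊗ B with
# A: I1 x J1 and B: I2 x J2, rearranging the blocks of X into an
# (I1*J1) x (I2*J2) matrix makes it rank-1, equal to vec(A) vec(B)^T
# (row-major vec here), and an SVD recovers both factors up to a scalar.
def kronf(X, I1, J1, I2, J2):
    Rmat = (X.reshape(I1, I2, J1, J2)
             .transpose(0, 2, 1, 3)       # reorder to (i1, j1, i2, j2)
             .reshape(I1 * J1, I2 * J2))
    U, s, Vt = np.linalg.svd(Rmat)
    a = np.sqrt(s[0]) * U[:, 0]
    b = np.sqrt(s[0]) * Vt[0, :]
    return a.reshape(I1, J1), b.reshape(I2, J2)

rng = np.random.default_rng(5)
A = rng.standard_normal((2, 3))
B = rng.standard_normal((4, 5))
X = np.kron(A, B)

A_hat, B_hat = kronf(X, 2, 3, 4, 5)
# The scalar ambiguities cancel in the reconstructed Kronecker product.
assert np.allclose(np.kron(A_hat, B_hat), X)
```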
Table 20. Closed-form algorithms to estimate the parameters of the NTD-6 model.
Models | Closed-Form Algorithm 1 | Closed-Form Algorithm 2
NTD-6 A ( 2 ) X I 1 I 2 I 3 × R 3 ( 1 ) = X I 6 I 1 I 2 I 3 × I 4 I 5 [ G R 4 R 3 × I 4 I 5 ( 2 ) ] A ( 1 ) X I 4 I 5 I 6 × R 2 ( 2 ) = X I 1 I 4 I 5 I 6 × I 2 I 3 [ G R 1 R 2 × I 2 I 3 ( 1 ) ]
KronF ( A ^ ( 2 ) , X ^ I 1 I 2 I 3 × R 3 ( 1 ) ) KronF ( A ^ ( 1 ) , X ^ I 4 I 5 I 6 × R 2 ( 2 ) )
Reshaping : X ^ I 1 I 2 I 3 × R 3 ( 1 ) X ^ I 1 R 3 × I 2 I 3 ( 1 ) Reshaping : X ^ I 4 I 5 I 6 × R 2 ( 2 ) X ^ I 6 R 2 × I 4 I 5 ( 2 )
A ( 1 ) U T = X ^ I 1 R 3 × I 2 I 3 ( 1 ) ( G R 1 R 2 × I 2 I 3 ( 1 ) ) A ( 2 ) U = X ^ I 6 R 2 × I 4 I 5 ( 2 ) ( G R 4 R 3 × I 4 I 5 ( 2 ) )
KronF ( A ^ ( 1 ) , U ^ ) KronF ( A ^ ( 2 ) , U ^ )
Table 21. Closed-form algorithms to estimate the parameters of the NGTD-7 model.
Closed-Form Algorithm 1 | Closed-Form Algorithm 2
A ( 2 ) bdiag i 2 ( X I 1 I 3 I 4 × R 3 ( 1 ) ) = bdiag i 2 ( X I 7 I 1 I 3 I 4 × I 5 I 6 ) bdiag i 2 ( G R 4 R 3 × I 5 I 6 ( 2 ) ) bdiag i 2 A I 1 × R 1 ( 1 ) X I 5 I 6 I 7 × R 2 ( 2 ) = bdiag i 2 X I 1 I 5 I 6 I 7 × I 3 I 4 bdiag i 2 G R 1 R 2 × I 3 I 4 ( 1 )
KronF A ^ ( 2 ) , bdiag i 2 ( X ^ I 1 I 3 I 4 × R 3 ( 1 ) ) KronF bdiag i 2 ( A ^ I 1 × R 1 ( 1 ) ) , bdiag i 2 ( X ^ I 5 I 6 I 7 × R 2 ( 2 ) )
Reshaping : bdiag i 2 ( X ^ I 1 I 3 I 4 × R 3 ( 1 ) ) bdiag i 2 ( X ^ I 1 R 3 × I 3 I 4 ( 1 ) ) Reshaping : bdiag i 2 ( X ^ I 5 I 6 I 7 × R 2 ( 2 ) ) bdiag i 2 ( X I 7 R 2 × I 5 I 6 ( 2 ) )
bdiag i 2 A I 1 × R 1 ( 1 ) U R 3 × R 2 = bdiag i 2 X ^ I 1 R 3 × I 3 I 4 ( 1 ) bdiag i 2 G R 1 R 2 × I 3 I 4 ( 1 ) A ( 2 ) bdiag i 2 ( U R 2 × R 3 ) = bdiag i 2 X I 7 R 2 × I 5 I 6 ( 2 ) bdiag i 2 ( G R 4 R 3 × I 5 I 6 ( 2 ) )
KRF bdiag i 2 ( A ^ I 1 × R 1 ( 1 ) ) , bdiag i 2 ( U ^ R 3 × R 2 ) KRF A ^ ( 2 ) , bdiag i 2 ( U ^ R 2 × R 3 )
Table 22. Overview of cooperative systems.
Ref. | OFDM/mmW | Relay/IRS/UAV | Coding/Training | Tensor Models | Receiver Algorithms
[53] | | Two-hop relay | Simplified KRST | CPD-PARATUCK | ALS + KRF
[40] | | Two-hop relay | Simplified KRST | NCPD | ALS
[54] | | Two-hop relay | Simplified KRST | NCPD | KRF
[41] | | Two-hop relay | TST | NTD | ALS, KronF
[55] | | Two-hop relay | MKRST, MKronST | NCPD | KRF
[56] | | Two-hop relay | Matrices + Training | Tucker-2 | KRF + Structured LS
[57] | | Two-hop relay | TST | Block Tucker-2 | KronF
[58] | | Multi-hop relay | Simplified KRST | Generalized NCPD | KRF
[59] | | Three-hop relay | KRST | NCPD | ALS + KRF
[60] | | Two-hop relay | Matrices + Training | CPD | ALS, MMSE
[61] | | Three-hop relay | TST-CPD | NTD | Coupled SVD, ALS
[42] | | Two-hop relay | TST | Coupled NTD | KronF
[62] | | Three-hop relay | Matrices + Training | CPD + structured Tucker | ALS
[63] | | Two-hop relay | TST | Block Tucker2-CPD | ALS, KronF
[64] | OFDM/mmW | Two-hop relay | Matrices + Training | Structured CPD | SS * + ESPRIT
[65] | OFDM/mmW | IRS | Matrices + Training | CPD | Tensor completion
[66] | mmW | Two-hop relay | Matrices + Training | CPD | ALS, KRF
[67] | mmW | One-hop | Simplified KRST | NCPD | SVP *-ALS
[68,69] | | IRS | Training | CPD | ALS
[70] | | UAV | Simplified KRST + Training | NCPD | KRF-ALM *
[71] | | UAV-IRS | Simplified KRST | CPD | ALS + KRF
[72] | | Two-hop relay | TST + Training | Tucker-2 | LM * + LMMSE *
[73] | OFDM | Two-hop relay | TST + simplified TSTF | Coupled NTD | KronF
[74] | OFDM | Two-hop relay | KRSTF | NCPD | ALS
* SS = spatial smoothing; SVP = singular value projection; LM = Levenberg–Marquardt; ALM = accelerated LM; LMMSE = linear minimum mean-square error.
Table 23. Design parameters and system matrices and tensors.
Design Parameters | Definitions
M S , M T Numbers of transmit antennas at the source and relay nodes
M R , M D Numbers of receive antennas at the relay and destination nodes
N Number of symbols per data stream
R Number of data streams
F Number of subcarriers
P S , P R Time-spreading lengths at source and relay
J S , J R Numbers of chips at source and relay
Matrices/Tensors | Definitions | Dimensions | Codings
S Symbol matrix N × R TSTF, TST, STSTF, STST, DKRSTF, STST-MSMKron
S Symbol matrix N × M S SKRST, SKRST-MSMKR
H ( S R ) Source–relay channel tensor M R × F × M S TSTF, STSTF
H ( R D ) Relay–destination channel tensor M D × F × M T TSTF, STSTF
H ( S R ) Source–relay channel matrix M R × M S TST, STST, SKRST, DKRSTF,
STST-MSMKron, SKRST-MSMKR
H ( R D ) Relay–destination channel matrix M D × M T TST, STST, DKRSTF
H ( R D ) Relay–destination channel matrix M D × M R SKRST
H ( R D ) Relay–destination channel matrix M D × M S STST-MSMKron, SKRST-MSMKR
C ( S ) Source-coding tensor M S × F × P S × J S × R TSTF
C ( S ) Source-coding tensor M S × F × P S × R STSTF
C ( S ) Source-coding tensor M S × P S × J S × R TST
C ( S ) Source-coding tensor M S × P S × R STST
C ( S ) Source-coding tensor M S × P S × R 1 × R Q STST-MSMKron
C ( R ) Relay-coding tensor M T × F × P R × J R × M R TSTF
C ( R ) Relay-coding tensor M T × F × P R × M R STSTF
C ( R ) Relay-coding tensor M T × P R × J R × M R TST
C ( R ) Relay-coding tensor M T × P R × M R STST
C ( R ) Relay-coding tensor M S × P R × R 1 × R Q STST-MSMKron
C ( S ) Source space-time coding matrix P S × M S SKRST, DKRSTF, SKRST-MSMKR
C ( R ) Relay space-time coding matrix P R × M R SKRST, DKRSTF
C ( R ) Relay space-time coding matrix P R × M S SKRST-MSMKR
W ( S ) Source space-coding matrix M S × R DKRSTF
W ( R ) Relay space-coding matrix M T × M R DKRSTF
A ( S ) Source frequency-coding matrix F × R DKRSTF
Table 24. Two-hop system with DKRSTF codings.
Coded and Received Signals | Symbols/Codings | Channels | Encoded/Received Signals | Dimensions
S C N × R
First hop
Signals coded at source A ( S ) , C ( S ) , W ( S ) V F N × M S ( S ) = ( A ( S ) S ) W ( S ) T M S × F × N
U P S F N × M S ( S ) = C ( S ) V F N × M S ( S ) M S × P S × F × N
Signals received at relay H ( S R ) X M R × P S F N ( R ) = H ( S R ) U M S × P S F N ( S ) M R × P S × F × N
Second hop
Signals coded at relay C ( R ) , W ( R ) U P R P S F N × M T ( R ) = ( C ( R ) X P S F N × M R ( R ) ) W ( R ) T M T × P R × P S × F × N
Signals received at destination H ( R D ) X M D × P R P S F N ( D ) = H ( R D ) U M T × P R P S F N ( R ) M D × P R × P S × F × N
Table 25. Tensors of encoded and received signals for two-hop relay systems.
Systems | Tensor Writing | Scalar Writing | Models of X ( D )
Tensor-based codings (AF protocol)
TSTF U ( S ) = C ( S ) × 5 S C M S × F × P S × J S × N u m S , f , p S , j S , n ( S ) = r c m S , f , p S , j S , r ( S ) s n , r
X ( R ) = C ( S ) × 1 3 H ( S R ) × 5 S C M R × F × P S × J S × N x m R , f , p S , j S , n ( R ) = m S r h m R , f , m S ( S R ) c m S , f , p S , j S , r ( S ) s n , r NGTD-7
X ( D ) = C ( R ) × 1 3 H ( R D ) × 5 1 X ( R ) C M D × F × P R × J R × P S × J S × N x m D , f , p R , j R , p S , j S , n ( D ) = m T m R h m D , f , m T ( R D ) c m T , f , p R , j R , m R ( R ) x m R , f , p S , j S , n ( R )
TST U ( S ) = C ( S ) × 4 S C M S × P S × J S × N u m S , p S , j S , n ( S ) = r c m S , p S , j S , r ( S ) s n , r
X ( R ) = C ( S ) × 1 H ( S R ) × 4 S C M R × P S × J S × N x m R , p S , j S , n ( R ) = m S r h m R , m S ( S R ) c m S , p S , j S , r ( S ) s n , r NTD-6
X ( D ) = C ( R ) × 1 H ( R D ) × 4 1 X ( R ) C M D × P R × J R × P S × J S × N x m D , p R , j R , p S , j S , n ( D ) = m T m R h m D , m T ( R D ) c m T , p R , j R , m R ( R ) x m R , p S , j S , n ( R )
STSTF U ( S ) = C ( S ) × 4 S C M S × F × P S × N u m S , f , p S , n ( S ) = r c m S , f , p S , r ( S ) s n , r
X ( R ) = C ( S ) × 1 3 H ( S R ) × 4 S C M R × F × P S × N x m R , f , p S , n ( R ) = m S r h m R , f , m S ( S R ) c m S , f , p S , r ( S ) s n , r NGTD-5
X ( D ) = C ( R ) × 1 3 H ( R D ) × 4 1 X ( R ) C M D × F × P R × P S × N x m D , f , p R , p S , n ( D ) = m T m R h m D , f , m T ( R D ) c m T , f , p R , m R ( R ) x m R , f , p S , n ( R )
STST U ( S ) = C ( S ) × 3 S C M S × P S × N u m S , p S , n ( S ) = r c m S , p S , r ( S ) s n , r
X ( R ) = C ( S ) × 1 H ( S R ) × 3 S C M R × P S × N x m R , p S , n ( R ) = m S r h m R , m S ( S R ) c m S , p S , r ( S ) s n , r NTD-4
X ( D ) = C ( R ) × 1 H ( R D ) × 3 1 X ( R ) C M D × P R × P S × N x m D , p R , p S , n ( D ) = m T m R h m D , m T ( R D ) c m T , p R , m R ( R ) x m R , p S , n ( R )
Matrix-based codings (AF protocol)
DKRSTF V ( S ) = I R × 1 W ( S ) × 2 A ( S ) × 3 S C M S × F × N v m S , f , n ( S ) = r w m S , r ( S ) a f , r ( S ) s n , r
X c ( R ) = I M S × 1 H ( S R ) × 2 C ( S ) × 3 V F N × M S ( S ) C M R × P S × F N x m R , p S , f , n ( R ) = m S h m R , m S ( S R ) c p S , m S ( S ) v m S , f , n NCPD-5
X c ( D ) = I M R × 1 H ( R D ) W ( R ) × 2 C ( R ) x m D , p R , p S , f , n ( D ) = m R m T h m D , m T ( R D ) w m T , m R ( R ) c p R , m R ( R ) x m R , p S , f , n ( R )
× 3 X P S F N × M R ( R ) C M D × P R × P S F N
SKRST U ( S ) = I M S × 2 C ( S ) × 3 S C M S × P S × N u m S , p S , n ( S ) = c p S , m S ( S ) s n , m S
X ( R ) = I M S × 1 H ( S R ) × 2 C ( S ) × 3 S C M R × P S × N x m R , p S , n ( R ) = m S h m R , m S ( S R ) c p S , m S ( S ) s n , m S NCPD-4
X ( D ) = I M R × 1 H ( R D ) × 2 C ( R ) × 3 1 X ( R ) C M D × P R × P S × N x m D , p R , p S , n ( D ) = m R h m D , m R ( R D ) c p R , m R ( R ) x m R , p S , n ( R )
Combined codings (DF protocol)
STST-MSMKron U c ( S ) = C ( S ) × 3 S C M S × P S × N 1 N Q u m S , p S , n 1 , . . . , n Q ( S ) = r 1 r Q c m S , p S , r 1 , . . . , r Q ( S ) q = 1 Q s n q , r q ( q )
S = q = 1 Q S ( q ) X c ( R ) = C ( S ) × 1 H ( S R ) × 3 S C M R × P S × N 1 N Q x m R , p S , n 1 , . . . , n Q ( R ) = m S r 1 r Q h m R , m S ( S R ) c m S , p S , r 1 , . . . , r Q ( S ) q = 1 Q s n q , r q ( q ) TD- ( Q + 2 )
X c ( D ) = C ( R ) × 1 H ( R D ) × 3 S ^ C M D × P R × N 1 N Q x m D , p R , n 1 , . . . , n Q ( D ) = m S r 1 r Q h m D , m S ( R D ) c m S , p R , r 1 , . . . , r Q ( R ) q = 1 Q s ^ n q , r q ( q )
SKRST-MSMKR U c ( S ) = I M S × 2 C ( S ) × 3 S C M S × P S × N 1 N Q u m S , p S , n 1 , . . . , n Q ( S ) = c p S , m S ( S ) q = 1 Q s n q , m S ( q )
S = q = 1 Q S ( q ) X c ( R ) = I M S × 1 H ( S R ) × 2 C ( S ) × 3 S C M R × P S × N 1 N Q x m R , p S , n 1 , . . . , n Q ( R ) = m S h m R , m S ( S R ) c p S , m S ( S ) q = 1 Q s n q , m S ( q ) CPD- ( Q + 2 )
X c ( D ) = I M S × 1 H ( R D ) × 2 C ( R ) × 3 S ^ C M D × P R × N 1 N Q x m D , p R , n 1 , . . . , n Q ( D ) = m S h m D , m S ( R D ) c p R , m S ( R ) q = 1 Q s ^ n q , m S ( q )
Table 26. Correspondences between X ( D ) and generic nested tensor models.
Tensor-Based Codings
Models X ( 1 ) A ( 1 ) / A ( 1 ) G ( 1 ) U / U G ( 2 ) A ( 2 ) I 1 I 2 I 3 I 4 I 5 I 6 I 7 R 1 R 2 R 3 R 4
TSTF/NGTD-7 H ( S R D ) H ( R D ) C ( R ) H ( S R ) C ( S ) S M D F P R J R P S J S N M T M R M S R
TST/NTD-6 H ( S R D ) H ( R D ) C ( R ) H ( S R ) C ( S ) S M D P R J R P S J S N- M T M R M S R
STSTF/NGTD-5 H ( S R D ) H ( R D ) C ( R ) H ( S R ) C ( S ) S M D F P R P S N-- M T M R M S R
STST/NTD-4 H ( S R D ) H ( R D ) C ( R ) H ( S R ) C ( S ) S M D P R P S N--- M T M R M S R
Matrix-Based Codings
Models X ( 1 ) A ( 1 ) B ( 1 ) G A ( 2 ) B ( 2 ) I 1 I 2 I 3 I 4 --- R 1 R 2 --
SKRST/NCPD-4 H ( S R D ) H ( R D ) C ( R ) H ( S R ) C ( S ) S M D P R P S N--- M R M S --
Table 27. Closed-form receivers for SKRST and STST systems.
SKRST/KRF
Closed-form receiver 1:
$\mathbf{S} \diamond \mathbf{H}^{(SRD)}_{M_D P_R \times M_S} = \mathbf{X}^{(D)}_{N M_D P_R \times P_S} \big[(\mathbf{C}^{(S)})^T\big]^{\dagger}$ ⇒ KRF$(\hat{\mathbf{S}},\, \hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S})$
Reshaping: $\hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S} \to \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}$
$\mathbf{H}^{(RD)} \diamond \mathbf{H}^{(SR)T} = \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R} \big[(\mathbf{C}^{(R)})^T\big]^{\dagger}$ ⇒ KRF$(\hat{\mathbf{H}}^{(RD)},\, \hat{\mathbf{H}}^{(SR)})$
Closed-form receiver 2:
$\mathbf{H}^{(RD)} \diamond \mathbf{X}^{(R)}_{P_S N \times M_R} = \mathbf{X}^{(D)}_{M_D P_S N \times P_R} \big[(\mathbf{C}^{(R)})^T\big]^{\dagger}$ ⇒ KRF$(\hat{\mathbf{H}}^{(RD)},\, \hat{\mathbf{X}}^{(R)}_{P_S N \times M_R})$
Reshaping: $\hat{\mathbf{X}}^{(R)}_{P_S N \times M_R} \to \hat{\mathbf{X}}^{(R)}_{N M_R \times P_S}$
$\mathbf{S} \diamond \mathbf{H}^{(SR)} = \hat{\mathbf{X}}^{(R)}_{N M_R \times P_S} \big[(\mathbf{C}^{(S)})^T\big]^{\dagger}$ ⇒ KRF$(\hat{\mathbf{S}},\, \hat{\mathbf{H}}^{(SR)})$
STST/KronF
Closed-form receiver 1:
$\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R \times M_S} = \mathbf{X}^{(D)}_{N M_D P_R \times P_S} \big[\mathbf{C}^{(S)}_{R M_S \times P_S}\big]^{\dagger}$ ⇒ KronF$(\hat{\mathbf{S}},\, \hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S})$
Reshaping: $\hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S} \to \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}$
$\mathbf{H}^{(RD)} \otimes \mathbf{H}^{(SR)T} = \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R} \big[\mathbf{C}^{(R)}_{M_T M_R \times P_R}\big]^{\dagger}$ ⇒ KronF$(\hat{\mathbf{H}}^{(RD)},\, \hat{\mathbf{H}}^{(SR)})$
Closed-form receiver 2:
$\mathbf{H}^{(RD)} \otimes \mathbf{X}^{(R)}_{P_S N \times M_R} = \mathbf{X}^{(D)}_{M_D P_S N \times P_R} \big[\mathbf{C}^{(R)}_{M_T M_R \times P_R}\big]^{\dagger}$ ⇒ KronF$(\hat{\mathbf{H}}^{(RD)},\, \hat{\mathbf{X}}^{(R)}_{P_S N \times M_R})$
Reshaping: $\hat{\mathbf{X}}^{(R)}_{P_S N \times M_R} \to \hat{\mathbf{X}}^{(R)}_{N M_R \times P_S}$
$\mathbf{S} \otimes \mathbf{H}^{(SR)} = \hat{\mathbf{X}}^{(R)}_{N M_R \times P_S} \big[\mathbf{C}^{(S)}_{R M_S \times P_S}\big]^{\dagger}$ ⇒ KronF$(\hat{\mathbf{S}},\, \hat{\mathbf{H}}^{(SR)})$
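Each KRF step in Table 27 recovers the two factors of a column-wise Khatri-Rao product from one rank-one SVD per column. A minimal NumPy sketch of this standard procedure, under the assumption that the input is exactly (or approximately) a Khatri-Rao product (function and dimension names are ours):

```python
import numpy as np

def krf(X, I, J):
    """Khatri-Rao factorization: given X ~ A (KR) B, with A (I x R) and
    B (J x R), recover A and B column by column via a best rank-1
    approximation.  Each column pair is identified up to a scaling factor."""
    R = X.shape[1]
    A = np.zeros((I, R), dtype=complex)
    B = np.zeros((J, R), dtype=complex)
    for r in range(R):
        # column r equals kron(a_r, b_r), i.e. the flattened outer product
        M = X[:, r].reshape(I, J)          # rank-1 matrix a_r b_r^T
        u, s, vh = np.linalg.svd(M)
        A[:, r] = np.sqrt(s[0]) * u[:, 0]
        B[:, r] = np.sqrt(s[0]) * vh[0, :]
    return A, B
```

The per-column scaling ambiguity is the reason the semi-blind receivers assume a few known symbols (e.g., a known first row of S) to fix the scale.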
Table 28. Semi-blind receivers.
Tensor-based codings—AF protocol
System: TSTF (NGTD-7). Unfoldings:
$\mathrm{bdiag}_f\,\mathbf{X}^{(D)}_{N M_D P_R J_R \times P_S J_S}(f) = \mathrm{bdiag}_f\big[\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R J_R \times M_S}(f)\big]\;\mathrm{bdiag}_f\,\mathbf{C}^{(S)}_{R M_S \times P_S J_S}(f)$ (Equation (52))
$\mathrm{bdiag}_f\,\mathbf{H}^{(SRD)}_{M_D M_S \times P_R J_R}(f) = \mathrm{bdiag}_f\big[\mathbf{H}^{(RD)}_{M_D \times M_T}(f) \otimes \mathbf{H}^{(SR)}_{M_S \times M_R}(f)\big]\;\mathrm{bdiag}_f\,\mathbf{C}^{(R)}_{M_T M_R \times P_R J_R}(f)$ (Equation (49))
KronF receiver, estimation steps:
$\mathrm{bdiag}_f\big[\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R J_R \times M_S}(f)\big] = \mathrm{bdiag}_f\,\mathbf{X}^{(D)}_{N M_D P_R J_R \times P_S J_S}(f)\;\mathrm{bdiag}_f\,\mathbf{C}^{(S)\,*}_{P_S J_S \times R M_S}(f)$ ⇒ $\hat{\mathbf{S}},\ \mathrm{bdiag}_f\,\hat{\mathbf{H}}^{(SRD)}_{M_D P_R J_R \times M_S}(f)$ (Table 21)
Reshaping: $\mathrm{bdiag}_f\,\hat{\mathbf{H}}^{(SRD)}_{M_D P_R J_R \times M_S}(f) \to \mathrm{bdiag}_f\,\hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R J_R}(f)$ (Algorithm 1)
$\mathrm{bdiag}_f\big[\mathbf{H}^{(RD)}_{M_D \times M_T}(f) \otimes \mathbf{H}^{(SR)}_{M_S \times M_R}(f)\big] = \mathrm{bdiag}_f\,\hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R J_R}(f)\;\mathrm{bdiag}_f\,\mathbf{C}^{(R)\,*}_{P_R J_R \times M_T M_R}(f)$ ⇒ $\hat{\mathbf{H}}^{(RD)},\ \hat{\mathbf{H}}^{(SR)}$
System: TST (NTD-6). Unfoldings:
$\mathbf{X}^{(D)}_{N M_D P_R J_R \times P_S J_S} = \big[\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R J_R \times M_S}\big]\,\mathbf{C}^{(S)}_{R M_S \times P_S J_S}$ (Equation (118))
$\mathbf{H}^{(SRD)}_{M_D M_S \times P_R J_R} = \big[\mathbf{H}^{(RD)} \otimes \mathbf{H}^{(SR)T}\big]\,\mathbf{C}^{(R)}_{M_T M_R \times P_R J_R}$ (Equation (114))
KronF receiver, estimation steps:
$\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R J_R \times M_S} = \mathbf{X}^{(D)}_{N M_D P_R J_R \times P_S J_S}\,\mathbf{C}^{(S)\,*}_{P_S J_S \times R M_S}$ ⇒ $\hat{\mathbf{S}},\ \hat{\mathbf{H}}^{(SRD)}_{M_D P_R J_R \times M_S}$ (Table 20)
Reshaping: $\hat{\mathbf{H}}^{(SRD)}_{M_D P_R J_R \times M_S} \to \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R J_R}$ (Algorithm 1)
$\mathbf{H}^{(RD)} \otimes \mathbf{H}^{(SR)T} = \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R J_R}\,\mathbf{C}^{(R)\,*}_{P_R J_R \times M_T M_R}$ ⇒ $\hat{\mathbf{H}}^{(RD)},\ \hat{\mathbf{H}}^{(SR)}$
System: STSTF (NGTD-5). Unfoldings:
$\mathrm{bdiag}_f\,\mathbf{X}^{(D)}_{N M_D P_R \times P_S}(f) = \mathrm{bdiag}_f\big[\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R \times M_S}(f)\big]\;\mathrm{bdiag}_f\,\mathbf{C}^{(S)}_{R M_S \times P_S}(f)$ (Equation (52))
$\mathrm{bdiag}_f\,\mathbf{H}^{(SRD)}_{M_D M_S \times P_R}(f) = \mathrm{bdiag}_f\big[\mathbf{H}^{(RD)}_{M_D \times M_T}(f) \otimes \mathbf{H}^{(SR)}_{M_S \times M_R}(f)\big]\;\mathrm{bdiag}_f\,\mathbf{C}^{(R)}_{M_T M_R \times P_R}(f)$ (Equation (49))
KronF receiver, estimation steps:
$\mathrm{bdiag}_f\big[\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R \times M_S}(f)\big] = \mathrm{bdiag}_f\,\mathbf{X}^{(D)}_{N M_D P_R \times P_S}(f)\;\mathrm{bdiag}_f\,\mathbf{C}^{(S)\,*}_{P_S \times R M_S}(f)$ ⇒ $\hat{\mathbf{S}},\ \mathrm{bdiag}_f\,\hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S}(f)$
Reshaping: $\mathrm{bdiag}_f\,\hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S}(f) \to \mathrm{bdiag}_f\,\hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}(f)$
$\mathrm{bdiag}_f\big[\mathbf{H}^{(RD)}_{M_D \times M_T}(f) \otimes \mathbf{H}^{(SR)}_{M_S \times M_R}(f)\big] = \mathrm{bdiag}_f\,\hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}(f)\;\mathrm{bdiag}_f\,\mathbf{C}^{(R)\,*}_{P_R \times M_T M_R}(f)$ ⇒ $\hat{\mathbf{H}}^{(RD)},\ \hat{\mathbf{H}}^{(SR)}$
System: STST (NTD-4). Unfoldings:
$\mathbf{X}^{(D)}_{N M_D P_R \times P_S} = \big[\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R \times M_S}\big]\,\mathbf{C}^{(S)}_{R M_S \times P_S}$ (Equation (86))
$\mathbf{H}^{(SRD)}_{M_D M_S \times P_R} = \big[\mathbf{H}^{(RD)} \otimes \mathbf{H}^{(SR)T}\big]\,\mathbf{C}^{(R)}_{M_T M_R \times P_R}$ (Equation (87))
KronF receiver, estimation steps:
$\mathbf{S} \otimes \mathbf{H}^{(SRD)}_{M_D P_R \times M_S} = \mathbf{X}^{(D)}_{N M_D P_R \times P_S}\,\mathbf{C}^{(S)\,*}_{P_S \times R M_S}$ ⇒ $\hat{\mathbf{S}},\ \hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S}$ (Table 20)
Reshaping: $\hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S} \to \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}$ (Algorithm 1)
$\mathbf{H}^{(RD)} \otimes \mathbf{H}^{(SR)T} = \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}\,\mathbf{C}^{(R)\,*}_{P_R \times M_T M_R}$ ⇒ $\hat{\mathbf{H}}^{(RD)},\ \hat{\mathbf{H}}^{(SR)}$
System: STST. ALS receiver, estimation steps:
$\hat{\mathbf{S}}_t^{T} = \Big[\big(\mathbf{I}_{P_S} \otimes (\hat{\mathbf{H}}^{(RD)}_{t-1} \otimes \mathbf{I}_{P_R})\,\mathbf{C}^{(R)}_{M_T P_R \times M_R}\,\hat{\mathbf{H}}^{(SR)}_{t-1}\big)\,\mathbf{C}^{(S)}_{P_S M_S \times R}\Big]^{\dagger}\,\mathbf{X}^{(D)}_{P_S M_D P_R \times N}$ ⇒ $\hat{\mathbf{S}}$ (Equation (82))
$\hat{\mathbf{H}}_t^{(RD)T} = \Big[\big(\mathbf{I}_{P_R} \otimes (\mathbf{I}_{P_S} \otimes \hat{\mathbf{S}}_t)\,\mathbf{C}^{(S)}_{P_S R \times M_S}\,\hat{\mathbf{H}}^{(SR)T}_{t-1}\big)\,\mathbf{C}^{(R)}_{P_R M_R \times M_T}\Big]^{\dagger}\,\mathbf{X}^{(D)}_{P_R P_S N \times M_D}$ ⇒ $\hat{\mathbf{H}}^{(RD)}$ (Equation (83))
$\mathrm{vec}\big(\hat{\mathbf{H}}_t^{(SR)}\big) = \Big[\big((\mathbf{I}_{P_S} \otimes \hat{\mathbf{S}}_t) \otimes (\hat{\mathbf{H}}_t^{(RD)} \otimes \mathbf{I}_{P_R})\big)\big(\mathbf{C}^{(S)}_{P_S R \times M_S} \otimes \mathbf{C}^{(R)}_{M_T P_R \times M_R}\big)\Big]^{\dagger}\,\mathbf{x}^{(D)}_{P_S N M_D P_R}$ ⇒ $\hat{\mathbf{H}}^{(SR)}$ (Equation (84))
Matrix-based codings—AF protocol
System: DKRSTF (NCPD-5). Unfoldings:
$\mathbf{X}^{(D)}_{F N M_D P_R \times P_S} = \big[\mathbf{V}^{(S)}_{F N \times M_S} \diamond \mathbf{H}^{(SRD)}_{M_D P_R \times M_S}\big]\,\mathbf{C}^{(S)T}$, with $\mathbf{V}^{(S)}_{F N \times M_S} = (\mathbf{A}^{(S)} \diamond \mathbf{S})\,\mathbf{W}^{(S)T}$ (Equation (178))
$\mathbf{H}^{(SRD)}_{M_D M_S \times P_R} = \big[\mathbf{B} \diamond \mathbf{H}^{(SR)T}\big]\,\mathbf{C}^{(R)T}$, with $\mathbf{B} = \mathbf{H}^{(RD)}\,\mathbf{W}^{(R)}$ (Equation (179))
$\mathbf{V}^{(S)}_{F M_S \times N} = (\mathbf{A}^{(S)} \diamond \mathbf{W}^{(S)})\,\mathbf{S}^{T}$ (Equation (182))
KRF receiver, estimation steps:
$\mathbf{V}^{(S)}_{F N \times M_S} \diamond \mathbf{H}^{(SRD)}_{M_D P_R \times M_S} = \mathbf{X}^{(D)}_{F N M_D P_R \times P_S}\,\mathbf{C}^{(S)\,*}$ ⇒ $\hat{\mathbf{V}}^{(S)}_{F N \times M_S},\ \hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S}$ (Equation (180))
Reshaping: $\hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S} \to \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}$ and $\hat{\mathbf{V}}^{(S)}_{F N \times M_S} \to \hat{\mathbf{V}}^{(S)}_{F M_S \times N}$
$\mathbf{B} \diamond \mathbf{H}^{(SR)T} = \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}\,\mathbf{C}^{(R)\,*}$ ⇒ $\hat{\mathbf{B}},\ \hat{\mathbf{H}}^{(SR)}$ (Equation (181))
$\hat{\mathbf{S}}^{T} = (\mathbf{A}^{(S)} \diamond \mathbf{W}^{(S)})^{H}\,\hat{\mathbf{V}}^{(S)}_{F M_S \times N}$ and $\hat{\mathbf{H}}^{(RD)} = \hat{\mathbf{B}}\,\mathbf{W}^{(R)H}$ ⇒ $\hat{\mathbf{S}},\ \hat{\mathbf{H}}^{(RD)}$ (Equation (183))
System: SKRST (NCPD-4). Unfoldings:
$\mathbf{X}^{(D)}_{N M_D P_R \times P_S} = \big[\mathbf{S} \diamond \mathbf{H}^{(SRD)}_{M_D P_R \times M_S}\big]\,\mathbf{C}^{(S)T}$ (Equation (72))
$\mathbf{H}^{(SRD)}_{M_D M_S \times P_R} = \big[\mathbf{H}^{(RD)} \diamond \mathbf{H}^{(SR)T}\big]\,\mathbf{C}^{(R)T}$ (Equation (73))
KRF receiver, estimation steps:
$\mathbf{S} \diamond \mathbf{H}^{(SRD)}_{M_D P_R \times M_S} = \mathbf{X}^{(D)}_{N M_D P_R \times P_S}\,\mathbf{C}^{(S)\,*}$ ⇒ $\hat{\mathbf{S}},\ \hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S}$ (Table 18)
Reshaping: $\hat{\mathbf{H}}^{(SRD)}_{M_D P_R \times M_S} \to \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}$
$\mathbf{H}^{(RD)} \diamond \mathbf{H}^{(SR)T} = \hat{\mathbf{H}}^{(SRD)}_{M_D M_S \times P_R}\,\mathbf{C}^{(R)\,*}$ ⇒ $\hat{\mathbf{H}}^{(RD)},\ \hat{\mathbf{H}}^{(SR)}$ (Table 18)
System: SKRST. ALS receiver, estimation steps:
$\hat{\mathbf{H}}_t^{(RD)T} = \Big[\mathbf{C}^{(R)} \diamond \big((\mathbf{C}^{(S)} \diamond \hat{\mathbf{S}}_{t-1})\,\hat{\mathbf{H}}^{(SR)T}_{t-1}\big)\Big]^{\dagger}\,\mathbf{X}^{(D)}_{P_R P_S N \times M_D}$ ⇒ $\hat{\mathbf{H}}^{(RD)}$ (Equation (67))
$\hat{\mathbf{S}}_t^{T} = \Big[\mathbf{C}^{(S)} \diamond \big((\hat{\mathbf{H}}_t^{(RD)} \diamond \mathbf{C}^{(R)})\,\hat{\mathbf{H}}^{(SR)}_{t-1}\big)\Big]^{\dagger}\,\mathbf{X}^{(D)}_{P_S M_D P_R \times N}$ ⇒ $\hat{\mathbf{S}}$ (Equation (68))
$\mathrm{vec}\big(\hat{\mathbf{H}}_t^{(SR)}\big) = \Big[(\mathbf{C}^{(S)} \diamond \hat{\mathbf{S}}_t) \otimes (\hat{\mathbf{H}}_t^{(RD)} \diamond \mathbf{C}^{(R)})\Big]^{\dagger}\,\mathbf{x}^{(D)}_{P_S N M_D P_R}$ ⇒ $\hat{\mathbf{H}}^{(SR)}$ (Equation (70))
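The ALS receivers above alternate least-squares updates, solving for one factor at a time from a matched unfolding of the received tensor while the others are held at their latest estimates. The same mechanics for a plain third-order CPD, as an illustrative NumPy sketch (not the paper's exact SKRST updates; names are ours):

```python
import numpy as np

def kr(A, B):
    """Column-wise Khatri-Rao product of A (I x R) and B (J x R)."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cpd_als(X, R, n_iter=200, seed=0):
    """ALS for X[i,j,k] ~ sum_r A[i,r] B[j,r] C[k,r]: each step is a
    linear LS problem on one unfolding, as in the ALS receivers."""
    I, J, K = X.shape
    rng = np.random.default_rng(seed)
    B = rng.standard_normal((J, R))
    C = rng.standard_normal((K, R))
    for _ in range(n_iter):
        A = X.reshape(I, J * K) @ np.linalg.pinv(kr(B, C).T)
        B = np.moveaxis(X, 1, 0).reshape(J, I * K) @ np.linalg.pinv(kr(A, C).T)
        C = np.moveaxis(X, 2, 0).reshape(K, I * J) @ np.linalg.pinv(kr(A, B).T)
    return A, B, C
```

In the receivers, the coding matrices are known, so only the channel and symbol factors are updated, and iterations stop on a normalized reconstruction-error criterion.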
Combined codings—DF protocol
System: STST-MSMKron. Unfoldings:
$\mathbf{X}^{(R)}_{N_1 \cdots N_Q M_R \times P_S} = \Big[\big(\bigotimes_{q=1}^{Q} \mathbf{S}^{(q)}\big) \otimes \mathbf{H}^{(SR)}\Big]\,\mathbf{C}^{(S)}_{R_1 \cdots R_Q M_S \times P_S}$ (Equation (185))
$\mathbf{X}^{(D)}_{N_1 \cdots N_Q M_D \times P_R} = \Big[\big(\bigotimes_{q=1}^{Q} \hat{\mathbf{S}}^{(q)}\big) \otimes \mathbf{H}^{(RD)}\Big]\,\mathbf{C}^{(R)}_{R_1 \cdots R_Q M_S \times P_R}$ (Equation (186))
Multiple-KronF receiver, estimation steps:
First hop: $\big(\bigotimes_{q=1}^{Q} \mathbf{S}^{(q)}\big) \otimes \mathbf{H}^{(SR)} = \mathbf{X}^{(R)}_{N_1 \cdots N_Q M_R \times P_S}\,\mathbf{C}^{(S)\,*}_{P_S \times R_1 \cdots R_Q M_S}$ ⇒ $\hat{\mathbf{S}}^{(1)}, \ldots, \hat{\mathbf{S}}^{(Q)},\ \hat{\mathbf{H}}^{(SR)}$ (Equation (187))
Second hop: $\big(\bigotimes_{q=1}^{Q} \hat{\mathbf{S}}^{(q)}\big) \otimes \mathbf{H}^{(RD)} = \mathbf{X}^{(D)}_{N_1 \cdots N_Q M_D \times P_R}\,\mathbf{C}^{(R)\,*}_{P_R \times R_1 \cdots R_Q M_S}$ ⇒ $\hat{\hat{\mathbf{S}}}^{(1)}, \ldots, \hat{\hat{\mathbf{S}}}^{(Q)},\ \hat{\mathbf{H}}^{(RD)}$ (Equation (188))
System: SKRST-MSMKR. Unfoldings:
$\mathbf{X}^{(R)}_{N_1 \cdots N_Q M_R \times P_S} = \Big[\big(\diamond_{q=1}^{Q} \mathbf{S}^{(q)}\big) \diamond \mathbf{H}^{(SR)}\Big]\,\mathbf{C}^{(S)T}$ (Equation (189))
$\mathbf{X}^{(D)}_{N_1 \cdots N_Q M_D \times P_R} = \Big[\big(\diamond_{q=1}^{Q} \hat{\mathbf{S}}^{(q)}\big) \diamond \mathbf{H}^{(RD)}\Big]\,\mathbf{C}^{(R)T}$ (Equation (190))
Multiple-KRF receiver, estimation steps:
First hop: $\big(\diamond_{q=1}^{Q} \mathbf{S}^{(q)}\big) \diamond \mathbf{H}^{(SR)} = \mathbf{X}^{(R)}_{N_1 \cdots N_Q M_R \times P_S}\,\mathbf{C}^{(S)\,*}$ ⇒ $\hat{\mathbf{S}}^{(1)}, \ldots, \hat{\mathbf{S}}^{(Q)},\ \hat{\mathbf{H}}^{(SR)}$ (Equation (191))
Second hop: $\big(\diamond_{q=1}^{Q} \hat{\mathbf{S}}^{(q)}\big) \diamond \mathbf{H}^{(RD)} = \mathbf{X}^{(D)}_{N_1 \cdots N_Q M_D \times P_R}\,\mathbf{C}^{(R)\,*}$ ⇒ $\hat{\hat{\mathbf{S}}}^{(1)}, \ldots, \hat{\hat{\mathbf{S}}}^{(Q)},\ \hat{\mathbf{H}}^{(RD)}$ (Equation (192))
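The KronF steps factor a matrix known to be a Kronecker product of two factors; after the classical Van Loan–Pitsianis rearrangement, this again reduces to a rank-one problem solved by a single SVD. A minimal NumPy sketch of this standard building block (names are ours; the multiple-KronF receiver applies it recursively over the $Q+1$ factors):

```python
import numpy as np

def kronf(X, shape_a, shape_b):
    """Kronecker factorization: recover A, B (up to one scalar) from
    X ~ kron(A, B), via the rank-1 rearrangement of X."""
    (I1, J1), (I2, J2) = shape_a, shape_b
    # row (i1, j1) of Rm is the flattened (i1, j1) block of X, so that
    # Rm = outer(vec(A), vec(B)) is rank one
    Rm = np.empty((I1 * J1, I2 * J2), dtype=X.dtype)
    for i1 in range(I1):
        for j1 in range(J1):
            blk = X[i1 * I2:(i1 + 1) * I2, j1 * J2:(j1 + 1) * J2]
            Rm[i1 * J1 + j1, :] = blk.ravel()
    u, s, vh = np.linalg.svd(Rm)
    A = (np.sqrt(s[0]) * u[:, 0]).reshape(I1, J1)
    B = (np.sqrt(s[0]) * vh[0, :]).reshape(I2, J2)
    return A, B
```

As with KRF, the scalar ambiguity between the factors is resolved by the known symbols assumed at the receiver.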
Table 29. Identifiability conditions in terms of design parameters.
Tensor-based codings—AF protocol
TSTF/KronF: $P_R J_R \geq M_T M_R$, $P_S J_S \geq M_S R$ (96)
TST/KronF: $P_R J_R \geq M_T M_R$, $P_S J_S \geq M_S R$ (95)
STSTF/KronF: $P_R \geq M_T M_R$, $P_S \geq M_S R$ (97); transmission rate $(N R - 1)/[N P_S (P_R + 1)]$
STST/KronF: $P_R \geq M_T M_R$, $P_S \geq M_S R$ (173)
STST/ALS: $P_R P_S M_D \geq R$, $P_R P_S N \geq M_T$, $P_R P_S M_D N \geq M_R M_S$ (85)
STST/ZF: $P_R P_S M_D \geq R$ (85) *
Matrix-based codings—AF protocol
DKRSTF/KRF: $P_S \geq M_S$, $P_R \geq M_R$, $F \geq M_S R$, $M_R \geq M_T$ (184); transmission rate $(N - 1) R/[N P_S (P_R + 1)]$
SKRST/KRF: $P_S \geq M_S$, $P_R \geq M_R$ (172); transmission rate $(N - 1) M_S/[N P_S (P_R + 1)]$
SKRST/ALS: $P_R P_S N \geq M_R$, $P_R P_S M_D \geq M_S$, $P_R P_S M_D N \geq M_R M_S$ (71)
SKRST/ZF: $P_R P_S M_D \geq M_S$ (71) *
Combined codings—DF protocol
STST-MSMKron/KronF: $P_S \geq R_1 \cdots R_Q M_S$, $P_R \geq R_1 \cdots R_Q M_S$ (193); transmission rate $\big(\sum_q N_q R_q - Q\big)/\big[\prod_q N_q\, P_S (P_R + 1)\big]$
SKRST-MSMKR/KRF: $P_S \geq M_S$, $P_R \geq M_S$ (194); transmission rate $\big(\sum_q N_q - Q\big) M_S/\big[\prod_q N_q\, P_S (P_R + 1)\big]$
* Only the condition related to the symbol matrix estimation is considered.
Table 30. Values of NMSE (dB) of the estimated channels $\hat{\mathbf{H}}^{(SR)}$ and $\hat{\mathbf{H}}^{(RD)}$ and of the reconstructed signals $\hat{\mathcal{X}}^{(D)}$, for SNR = 0 dB.
Systems/Receivers | $\hat{\mathbf{H}}^{(SR)}$ | $\hat{\mathbf{H}}^{(RD)}$ | $\hat{\mathcal{X}}^{(D)}$
TSTF/KronF | −12.33 | −22.08 | −12.09
TST/KronF | −11.27 | −23.81 | −10.41
STSTF/KronF | −8.33 | −16.23 | −8.33
STST/KronF | −7.52 | −18.02 | −6.67
DKRSTF/KRF | −7.48 | −20.98 | −6.61
SKRST/KRF | −6.73 | −18.04 | −5.76
STST-MSMKron/KronF | −8.05 | −8.06 | −6.91
SKRST-MSMKR/KRF | −9.96 | −9.90 | −8.99
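The NMSE figures in Table 30 follow the usual definition: the squared Frobenius-norm estimation error normalized by the energy of the true quantity, expressed in dB (the paper averages over Monte Carlo runs; that averaging is omitted in this sketch, and the function name is ours):

```python
import numpy as np

def nmse_db(H_true, H_hat):
    """NMSE in dB: ||H_hat - H_true||_F^2 / ||H_true||_F^2."""
    err = np.linalg.norm(H_hat - H_true) ** 2
    return 10.0 * np.log10(err / np.linalg.norm(H_true) ** 2)
```

For example, an estimate whose error energy is 1% of the channel energy yields an NMSE of −20 dB, the order of magnitude reported for $\hat{\mathbf{H}}^{(RD)}$ with the TSTF and TST receivers.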
Table 31. SNR thresholds for desired SER and NMSE (of reconstructed signals).
Systems (receivers), in column order: TSTF (KronF), TST (KronF), STSTF (KronF), STST (KronF), DKRSTF (KRF), SKRST (KRF), STST-MSMKron (KronF), SKRST-MSMKR (KRF)
SER $10^{-2}$: −4 dB | −1 dB | 0 dB | 2 dB | 4 dB | 6 dB | −5 dB | −1 dB
SER $10^{-3}$: −2 dB | 4 dB | 9 dB | 12 dB | 5 dB
NMSE −10 dB: −2 dB | 0 dB | 1 dB | 3 dB | 4 dB | 3 dB | 3 dB | 1 dB
NMSE −20 dB: 7 dB | 9 dB | 11 dB | 12 dB | 13 dB | 12 dB | 12 dB | 11 dB
Table 32. Comparison of systems’ characteristics and receivers’ performance.
Systems/Receivers | Diversities * (M, P, J, F) | Channels (FF, FSF) | NIC | AK | Performance (SER, $\hat{\mathbf{H}}^{(SR)}$, $\hat{\mathbf{H}}^{(RD)}$, CT)
TSTF/KronF++++ +−−−−+++++++++−−−
TST/KronF+++ + −−−−++++++++−−−
STSTF/KronF++ + +−−−−−++++++−−
STST/KronF++ + −−−−−+++++−−
STST/ALS++ + −−+++++−−−
DKRSTF/KRF++ ++ −−−−−+++++
SKRST/KRF++ + −−−−−++++
SKRST/ALS++ + −−−++++−−−
STST-MSMKron/KronF++ + −−−+++++−−
SKRST-MSMKR/KRF++ + −−−−+++++
* M = antennas, P = time-spreading, J = chip, F = frequency.
Disclaimer/Publisher’s Note: The statements, opinions and data contained in all publications are solely those of the individual author(s) and contributor(s) and not of MDPI and/or the editor(s). MDPI and/or the editor(s) disclaim responsibility for any injury to people or property resulting from any ideas, methods, instructions or products referred to in the content.

Share and Cite

MDPI and ACS Style

Favier, G.; Rocha, D.S. Overview of Tensor-Based Cooperative MIMO Communication Systems—Part 2: Semi-Blind Receivers. Entropy 2024, 26, 937. https://doi.org/10.3390/e26110937

