Foundations of Goal-Oriented Semantic Communication in Intelligent Networks

A special issue of Entropy (ISSN 1099-4300). This special issue belongs to the section "Information Theory, Probability and Statistics".

Deadline for manuscript submissions: 30 April 2025 | Viewed by 7836

Special Issue Editors


Dr. Photios A. Stavrou
Guest Editor
Communication Systems Department, EURECOM, 06410 Biot, France
Interests: information theory; stochastic control; optimization; game theory; semantic goal-oriented communications

Dr. Giulia Cervia
Guest Editor
IMT Nord Europe, Institut Mines-Télécom, Univ. Lille, CERI SN - Centre for Digital Systems, F-59000 Lille, France
Interests: information theory; privacy; semantic communication; compression; probability theory

Dr. Serkan Sarıtaş
Guest Editor
Department of Electrical and Electronics Engineering, Middle East Technical University, Ankara 06800, Turkey
Interests: game theory; networked control; communication theory; information theory; information security

Dr. Nikolaos Pappas
Guest Editor
Department of Computer and Information Science, Linköping University, 58183 Linköping, Sweden
Interests: age of information; goal-oriented semantics-aware communications; performance analysis and stochastic modeling; communication networks

Special Issue Information

Dear Colleagues,

With continued momentum around the deployment of 5G technologies, research communities in communications, control, and networking have already started looking at the requirements and technology components for the next generation of intelligent networks. In the envisioned beyond-5G era, data demands are expected to continue increasing rapidly, leading to a world where everything is sensed and endowed with connected intelligence, fueled by the interconnection of myriad autonomous devices (robots, vehicles, drones, etc.). Consider, for example, an autonomous vehicle that aggregates data at rates starting from at least 700 Mbit/s, while an industrial Internet of Things deployment may need to transmit 1 Gbit/s of aggregated data for remote actuation and digital twins. Gradually, wireless connectivity will become a true commodity serving a plethora of emerging societal-scale applications such as consumer robotics, environmental monitoring, and healthcare.

On the other hand, wireless connectivity is traditionally seen as a non-transparent data pipe carrying information whose importance, impact, and usefulness for achieving a specific task have been deliberately set aside. This communication paradigm, although suitable for classical communication, is inefficient and inadequate for supporting the staggering amount of data and the timely communication needs of the next generation of intelligent networks. It is therefore vital to elevate wireless networks so that they generate, process, and convey massive volumes of real-time data under a new communication paradigm, one that accounts for the semantic, goal-oriented importance of information as it is generated, processed, transmitted, and utilized.

In this Special Issue, we will consolidate the latest ideas and findings on the applications and theory of semantics and goal-oriented communications for networked intelligent systems.

Dr. Photios A. Stavrou
Dr. Giulia Cervia
Dr. Serkan Sarıtaş
Dr. Nikolaos Pappas
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Entropy is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • goal-oriented compression
  • information bottleneck methods
  • goal-oriented joint source channel coding
  • information theoretic coordination
  • networked control systems
  • age of information
  • value of information
  • neuromorphic computing
  • semantic entropy
  • knowledge graphs
  • natural language processing
  • distributed function computation
  • machine learning
  • information theory
  • security and privacy aspects
  • game-theoretical models

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research

30 pages, 1279 KiB  
Article
Multi-Server Multi-Function Distributed Computation
by Derya Malak, Mohammad Reza Deylam Salehi, Berksan Serbetci and Petros Elia
Entropy 2024, 26(6), 448; https://doi.org/10.3390/e26060448 - 26 May 2024
Cited by 1 | Viewed by 1149
Abstract
The work here studies the communication cost for a multi-server multi-task distributed computation framework, as well as for a broad class of functions and data statistics. Considering the framework where a user seeks the computation of multiple complex (conceivably non-linear) tasks from a set of distributed servers, we establish the communication cost upper bounds for a variety of data statistics, function classes, and data placements across the servers. To do so, we proceed to apply, for the first time here, Körner’s characteristic graph approach—which is known to capture the structural properties of data and functions—to the promising framework of multi-server multi-task distributed computing. Going beyond the general expressions, and in order to offer clearer insight, we also consider the well-known scenario of cyclic dataset placement and linearly separable functions over the binary field, in which case, our approach exhibits considerable gains over the state of the art. Similar gains are identified for the case of multi-linear functions.
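As a concrete illustration of the characteristic-graph machinery the abstract refers to, the following minimal Python sketch builds Körner's characteristic graph of one source for a toy function and joint PMF, colors it greedily, and compares the entropy of the color classes against the raw source entropy. The alphabet, the function, and all helper names are our own illustrative choices, not the paper's multi-server construction.

```python
# Toy sketch of Körner's characteristic graph idea (illustrative only):
# server 1 observes X1, the user wants f(X1, X2), and X2 is held elsewhere.
import itertools, math
from collections import defaultdict

def characteristic_graph(xs, ys, pmf, f):
    # Edge between x1 and x2 iff some side value y of positive probability
    # makes f(x1, y) != f(x2, y), so the encoder must tell them apart.
    edges = set()
    for x1, x2 in itertools.combinations(xs, 2):
        if any(pmf.get((x1, y), 0) > 0 and pmf.get((x2, y), 0) > 0
               and f(x1, y) != f(x2, y) for y in ys):
            edges.add((x1, x2))
    return edges

def greedy_coloring(xs, edges):
    color = {}
    for v in xs:
        taken = {color[u] for u in color if (u, v) in edges or (v, u) in edges}
        color[v] = next(c for c in itertools.count() if c not in taken)
    return color

def entropy_bits(probs):
    return -sum(p * math.log2(p) for p in probs if p > 0)

xs, ys = [0, 1, 2, 3], [0, 1]
pmf = {(x, y): 1 / 8 for x in xs for y in ys}   # uniform joint PMF
f = lambda x, y: (x + y) % 2                    # only the parity of X1 matters

edges = characteristic_graph(xs, ys, pmf, f)
colors = greedy_coloring(xs, edges)
class_prob = defaultdict(float)
for (x, _), q in pmf.items():
    class_prob[colors[x]] += q

print("edges:", sorted(edges))                  # values 0 and 2 are never separated
print("color-class entropy:", entropy_bits(class_prob.values()), "bits")
print("raw source entropy :", entropy_bits([1 / 4] * 4), "bits")
```

Encoding colors instead of raw symbols is what produces gains of the kind reported in the figures below: here the parity function collapses four source values into two color classes, halving the rate.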
Figures:
  • Figure 1. The gain η_lin of the characteristic graph approach for K_c = 1 in Section 4.1 (Scenario I). (Left) ρ = 0 for various distributed topologies. (Right) The correlation model given in (17) for T(30, 30, 1, 11, 20) with different ε values.
  • Figure 2. Colorings of graphs in Section 4.1 (Scenario II). (Top Left–Right) Characteristic graphs G_{X_1} and G_{X_2}, respectively. (Bottom Left–Right) The minimum conditional entropy colorings of G_{X_1} given c_{G_{X_2}} and G_{X_2} given c_{G_{X_1}}, respectively.
  • Figure 3. η_lin in (19) versus ε, for distributed computing of f_1 = W_2 and f_2 = W_2 + W_3, where K_c = 2, N_r = 2, with ρ = 0, in Section 4.1 (Scenario II).
  • Figure 4. η_lin versus ε, for distributed computing of f_1 = W_2 and f_2 = W_2 + W_3, where K_c = 2, N_r = 2, in Section 4.1, using different joint PMF models for P_{W_2,W_3} (Scenario II). (Left) η_lin in (20) for the joint PMF in Table 2 for different values of p. (Right) η_lin for the joint PMF in (17) for different values of ρ.
  • Figure 5. η_lin on a logarithmic scale versus ε for K_c demanded functions, for various values of K_c, with ρ = 0 for different topologies, as detailed in Section 4.1 (Scenario III).
  • Figure 6. Gain 10 log10(η_SW) versus ε for computing (11), where K_c = 1, ρ = 0, N_r = N − 1. (Left) The parameters N, K, and M are indicated for each configuration. (Right) 10 log10(η_SW) versus ε, showing the effect of N for a fixed total cache size MN and fixed K.
22 pages, 1210 KiB  
Article
A Joint Communication and Computation Design for Probabilistic Semantic Communications
by Zhouxiang Zhao, Zhaohui Yang, Mingzhe Chen, Zhaoyang Zhang and H. Vincent Poor
Entropy 2024, 26(5), 394; https://doi.org/10.3390/e26050394 - 30 Apr 2024
Cited by 11 | Viewed by 2386
Abstract
In this paper, the problem of joint transmission and computation resource allocation for a multi-user probabilistic semantic communication (PSC) network is investigated. In the considered model, users employ semantic information extraction techniques to compress their large-sized data before transmitting them to a multi-antenna base station (BS). Our model represents large-sized data through substantial knowledge graphs, utilizing shared probability graphs between the users and the BS for efficient semantic compression. The resource allocation problem is formulated as an optimization problem with the objective of maximizing the sum of the equivalent rate of all users, considering the total power budget and semantic resource limit constraints. The computation load considered in the PSC network is formulated as a non-smooth piecewise function with respect to the semantic compression ratio. To tackle this non-convex non-smooth optimization challenge, a three-stage algorithm is proposed, where the solutions for the received beamforming matrix of the BS, the transmit power of each user, and the semantic compression ratio of each user are obtained stage by stage. The numerical results validate the effectiveness of our proposed scheme.
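To make the three-stage structure concrete, here is a schematic block-coordinate sketch in Python. It is a toy stand-in, assuming a matched-filter receiver, a smooth surrogate for the paper's piecewise computation-load model, and a per-user power budget split between transmission and semantic compression; none of these choices are the authors' actual update rules.

```python
# Schematic three-stage loop (toy stand-in for the paper's algorithm):
# Stage 1 fixes (p, rho) and sets receive beamformers; Stage 2 fixes (W, rho)
# and allocates transmit power; Stage 3 fixes (W, p) and tunes each user's
# semantic compression ratio rho by grid search.
import numpy as np

rng = np.random.default_rng(0)
K, M = 4, 8                                   # users, BS antennas
H = rng.normal(size=(M, K)) + 1j * rng.normal(size=(M, K))   # toy channels
P_MAX, NOISE, KAPPA = 1.0, 1e-2, 0.05
rho = np.full(K, 0.6)                         # compression ratios in (0, 1]

def comp_load(r):
    # Toy smooth computation cost: shrinking rho (more compression) costs power.
    return KAPPA * (1.0 / np.asarray(r, dtype=float) - 1.0)

def tx_power(rho):
    # Per-user budget split: transmit with whatever the computation leaves over.
    return np.maximum(P_MAX - comp_load(rho), 0.0)

def sum_equiv_rate(W, p, rho):
    total = 0.0
    for k in range(K):
        sig = p[k] * abs(W[:, k].conj() @ H[:, k]) ** 2
        intf = sum(p[j] * abs(W[:, k].conj() @ H[:, j]) ** 2
                   for j in range(K) if j != k)
        total += np.log2(1.0 + sig / (intf + NOISE)) / rho[k]
    return total

for _ in range(10):
    W = H / np.linalg.norm(H, axis=0)         # Stage 1: matched-filter beamforming
    p = tx_power(rho)                         # Stage 2: power allocation
    for k in range(K):                        # Stage 3: per-user grid search on rho
        def score(r):
            r_vec = np.where(np.arange(K) == k, r, rho)
            return sum_equiv_rate(W, tx_power(r_vec), r_vec)
        rho[k] = max(np.linspace(0.2, 1.0, 17), key=score)

print(f"sum equivalent rate ≈ {sum_equiv_rate(W, tx_power(rho), rho):.2f}")
```

The coupling is the interesting part: pushing rho down raises the per-bit value of the channel rate but eats into the power left for transmission, which mirrors the transmission/computation tradeoff the paper optimizes.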
Figures:
  • Figure 1. Illustration of a knowledge graph.
  • Figure 2. An illustration of the considered probabilistic semantic communication (PSC) network.
  • Figure 3. Illustration of the probability graph considered in the PSC system.
  • Figure 4. The framework of the considered PSC network.
  • Figure 5. Illustration of computation load versus semantic compression ratio ρ.
  • Figure 6. Sum of equivalent rate vs. number of iterations.
  • Figure 7. Sum of equivalent rate vs. number of users.
  • Figure 8. Sum of equivalent rate vs. noise power.
  • Figure 9. Sum of equivalent rate vs. computation power coefficient.
  • Figure 10. Sum of equivalent rate vs. maximum power limit.
  • Figure 11. The allocation of computation power and transmission power with different computation power coefficients.
30 pages, 662 KiB  
Article
Structural Properties of the Wyner–Ziv Rate Distortion Function: Applications for Multivariate Gaussian Sources
by Michail Gkagkos and Charalambos D. Charalambous
Entropy 2024, 26(4), 306; https://doi.org/10.3390/e26040306 - 29 Mar 2024
Viewed by 928
Abstract
The main focus of this paper is the derivation of the structural properties of the test channels of Wyner’s operational information rate distortion function (RDF), R̄(Δ_X), for arbitrary abstract sources and, subsequently, the derivation of additional properties for a tuple of multivariate correlated, jointly independent and identically distributed Gaussian random variables, {X_t, Y_t}_{t=1}^∞, X_t: Ω → ℝ^{n_x}, Y_t: Ω → ℝ^{n_y}, with average mean-square error at the decoder and the side information, {Y_t}_{t=1}^∞, available only at the decoder. For the tuple of multivariate correlated Gaussian sources, we construct optimal test channel realizations which achieve the informational RDF, R̄(Δ_X) = inf_{M(Δ_X)} I(X; Z | Y), where M(Δ_X) is the set of auxiliary RVs Z such that P_{Z|X,Y} = P_{Z|X}, X̂ = f(Y, Z), and E{‖X − X̂‖²} ≤ Δ_X. We show the following fundamental structural properties: (1) optimal test channel realizations that achieve the RDF satisfy the conditional independence P_{X|X̂,Y,Z} = P_{X|X̂,Y} = P_{X|X̂} and E{X|X̂, Y, Z} = E{X|X̂} = X̂; (2) similarly, for the conditional RDF, R_{X|Y}(Δ_X), when the side information is available to both the encoder and the decoder, we show the equality R̄(Δ_X) = R_{X|Y}(Δ_X); (3) we derive the water-filling solution for R_{X|Y}(Δ_X).
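For readers unfamiliar with item (3), the water-filling solution for a Gaussian vector source under mean-square error takes the standard reverse water-filling form. Written in the abstract's notation as a sketch (the paper's exact parameterization may differ):

```latex
% Reverse water-filling for the conditional RDF (standard Gaussian form,
% stated as a sketch in the abstract's notation):
R_{X|Y}(\Delta_X) \;=\; \sum_{i=1}^{n_x} \frac{1}{2}\log\frac{\lambda_i}{\delta_i},
\qquad
\delta_i = \min\{\lambda_i,\theta\},
\qquad
\sum_{i=1}^{n_x}\delta_i = \Delta_X,
```

where λ_1, …, λ_{n_x} are the eigenvalues of the conditional covariance of X given Y and the water level θ is chosen so that the per-component distortions δ_i meet the budget Δ_X; components with λ_i ≤ θ are not encoded at all.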
Figures:
  • Figure 1. The Wyner and Ziv [1] block diagram of lossy compression. If switch A is closed, the side information is available at both the encoder and the decoder; if switch A is open, the side information is available only at the decoder.
  • Figure 2. Test channel when side information is only available to the decoder.
  • Figure 3. R_{X|Y}(Δ_X): a realization of the optimal reproduction X̂ over parallel additive Gaussian noise channels of Theorem 4, where h_i ≜ 1 − δ_i/λ_i ≥ 0, i = 1, …, n_x, are the diagonal elements of the spectral decomposition of the matrix H = U diag{h_1, …, h_{n_x}} U^T, and W_i ∈ N(0, h_i δ_i), i = 1, …, n_x, is the additive noise introduced due to compression.
  • Figure 4. Wyner’s realizations of optimal reproductions for the RDFs R_{X|Y}(Δ_X) and R̄(Δ_X). (a) RDF R_{X|Y}(Δ_X): Wyner’s [2] optimal realization of X̂ for the RDF R_{X|Y}(Δ_X) of (165)–(168). (b) RDF R̄(Δ_X): Wyner’s [2] optimal realization X̂ = f(X, Z) for the RDF R̄(Δ_X) of (165)–(168).
  • Figure 5. Comparison of the classical RDF R_X(Δ_X), the conditional RDF R_{X|Y}(Δ_X) = R̄(Δ_X), and Gray’s lower bound R_X(Δ_X) − I(X; Y) (solid green line).
22 pages, 1071 KiB  
Article
The Role of Gossiping in Information Dissemination over a Network of Agents
by Melih Bastopcu, Seyed Rasoul Etesami and Tamer Başar
Entropy 2024, 26(1), 9; https://doi.org/10.3390/e26010009 - 21 Dec 2023
Cited by 3 | Viewed by 1576
Abstract
We consider information dissemination over a network of gossiping agents. In this model, a source keeps the most up-to-date information about a time-varying binary state of the world, and n receiver nodes want to follow the information at the source as accurately as possible. When the information at the source changes, the source first sends updates to a subset of m ≤ n nodes. Then, the nodes share their local information during the gossiping period to disseminate the information further. The nodes then estimate the information at the source using the majority rule at the end of the gossiping period. To analyze the information dissemination, we introduce a new error metric to find the average percentage of nodes that can accurately obtain the most up-to-date information at the source. We characterize the equations necessary to obtain the steady-state distribution for the average error and then analyze the system behavior under both high and low gossip rates. We develop an adaptive policy that the source can use to determine its current transmission capacity m based on its past transmission rates and the accuracy of the information at the nodes. Finally, we implement a clustered gossiping network model to further improve the information dissemination.
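The model is easy to simulate. The sketch below is our own simplified, discretized rendition (the paper works with Poisson update and gossip rates; here a fixed number of pairwise push-gossip exchanges stands in for the gossiping period), useful only to see the error metric and the role of m:

```python
# Simplified gossip-dissemination simulation (our discretization, not the
# paper's continuous-time model): the binary state flips, the source pushes
# the fresh value to m of n nodes, nodes push-gossip pairwise for a while,
# then all nodes adopt the majority vote as their estimate.
import random

def avg_error(n=50, m=10, gossip_exchanges=200, periods=2000, seed=1):
    rng = random.Random(seed)
    truth, nodes = 0, [0] * n
    wrong = 0
    for _ in range(periods):
        truth ^= 1                                  # state of the world changes
        for i in rng.sample(range(n), m):           # source updates m nodes
            nodes[i] = truth
        for _ in range(gossip_exchanges):           # pairwise push gossip
            a, b = rng.randrange(n), rng.randrange(n)
            nodes[b] = nodes[a]
        estimate = int(2 * sum(nodes) > n)          # majority rule
        nodes = [estimate] * n
        wrong += estimate != truth
    return wrong / periods                          # avg fraction of wrong estimates

for m in (5, 15, 25, 35):
    print(f"m = {m:2d}  avg error ≈ {avg_error(m=m):.2f}")
```

Even this crude version shows the error metric falling as the source reaches more nodes directly, the same dependence on m that the paper's adaptive transmission-capacity policy exploits.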
Figures:
  • Figure 1. A communication system consisting of a source and n fully connected nodes, where (a) only the source sends updates to the nodes, and (b) the nodes share their local information (the gossiping phase).
  • Figure 2. A clustered gossip network consisting of a source, m_s = 2 cluster heads, and n_c = 6 fully connected nodes.
  • Figure 3. The average error Δ with respect to (a) m when λ ∈ {0, 10, 20}, (b) the gossip rate λ for m ∈ {5, 10, 15}, and (c) the source’s update rate λ_s for m ∈ {5, 10, 15}.
  • Figure 4. Average error Δ with respect to n (a) when λ_s ∈ {0.1n, 0.2n, 0.5n}, (b) when m ∈ {0.1n, 0.2n, 0.5n}, and (c) when m ∈ {0.1n, 0.2n, 0.5n} and λ_s ∈ {0.1n, 0.2n, 0.5n}.
  • Figure 5. A sample evolution of P_{T,2}(N), approximated by P_{T,app}(N) in (16) when λ is high compared to λ_e, for (a) λ = 20, (b) λ = 200, and (c) λ = 400.
  • Figure 6. A sample evolution of (a) P_{T,1}(N) and (b) P_{T,2}(N), approximated by (20) and (18), respectively, when the gossiping rate is low.
  • Figure 7. (a) The gossip gain |Δ − Δ_ng| in (22) with respect to m for p ∈ {0.3, 0.5, 0.7}. (b) A sample evolution of m*(N) in (23) and its rounding to the nearest integer for different values of λ_s.
  • Figure 8. The comparison between (a) the average error Δ and (b) the average m for the adaptive-m and constant-m selection policies.
  • Figure 9. The long-term average error at the clusters, Δ_c, and at the cluster heads, Δ_s, as the number of clusters m_s increases.
  • Figure 10. The comparison between (a) the average errors Δ and Δ_c and (b) the optimum m selections for the network models with and without clustering.