Multi-Server Multi-Function Distributed Computation
Figure 1. The gain $\eta_{lin}$ of the characteristic graph approach for $K_c = 1$ in Section 4.1 (Scenario I). (Left) $\rho = 0$ for various distributed topologies. (Right) The correlation model given in (17) for $\mathcal{T}(30, 30, 1, 11, 20)$ with different $\epsilon$ values.

Figure 2. Colorings of graphs in Section 4.1 (Scenario II). (Top Left–Right) Characteristic graphs $G_{X_1}$ and $G_{X_2}$, respectively. (Bottom Left–Right) The minimum conditional entropy colorings of $G_{X_1}$ given $c_{G_{X_2}}$ and of $G_{X_2}$ given $c_{G_{X_1}}$, respectively.

Figure 3. $\eta_{lin}$ in (19) versus $\epsilon$, for distributed computing of $f_1 = W_2$ and $f_2 = W_2 + W_3$, where $K_c = 2$ and $N_r = 2$, with $\rho = 0$, in Section 4.1 (Scenario II).

Figure 4. $\eta_{lin}$ versus $\epsilon$, for distributed computing of $f_1 = W_2$ and $f_2 = W_2 + W_3$, where $K_c = 2$ and $N_r = 2$, in Section 4.1, using different joint PMF models for $P_{W_2, W_3}$ (Scenario II). (Left) $\eta_{lin}$ in (20) for the joint PMF in Table 2, for different values of $p$. (Right) $\eta_{lin}$ for the joint PMF in (17), for different values of $\rho$.

Figure 5. $\eta_{lin}$ on a logarithmic scale versus $\epsilon$ for $K_c$ demanded functions, for various values of $K_c$, with $\rho = 0$, for different topologies, as detailed in Section 4.1 (Scenario III).

Figure 6. The gain $10\log_{10}(\eta_{SW})$ versus $\epsilon$ for computing (11), where $K_c = 1$, $\rho = 0$, and $N_r = N - 1$. (Left) The parameters $N$, $K$, and $M$ are indicated for each configuration. (Right) $10\log_{10}(\eta_{SW})$ versus $\epsilon$, showing the effect of $N$ for a fixed total cache size $MN$ and a fixed $K$.
Abstract
1. Introduction
1.1. The Multi-Server Multi-Function Distributed Computing Setting and the Need for Accounting for General Non-Linear Functions
1.2. Data Correlation and Structure
1.3. Characteristic Graphs
1.4. Contributions
1.5. Paper Organization
2. System Model
2.1. Datasets, Subfunctions, and Placement into Distributed Servers
2.2. Cyclic Dataset Placement Model, Computation Capacity, and Recovery Threshold
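As a concrete illustration of the cyclic placement model, the following minimal Python sketch assigns datasets to servers. It assumes the special case where the number of datasets equals the number of servers ($K = N$, as in Scenario III of Section 4.1) and where server $i$ stores the $M$ cyclically consecutive datasets starting at index $i$; the function name and the 0-indexing are our conventions, not the paper's.

```python
def cyclic_placement(N: int, M: int) -> dict[int, list[int]]:
    """Cyclic dataset placement for K = N datasets and N servers, each with
    capacity M: server i stores datasets {i, i+1, ..., i+M-1} (mod N)."""
    return {i: [(i + j) % N for j in range(M)] for i in range(N)}

# Example: N = 4 servers, capacity M = 2 datasets each.
# -> {0: [0, 1], 1: [1, 2], 2: [2, 3], 3: [3, 0]}
print(cyclic_placement(N=4, M=2))
```

In this sketch, any $N_r$ consecutive servers jointly store $\min(N, N_r + M - 1)$ distinct datasets, so all $N$ datasets are covered once $N_r \geq N - M + 1$, which is what makes a recovery threshold of fewer than $N$ servers possible.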
2.3. User Demands and Structure of the Computation
2.4. Communication Cost for the Characteristic-Graph-Based Computing Approach
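As a hedged sketch of how the cost in this subsection aggregates — assuming, as in Theorem 1 below, that each server transmits a coloring-based encoding of the union characteristic graph it builds on its locally computed subfunctions — the communication cost is a sum-rate of the form

$$
R_{\mathrm{ach}} \;=\; \sum_{i} R_i, \qquad R_i \;\geq\; H_{G_{\mathcal{X}_i}}\!\big(X_{\Omega_i}\big),
$$

where $G_{\mathcal{X}_i}$ and $X_{\Omega_i}$ are our shorthand for server $i$'s union characteristic graph and its locally available subfunctions, respectively; the precise conditioning in the bound is as stated in Theorem 1 and is not reproduced here.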
3. Main Results
- is the union characteristic graph (see (A12) in Appendix A.2 for the definition of a union of characteristic graphs) that the server builds for computing its demanded functions,
- denotes a codebook of functions that the server uses for computing those functions,
- each subfunction is defined over a q-ary field whose characteristic is at least 2, and
- denotes the information transmitted from the server.
- denotes a codebook of Boolean functions that the server uses,
- denotes the information transmitted from the server,
- has two maximal independent sets (MISs), which correspond to the two possible function values, respectively, and
- the probability that the demanded function yields the value one is given in terms of the following quantities:
- denotes the probability that the product of M subfunctions, with the subfunctions being i.i.d. across the datasets, takes the value one,
- the variable denotes the minimum number of servers needed to compute the demand, given as in (11), where each of these servers computes a disjoint product of M subfunctions, and
- the variable indicates whether an additional server is needed to aid the computation; if so, it also determines the number of remaining subfunctions to be computed by that additional server (see the sketch after this list).
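The following minimal Python sketch makes the server-counting logic above concrete. It assumes the natural reading of the list: the K-multi-linear product is split into disjoint blocks of at most M subfunctions per server, with each subfunction i.i.d. Bern($\epsilon$); the function names are ours, and the exact rate expressions of the result are not reproduced.

```python
def multilinear_split(K: int, M: int) -> tuple[int, int, int]:
    """Split a K-multi-linear product W_1 * W_2 * ... * W_K into disjoint
    blocks of at most M subfunctions per server.

    Returns (full_servers, extra, remainder): `full_servers` servers each
    compute a product of exactly M subfunctions; `extra` is 1 if one
    additional server is needed for the remaining `remainder` < M
    subfunctions, and 0 otherwise."""
    full_servers, remainder = divmod(K, M)
    extra = 1 if remainder > 0 else 0
    return full_servers, extra, remainder

def prob_block_is_one(M: int, eps: float) -> float:
    """P(a product of M i.i.d. Bern(eps) subfunctions equals 1) = eps**M."""
    return eps ** M

# Example: K = 7 subfunctions, capacity M = 3 per server:
# 2 full servers, plus 1 extra server for the remaining 1 subfunction.
print(multilinear_split(K=7, M=3))        # (2, 1, 1)
print(prob_block_is_one(M=3, eps=0.5))    # 0.125
```

In total, $\lceil K/M \rceil$ servers suffice, matching the sum of the full and additional server counts above.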
4. Numerical Evaluations to Demonstrate the Achievable Gains
4.1. Example Case: Distributed Computing of Linearly Separable Functions over $\mathbb{F}_q$
- Scenario I. The number of demanded functions is $K_c = 1$, where the subfunctions could be uncorrelated or correlated.
- Scenario II. The number of demanded functions is $K_c = 2$, where the subfunctions could be uncorrelated or correlated (a sketch of one correlated model follows this list).
- Scenario III. The number of demanded functions $K_c$ is varied, and the number of datasets equals the number of servers, i.e., $K = N$, where the subfunctions are uncorrelated.
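As a sketch of one standard way to correlate a pair of Bernoulli subfunctions, the following builds a joint PMF with Bern($\epsilon$) marginals and Pearson correlation $\rho$; whether this coincides exactly with the model in (17) is an assumption on our part, and the names are ours.

```python
def correlated_bern_pmf(eps: float, rho: float) -> dict[tuple[int, int], float]:
    """Joint PMF of (W2, W3) with Bern(eps) marginals and correlation rho,
    i.e., Cov(W2, W3) = rho * eps * (1 - eps). Valid for rho <= 1 and
    rho >= -min(eps / (1 - eps), (1 - eps) / eps)."""
    c = rho * eps * (1 - eps)  # covariance term
    pmf = {
        (1, 1): eps * eps + c,
        (1, 0): eps * (1 - eps) - c,
        (0, 1): eps * (1 - eps) - c,
        (0, 0): (1 - eps) * (1 - eps) + c,
    }
    assert all(p >= -1e-12 for p in pmf.values()), "invalid (eps, rho) pair"
    return pmf

# rho = 0 recovers the independent case used in the rho = 0 plots.
print(correlated_bern_pmf(eps=0.3, rho=0.0))
```

Setting $\rho = 0$ gives independent subfunctions; $\rho > 0$ biases the pair toward agreement, the regime swept in Figure 4 (Right).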
4.2. Distributed Computation of K-Multi-Linear Functions over $\mathbb{F}_2$
5. Conclusions
Author Contributions
Funding
Institutional Review Board Statement
Informed Consent Statement
Data Availability Statement
Acknowledgments
Conflicts of Interest
Abbreviations
| Abbreviation | Meaning |
| --- | --- |
| ach | Achievable |
| Bern | Bernoulli |
| G | Graph |
| i.i.d. | Independent and identically distributed |
| lin | Linearly separable encoding |
| MIS | Maximal independent set |
| PMF | Probability mass function |
| SW | Slepian–Wolf encoding |
Appendix A. Technical Preliminary
Appendix A.1. Distributed Source Compression, and Communication Cost
Appendix A.2. Characteristic Graphs, Distributed Functional Compression, and Communication Cost
- 1. Let the PMF be uniform over the source alphabet, and assume the characteristic graph has only one edge, which fixes its set of MISs. To determine the entropy of the characteristic graph from (A3), the objective is to minimize the mutual information between the source and the MIS-valued auxiliary variable, which is a convex function of the conditional PMF; the minimum is attained by splitting the probability of the vertices shared by both MISs evenly between them. As a result of this PMF, the graph entropy follows (see also the numerical sketch after this list).
- 2. Let the PMF be uniform over the source alphabet, and note the dependence between the two sources imposed by the joint PMF. To determine the conditional characteristic graph entropy using (A4), the objective is to minimize the conditional mutual information, which is convex in the conditional PMF; selecting the minimizing conditional distribution, we obtain the conditional graph entropy.
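To make the minimization in the first example concrete, here is a minimal numerical sketch. It assumes the smallest interesting instance of the one-edge example — a uniform source on a ternary alphabet {x1, x2, x3} with the single edge (x1, x2), so the MISs are S1 = {x1, x3} and S2 = {x2, x3} — and sweeps the only free parameter of the conditional PMF; the instance and all names are our assumptions.

```python
import numpy as np

def h2(p: float) -> float:
    """Binary entropy in bits, with h2(0) = h2(1) = 0."""
    if p <= 0.0 or p >= 1.0:
        return 0.0
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

def one_edge_graph_entropy(grid: int = 100001) -> tuple[float, float]:
    """H_G(X) = min I(X; W), W ranging over MISs containing X.
    x1 must map to S1 and x2 to S2; only x3 randomizes, with
    a = P(W = S1 | x3) the single free parameter."""
    best_val, best_a = np.inf, 0.0
    for a in np.linspace(0.0, 1.0, grid):
        p_s1 = (1 + a) / 3            # P(W = S1) under uniform X
        mi = h2(p_s1) - h2(a) / 3     # I(X; W) = H(W) - H(W | X)
        if mi < best_val:
            best_val, best_a = mi, a
    return best_val, best_a

val, a = one_edge_graph_entropy()
print(f"H_G(X) ~ {val:.4f} bits at a = {a:.3f}")  # ~0.6667 bits at a = 0.5
```

The sweep recovers the even split a = 1/2 and a graph entropy of 2/3 bit, strictly below the H(X) = log2(3) ≈ 1.585 bits required without exploiting the function structure.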
Appendix A.2.1. A Characteristic-Graph-Based Encoding Framework for Simultaneously Computing a Set of Functions
Appendix A.2.2. Distributed Functional Compression
Appendix B. Proofs of Main Results
Appendix B.1. Proof of Theorem 1
Appendix B.2. Proof of Proposition 1
- (i) In the first parameter regime, it is sufficient for each server to transmit linearly independent combinations of its subfunctions. This resolves, from the responding servers, enough linear combinations of the K subfunctions to derive the demanded linear functions; the remaining linear combinations of the K subfunctions stay unresolved.
- (ii) In the second regime, it is sufficient for each server to transmit at most the stated number of linearly independent combinations of its subfunctions, which resolves part of the linear combinations of the K subfunctions and leaves the rest unresolved.
- (iii) In the third regime, each server needs to transmit at a rate equal to the number of linearly independent combinations required to meet the demand, which yields the stated sum-rate. A subset of servers may need to provide additional linear combinations, up to a stated per-server maximum that applies when the corresponding threshold condition holds.
- (iv) In the final regime, since any K linearly independent equations in (A23) suffice to recover the demand, the sum-rate K is achievable (see the decodability sketch after this list).
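The case analysis above repeatedly reduces to one question: do the servers' transmitted linear combinations span the demanded ones? A minimal sketch of that decodability test follows, using real arithmetic as a stand-in for the q-ary field of the paper and random coefficients supported on each server's cyclically placed datasets; the names and the random-coefficient shortcut are our assumptions.

```python
import numpy as np

def recoverable(demand: np.ndarray, transmissions: np.ndarray) -> bool:
    """The demanded combinations F (K_c x K) are decodable from the
    transmitted combinations A iff F lies in the row span of A,
    i.e., rank(A) == rank([A; F])."""
    return np.linalg.matrix_rank(transmissions) == np.linalg.matrix_rank(
        np.vstack([transmissions, demand])
    )

rng = np.random.default_rng(0)
N = K = 6                                  # servers and datasets (K = N)
M = 3                                      # per-server capacity
rows = []
for i in range(4):                         # N_r = 4 responding servers
    local = [(i + j) % N for j in range(M)]        # cyclic placement
    for _ in range(2):                             # 2 combinations per server
        row = np.zeros(K)
        row[local] = rng.integers(1, 7, size=M)    # supported on local data
        rows.append(row)
A = np.array(rows)

F = np.ones((1, K))            # demand: the sum of all K subfunctions
print(recoverable(F, A))       # True w.h.p.: 8 generic rows span R^6
```

With 4 servers each sending 2 generic combinations, the 8 transmitted rows span all of $\mathbb{R}^6$ with high probability, so any demand — in particular the all-ones sum — is decodable; this mirrors case (iv), where K linearly independent equations suffice.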
Appendix B.3. Proof of Proposition 2
Appendix B.4. Proof of Proposition 3
| Distributed-Computation-System-Related Definitions | Symbols |
| --- | --- |
| Number of distributed servers; set of distributed servers; capacity of a server | N; ; M |
| Set of datasets; dataset catalog size | ; K |
| Subfunction | |
| The number of symbols in each subfunction; blocklength | L; n |
| Set of indices of datasets assigned to a server | |
| Set of subfunctions corresponding to a subset of servers | |
| Recovery threshold | $N_r$ |
| Number of demanded functions by the user | $K_c$ |
| Number of symbols per transmission of a server | |
| Topology of the multi-server multi-function distributed computing setting | $\mathcal{T}$ |

| Graph-Theoretic Definitions | Symbols |
| --- | --- |
| Characteristic graph that server i builds for computing the demanded functions | |
| Union of characteristic graphs that server i builds for computing the demanded functions | |
| Maximal independent set (MIS); set of all MISs of a graph | |
| A valid coloring of a characteristic graph | |
| n-th OR power graph; a valid coloring of the n-th OR power graph | |
| Characteristic graph entropy | |
| Conditional characteristic graph entropy of one source given another | |