Mathematical Modelling of Complex Systems

A special issue of Axioms (ISSN 2075-1680). This special issue belongs to the section "Mathematical Physics".

Deadline for manuscript submissions: closed (28 February 2025) | Viewed by 8009

Special Issue Editors


Dr. Haichuan Yang
Guest Editor
Graduate School of Information Sciences, Tohoku University, Sendai 980-8579, Japan
Interests: meta-heuristic algorithms; neuron model; complex systems; renewable energy

Dr. Chaofeng Zhang
Guest Editor
Advanced Institute of Industrial Technology, Tokyo 140-0011, Japan
Interests: wireless communication technology; cloud computing; internet of things; cyber physical system; mobile computing

Special Issue Information

Dear Colleagues,

Complex systems, defined as systems composed of a large number of interacting parts and characterized by non-linearity, adaptability, and dynamic change, span significant fields such as climatology, ecology, economics, social networks, cyber-physical systems, and biology. Understanding these systems is crucial for interpreting many key phenomena in our world. However, because of their inherent characteristics, it is often challenging to accurately describe and predict complex systems using traditional mathematical and computational methods.

Against this backdrop, artificial intelligence (AI) technologies such as neural networks and evolutionary algorithms have become critically important in the study of complex systems. These AI algorithms themselves are examples of complex systems, displaying numerous interactions, non-linearity, dynamic changes, and adaptability. This complexity poses a challenge but also offers a new approach to understanding and researching complex systems.

On the one hand, AI technologies like neural networks and evolutionary algorithms can be employed for the simulation and optimization of complex systems. They can handle high-dimensional, non-linear, and dynamic data, thereby providing deep insights into and effective predictions of the behaviour of complex systems. Mathematical modelling plays a pivotal role in this process, offering a rigorous theoretical framework for the modelling of complex systems and providing an accurate computational foundation for the application of AI technologies such as neural networks and evolutionary algorithms.

On the other hand, the mathematical tools and concepts in complex systems theory provide valuable insights for understanding and improving AI algorithms. The dynamic, adaptive, and non-linear properties of complex systems offer new perspectives and tools for understanding the training process of neural networks, improving the search strategies of evolutionary algorithms, and designing more efficient AI systems.

This Special Issue, "Mathematical Modelling of Complex Systems," aims to explore this two-way relationship, with a particular focus on research that uses mathematical modelling methods to aid AI technologies in the modelling and optimization of complex systems. At the same time, we welcome works that use mathematical tools from complex systems theory to understand and improve AI algorithms. We encourage interdisciplinary research that studies complex systems from an AI perspective and investigates AI through the lens of complex systems, thereby promoting the development of this field.

Dr. Haichuan Yang
Dr. Chaofeng Zhang
Guest Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the special issue website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Axioms is an international peer-reviewed open access monthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2400 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • complex systems
  • mathematical modelling
  • artificial intelligence
  • neural networks
  • evolutionary algorithms
  • adaptability
  • dynamics
  • system optimization
  • Internet of Things
  • cyber physical system

Benefits of Publishing in a Special Issue

  • Ease of navigation: Grouping papers by topic helps scholars navigate broad scope journals more efficiently.
  • Greater discoverability: Special Issues support the reach and impact of scientific research. Articles in Special Issues are more discoverable and cited more frequently.
  • Expansion of research network: Special Issues facilitate connections among authors, fostering scientific collaborations.
  • External promotion: Articles in Special Issues are often promoted through the journal's social media, increasing their visibility.
  • e-Book format: Special Issues with more than 10 articles can be published as dedicated e-books, ensuring wide and rapid dissemination.

Further information on MDPI's Special Issue policies can be found here.

Published Papers (4 papers)


Research


18 pages, 543 KiB  
Article
AHD-SLE: Anomalous Hyperedge Detection on Hypergraph Symmetric Line Expansion
by Yingle Li, Hongtao Yu, Haitao Li, Fei Pan and Shuxin Liu
Axioms 2024, 13(6), 387; https://doi.org/10.3390/axioms13060387 - 7 Jun 2024
Viewed by 1191
Abstract
Graph anomaly detection aims to identify unusual patterns or structures in graph-structured data. Most existing research focuses on anomalous nodes in ordinary graphs with pairwise relationships. However, complex real-world systems often involve relationships that go beyond pairwise relationships, and insufficient attention is paid to hypergraph anomaly detection, especially anomalous hyperedge detection. Some existing methods for researching hypergraphs involve transforming hypergraphs into ordinary graphs for learning, which can result in poor detection performance due to the loss of high-order information. We propose a new method for Anomalous Hyperedge Detection on Symmetric Line Expansion (AHD-SLE). The SLE of a hypergraph is an ordinary graph with pairwise relationships and can be backmapped to the hypergraph, so the SLE is able to preserve the higher-order information of the hypergraph. The AHD-SLE first maps the hypergraph to the SLE; then, the information is aggregated by Graph Convolutional Networks (GCNs) in the SLE. After that, the hyperedge embedding representation is obtained through a backmapping operation. Finally, an anomaly function is designed to detect anomalous hyperedges using the hyperedge embedding representation. Experiments on five different types of real hypergraph datasets show that AHD-SLE outperforms the baseline algorithm in terms of the Area Under the receiver operating characteristic Curve (AUC) and Recall metrics.
(This article belongs to the Special Issue Mathematical Modelling of Complex Systems)
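
To make the pipeline in the abstract concrete, the sketch below builds a toy symmetric line expansion, runs one untrained GCN-style propagation step, backmaps to hyperedge embeddings, and scores them. It is a minimal illustration rather than the authors' code: the edge weights, the propagation step, and the distance-based anomaly score are simplified stand-ins.

```python
# Minimal sketch of an AHD-SLE-style pipeline (illustration only; the weights,
# the single untrained propagation step, and the anomaly score are stand-ins,
# not the formulation used in the paper).
import numpy as np

def sle_graph(hyperedges, w_v=1.0, w_e=1.0):
    """Symmetric line expansion: one SLE node per (vertex, hyperedge) incidence
    pair; connect pairs sharing the vertex (weight w_v) or the hyperedge (w_e)."""
    pairs = [(v, i) for i, e in enumerate(hyperedges) for v in e]
    n = len(pairs)
    A = np.zeros((n, n))
    for a in range(n):
        for b in range(a + 1, n):
            if pairs[a][0] == pairs[b][0]:      # same vertex
                A[a, b] = A[b, a] = w_v
            elif pairs[a][1] == pairs[b][1]:    # same hyperedge
                A[a, b] = A[b, a] = w_e
    return pairs, A

def gcn_step(A, X):
    """One GCN-style propagation step with symmetric normalisation (untrained)."""
    A_hat = A + np.eye(len(A))
    d_inv_sqrt = np.diag(1.0 / np.sqrt(A_hat.sum(axis=1)))
    return np.tanh(d_inv_sqrt @ A_hat @ d_inv_sqrt @ X)

def backmap(pairs, H, num_edges):
    """Backmapping: average the embeddings of the SLE nodes of each hyperedge."""
    emb = np.zeros((num_edges, H.shape[1]))
    for k, (_, e) in enumerate(pairs):
        emb[e] += H[k]
    counts = np.bincount([e for _, e in pairs], minlength=num_edges)
    return emb / counts[:, None]

hyperedges = [{0, 1, 2}, {2, 3}, {3, 4, 5, 6}]             # toy hypergraph
pairs, A = sle_graph(hyperedges, w_v=1.2, w_e=0.8)
X = np.random.default_rng(0).normal(size=(len(pairs), 4))  # toy SLE node features
Z = backmap(pairs, gcn_step(A, X), len(hyperedges))
scores = np.linalg.norm(Z - Z.mean(axis=0), axis=1)        # toy anomaly score per hyperedge
print(scores)
```

In the paper the GCN is trained and the anomaly function is designed on the learned hyperedge embeddings; the sketch only shows how the SLE construction and the backmapping operation fit together.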

Figures

  • Figure 1. Hypergraph expansions: (a) a hypergraph with 7 nodes and 3 hyperedges; (b) star expansion; (c) clique expansion; (d) line expansion; (e) symmetric line expansion.
  • Figure 2. The bijection between node–hyperedge pairs and SLE graph nodes.
  • Figure 3. The anomalous hyperedge detection method on the hypergraph symmetric line expansion.
  • Figure 4. Impact of different anomalous proportions on anomaly detection: (a) impact on AUC; (b) impact on R@k.
  • Figure 5. Impact of different weights w_v and w_e on detection performance, where w_e = 2 − w_v: (a) AUC and (b) R@k as functions of w_v.
  • Figure 6. Impact of different GCN hidden sizes on detection performance: (a) impact on AUC; (b) impact on R@k.

15 pages, 817 KiB  
Article
A Hyperparameter Self-Evolving SHADE-Based Dendritic Neuron Model for Classification
by Haichuan Yang, Yuxin Zhang, Chaofeng Zhang, Wei Xia, Yifei Yang and Zhenwei Zhang
Axioms 2023, 12(11), 1051; https://doi.org/10.3390/axioms12111051 - 15 Nov 2023
Cited by 2 | Viewed by 1501
Abstract
In recent years, artificial neural networks (ANNs), which are based on the foundational model established by McCulloch and Pitts in 1943, have been at the forefront of computational research. Despite their prominence, ANNs have encountered a number of challenges, including hyperparameter tuning and the need for vast datasets. Because many strategies have focused predominantly on enhancing the depth and intricacy of these networks, the processing capabilities of individual neurons are occasionally overlooked. Consequently, a biologically accurate dendritic neuron model (DNM) that mirrors the spatio-temporal features of real neurons was introduced. However, while the DNM shows outstanding performance in classification tasks, it struggles with complexities in parameter adjustment. In this study, we introduced the hyperparameters of the DNM into an evolutionary algorithm, thereby transforming the setting of the DNM's hyperparameters from manual adjustment to adaptive adjustment as the algorithm iterates. The newly proposed framework represents a neuron that evolves alongside the iterations, thus simplifying the parameter-tuning process. Comparative evaluation on benchmark classification datasets from the UCI Machine Learning Repository indicates that our minor enhancements lead to significant improvements in the performance of the DNM, surpassing other leading-edge algorithms in terms of both accuracy and efficiency. In addition, we analyzed the iterative process using complex networks, and the results indicate that the information interaction during the iteration and evolution of the DNM follows a power-law distribution. This finding could provide insights for the study of neuron model training.
(This article belongs to the Special Issue Mathematical Modelling of Complex Systems)
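
The core idea, folding the DNM's hyperparameters into the evolving population so that they adapt as the search iterates, can be sketched with a plain differential-evolution loop. The sketch below is an illustration under stated assumptions, not the SHADE variant or the DNM from the paper: the objective is a toy function, and the last two genes stand in for hyperparameters such as a synaptic steepness and a threshold.

```python
# Toy sketch of hyperparameters co-evolving with model parameters in a
# differential-evolution loop (plain DE/rand/1 with fixed F and CR; SHADE
# additionally adapts F and CR from a success history).
import numpy as np

rng = np.random.default_rng(1)

def fitness(genome):
    """Placeholder objective; in the paper's setting this would be the DNM's
    classification loss. The last two genes act as hyperparameters."""
    params, (k, theta) = genome[:-2], np.abs(genome[-2:])
    return np.sum((params - theta) ** 2) + 0.1 * k

dim, pop_size, gens = 8, 20, 100            # 6 "weights" + 2 hyperparameters
pop = rng.normal(size=(pop_size, dim))
f_vals = np.array([fitness(g) for g in pop])

for _ in range(gens):
    for i in range(pop_size):
        a, b, c = pop[rng.choice(pop_size, size=3, replace=False)]
        mutant = a + 0.5 * (b - c)          # fixed scale factor F = 0.5
        cross = rng.random(dim) < 0.9       # fixed crossover rate CR = 0.9
        trial = np.where(cross, mutant, pop[i])
        f_trial = fitness(trial)
        if f_trial < f_vals[i]:             # greedy selection
            pop[i], f_vals[i] = trial, f_trial

best = pop[np.argmin(f_vals)]
print("best fitness:", f_vals.min(), "evolved hyperparameters:", best[-2:])
```

Because the hyperparameters sit inside the genome, they are selected for jointly with the parameters, which is the adaptive adjustment described in the abstract.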

Figures

  • Figure 1. The structure of the dendritic neuron model.
  • Figure 2. The learning process of a DNM.
  • Figure 3. Connection cases of the synaptic layer.
  • Figure 4. Convergence graphs.
  • Figure 5. Box graphs.

20 pages, 6303 KiB  
Article
Optimizing Port Multi-AGV Trajectory Planning through Priority Coordination: Enhancing Efficiency and Safety
by Yongjun Chen, Shuquan Shi, Zong Chen, Tengfei Wang, Longkun Miao and Huiting Song
Axioms 2023, 12(9), 900; https://doi.org/10.3390/axioms12090900 - 21 Sep 2023
Cited by 2 | Viewed by 1925
Abstract
Efficient logistics and transport at a port rely heavily on efficient AGV scheduling and planning for container transshipment. This paper presents a comprehensive approach to addressing the challenges of AGV path planning and coordination within the domain of intelligent transportation systems. We propose an enhanced graph search method for constructing the global path of a single AGV that mitigates the issues associated with paths closely aligned with obstacle corner points. Moreover, a centralized global planning module is developed to facilitate planning and scheduling. Each individual AGV establishes real-time communication with the upper layers to accurately determine its position at complex intersections. By computing its priority sequence within a coordination circle, the AGV treats the high-priority trajectories of other vehicles as dynamic obstacles for its local trajectory planning. The feasibility of the trajectory information is ensured by solving an online real-time Optimal Control Problem (OCP). In the trajectory planning process for a single AGV, we incorporate a linear programming-based obstacle avoidance strategy. This strategy transforms the obstacle avoidance optimization problem into trajectory planning constraints using Karush-Kuhn-Tucker (KKT) conditions. Consequently, seamless and secure AGV movement within the port environment is guaranteed. The global planning module encompasses a global regulatory mechanism that provides each AGV with an initial feasible path. This approach not only facilitates complexity decomposition for large-scale problems but also maintains path feasibility through continuous real-time communication with the upper layers during AGV travel. A key advantage of our progressive solution lies in its flexibility and scalability: it readily accommodates extensions of the original problem and allows the overall problem size to be adjusted in response to varying port cargo throughput, all without requiring a complete system overhaul.
(This article belongs to the Special Issue Mathematical Modelling of Complex Systems)
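
The priority-coordination step described in the abstract can be sketched as follows. The priority key (distance to goal), the data structures, and the stubbed local planner are assumptions made for illustration; in the paper, the local plan comes from an online real-time OCP with KKT-based obstacle-avoidance constraints.

```python
# Sketch of priority-based coordination inside a coordination circle
# (illustration only; the priority rule and the stubbed planner are assumptions,
# not the paper's exact scheme).
from dataclasses import dataclass, field

@dataclass
class AGV:
    name: str
    dist_to_goal: float                              # assumed priority key
    trajectory: list = field(default_factory=list)   # planned (t, x, y) samples

def plan_local_trajectory(agv, dynamic_obstacles):
    """Stub for the local planner. The real method solves an online OCP whose
    constraints keep the AGV away from the given dynamic obstacles."""
    # The stub ignores dynamic_obstacles and returns a dummy straight-line plan.
    return [(t, agv.dist_to_goal - t, 0.0) for t in range(3)]

def coordinate(agvs_in_circle):
    """Order AGVs by priority; each one plans while treating the already planned
    trajectories of higher-priority AGVs as dynamic obstacles."""
    ordered = sorted(agvs_in_circle, key=lambda a: a.dist_to_goal)
    for rank, agv in enumerate(ordered):
        higher_priority_trajectories = [other.trajectory for other in ordered[:rank]]
        agv.trajectory = plan_local_trajectory(agv, higher_priority_trajectories)
    return ordered

fleet = [AGV("agv1", 12.0), AGV("agv2", 4.5), AGV("agv3", 8.0)]
for agv in coordinate(fleet):
    print(agv.name, agv.trajectory)
```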

Figures

  • Figure 1. Path generated by RRT* (left), with the exploring random tree branches and sampling points, and the Voronoi graph (right), whose edges form the feasible roadmap around the obstacles.
  • Figure 2. Paths generated by the A* algorithm (left) and the augmented graph search (right); the augmented search adds virtual obstacles near corner points.
  • Figure 3. The augmented graph-search-based motion planning framework.
  • Figure 4. Pose variables of the robot.
  • Figure 5. Path generated by the A* algorithm in Rviz, with start and goal poses marked.
  • Figure 6. A collision occurring during the interval [t_j, t_{j+1}]; adding a new fit point t_j' in [t_j, t_{j+1}] makes the modified spline collision-free.
  • Figure 7. Collision avoidance between two convex polygons M and N.
  • Figure 8. Numerical simulation of obstacle avoidance using the J-function with a rectangular obstacle.
  • Figure 9. Numerical simulation of obstacle avoidance using the J-function with circular obstacles.
  • Figure 10. Variation of the heading angle θ over time under J-function obstacle avoidance: (a) rectangle collision avoidance; (b) circle collision avoidance.
  • Figure 11. Simulation in S and L environments in Gazebo, with a static wall and randomly added cuboid and sphere dynamic obstacles.
  • Figure 12. Simulation in S and L environments in Rviz; green lines are the global paths and red lines the actual tracking trajectories.
  • Figure 13. Port overview scene: container yard, coordination circle, container ship berth, and port exit.
  • Figure 14. Multi-AGV coordinated transportation in the port scene.
  • Figure 15. Comparison of the proposed method with the coupled method [51]: (a) method comparison; (b) total planning and enhanced graph search time.
  • Figure 16. Multi-AGV coordinated transportation in a double-intersection scene.


Other


48 pages, 1898 KiB  
Essay
The Code Underneath
by Julio Rives
Axioms 2025, 14(2), 106; https://doi.org/10.3390/axioms14020106 - 30 Jan 2025
Viewed by 424
Abstract
An inverse-square probability mass function (PMF) is at the Newcomb–Benford law (NBL)'s root and ultimately at the origin of positional notation and conformality. $\Pr(Z) = (2Z^2)^{-1}$, where $Z \in \mathbb{Z}^+$. Under its tail, we find information as harmonic likelihood $\mathcal{L}([s,t)) = H_{t-1} - H_{s-1}$, where $H_n$ is the $n$th harmonic number. The global $\mathbb{Q}$-NBL is $\Pr(b,q) = \mathcal{L}([q,q+1)) / \mathcal{L}([1,b)) = (q H_{b-1})^{-1}$, where $b$ is the base and $q$ is a quantum ($1 \le q < b$). Under its tail, we find information as logarithmic likelihood $\ell([i,j)) = \ln(j/i)$. The fiducial $\mathbb{R}$-NBL is $\Pr(r,d) = \ell([d,d+1)) / \ell([1,r)) = \log_r(1 + 1/d)$, where $r < b$ is the radix of a local complex system. The global Bayesian rule multiplies the correlation between two numbers, $s$ and $t$, by a likelihood ratio that is the NBL probability of bucket $[s,t)$ relative to $b$'s support. To encode the odds of quantum $j$ against $i$ locally, we multiply the prior odds $\Pr(b,j)/\Pr(b,i)$ by a likelihood ratio, which is the NBL probability of bin $[i,j)$ relative to $r$'s support; the local Bayesian coding rule is $\tilde{o}(j{:}i \mid r) = (i/j) \log_r(j/i)$. The Bayesian rule to recode local data from radix $r$ to radix $r'$ is $\tilde{o}(j{:}i \mid r') = \tilde{o}(j{:}i \mid r) \, \ln r / \ln r'$. Global and local Bayesian data are elements of the algebraic field of "gap ratios", $(A{:}B \mid C{:}D)$. The cross-ratio, the central tool in conformal geometry, is a subclass of gap ratio. A one-dimensional coding source reflects the global Bayesian data of the harmonic external world, the annulus $\{x \in \mathbb{Q} \mid 1 \le x < b\}$, into the local Bayesian data of its logarithmic coding space, the ball $\{x \in \mathbb{Q} \mid |x| < 1 - 1/b\}$. The source's conformal encoding function is $y = \log_r(2x - 1)$, where $x$ is the observed Euclidean distance to an object's position. The conformal decoding function is $x = \tfrac{1}{2}(1 + r^y)$. Both functions, unique under basic requirements, enable information- and granularity-invariant recursion to model the multiscale reality.
(This article belongs to the Special Issue Mathematical Modelling of Complex Systems)
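
The formulas quoted in the abstract can be checked numerically with a short script (an illustration added here, not code from the paper): the global $\mathbb{Q}$-NBL and the fiducial $\mathbb{R}$-NBL each sum to one over their support, and the conformal encoding and decoding functions invert each other.

```python
# Numerical sanity check of the abstract's formulas (illustration only).
import math

def harmonic(n):
    """H_n = 1 + 1/2 + ... + 1/n."""
    return sum(1.0 / k for k in range(1, n + 1))

def q_nbl(b, q):
    """Global Q-NBL: Pr(b, q) = (q * H_{b-1})^-1 for a quantum 1 <= q < b."""
    return 1.0 / (q * harmonic(b - 1))

def r_nbl(r, d):
    """Fiducial R-NBL: Pr(r, d) = log_r(1 + 1/d) for a digit 1 <= d < r."""
    return math.log(1 + 1 / d, r)

def encode(x, r):
    """Conformal encoding y = log_r(2x - 1)."""
    return math.log(2 * x - 1, r)

def decode(y, r):
    """Conformal decoding x = (1 + r**y) / 2."""
    return (1 + r ** y) / 2

b = r = 10
print(sum(q_nbl(b, q) for q in range(1, b)))   # ~1.0: the Q-NBL is a proper PMF
print(sum(r_nbl(r, d) for d in range(1, r)))   # ~1.0: the R-NBL is a proper PMF
x = 3.7
print(decode(encode(x, r), r))                 # ~3.7: decode inverts encode
```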

Figures

  • Figure 1. The road to conformal coding: the canonical PMF, the harmonic and logarithmic likelihoods, the global Q-NBL and the fiducial R-NBL, and the global and local Bayesian rules, culminating in a coding source that conformally encodes the information perceived from the external (harmonic) world.
  • Figure 2. Example with b = 101 and r = e of how the three scales interact using Bayes' rules, including the conformal encoding and decoding of a global Bayesian datum.
  • Figure 3. Comparison of the global with the local (fiducial) NBL for the standard ternary, quaternary, decimal, and undecimal numeral systems and several of their partitions.
  • Figure 4. Information gaps induced by the undecimal numeral system, compared with the fiducial NBL for decimal numerals.
  • Figure 5. Information gaps induced by the quanta of global standard base 10,000 and the digits of local standard radix 10,000, compared with the fiducial NBL.
  • Figure 6. Odds that yield the past, the future, and the bipartite entropy with radix 100 (99 applicants); the maxima and minimum mark the most informative and the least distinguishable bipartitions.
  • Figure 7. Example of representing gap ratios and their arithmetic (addition, subtraction, multiplication, and division).
  • Figure 8. The coding functions of the 1-ball conformal model.
  • Figure 9. The three most profound levels of granularity the coding source generates to encode the number googol with radix r = 3, together with the conformal encoding and decoding functions.
  • Figure 10. The canonical PMF for the integer numbers, whose plot resembles the inverted cross-section of a sombrero potential with a nonzero vacuum expectation value.