1. Introduction
In the event of a hazardous incident, e.g., an accidental or malicious release of chemical, biological, radiological, or nuclear (CBRN) materials, the timely acquisition of information related to the incident (the position of the source and the structure of the associated contamination field) is vital for comprehensive situational awareness, prompt decision-making, and an effective mitigation strategy. In this regard, a well-designed sensor system that can be rapidly deployed in the affected area is a valuable capability for gaining such information.
In recent years, the use of mobile autonomous vehicles (unmanned aerial vehicles (UAVs) or unmanned ground vehicles (UGVs)) with sensors mounted onboard has attracted increasing attention. These vehicles have been employed for hazard source localization/backtracking [1,2,3,4,5,6,7,8,9,10] and for inference of field structure [11,12,13,14,15,16,17]. For static or slowly varying scenarios, the application of these vehicles to hazard field estimation and reconstruction has proven favourable [11,12,13,14,15,16,17]. The problem becomes more challenging in the dynamic case, when the field changes more rapidly with time. The issue stems from logistical constraints that prevent timely collection of measurements. Indeed, the number of measurements required to estimate the structure of the field in an operational scenario is usually at least on the order of hundreds [11,14,15,17]. Due to the finite speed of UAVs/UGVs, a vehicle (or vehicles) may therefore need significant time to travel to different locations and collect the informative measurements. Other potential issues relate to constraints on the flight time of small UAVs (often less than half an hour [13]) and the high operational cost of running multiple UGVs simultaneously [18]. These factors motivate the investigation of alternative solutions for collecting the dynamic measurements that can subsequently be used as input to a data fusion system for inferring the time-varying contaminated field structure. One favourable option considered in the literature is a sensor network that can be promptly deployed [19,20] after the hazardous release.
We assume that the sensor network consists of a large number of small, low-cost (and possibly disposable) sensor nodes with basic sensing, communication, and computation capabilities (such sensors are also known as low size, weight, power, and cost (SWaP-C) sensors). In the event of a CBRN incident, many sensors could, for example, be dropped into an affected area from an air platform to form a dynamic sensor network [19,20]. In our study, we consider the scenario where each node calculates its own estimate of the field in a distributed manner, with nodes communicating locally with others within a given communication range. The sensors are assumed to have access only to coarsely quantized measurements, which is motivated by the use of low-cost sensors with limited capabilities, as well as the fact that many chemical sensors only output a small number of discrete bars [21,22].
Quantization is a nonlinear process encountered in many areas, such as signal processing [23], wireless communications [24], system identification [25], and feedback control [26]. Quantization is inevitable whenever physical quantities are represented numerically in digital systems [27]. While quantization offers advantages such as reducing the number of bits required to represent information, and hence the transmission bandwidth, its nonlinear nature also makes the design of systems and algorithms challenging, especially when the quantization is coarse (i.e., the number of quantization levels is small).
Previous studies on field estimation using sensor networks include [28,29,30,31,32,33,34,35,36]. Distributed estimation schemes are considered in [31,33,35], while [28,29,30,32,34,36] assume centralized estimation using a fusion centre. Poisson distributed measurements are used in [30], binary measurements in [36], general quantized measurements in [34], and noisy (non-quantized) measurements are assumed in [28,29,31,32,33,35]. Furthermore, time-varying fields are considered by assuming a PDE model for the field in [29,36]. However, field estimation algorithms that are simultaneously (1) distributed, (2) able to handle quantized measurements, and (3) able to handle time-varying fields appear to be lacking in the literature, which motivates the current work.
Our approach to field estimation entails approximating the original field as a weighted sum of radial basis functions [37], whose coefficients are then estimated. To account for the local communication between neighbouring nodes, two conceptual schemes are considered. In the first scheme, which we call measurement diffusion, each node broadcasts to its neighbours the latest received measurement of every node in the network. After receiving the broadcasts from its neighbours, each node then updates its field parameter estimates using the newly acquired measurements via an online optimization approach [38]. In the second scheme, called estimate diffusion, each node first forms local (pre-)estimates and Hessians of the field parameters, which are then broadcast to its neighbours. Subsequently, each node combines the received pre-estimates using an estimate fusion method such as covariance intersection [39] or inverse covariance intersection [40].
The key contributions of this paper are the following:
We propose novel distributed estimation algorithms that can estimate the global field structure at each node in the sensor network, using quantized measurements and local communications.
The proposed algorithms are iterative, and can handle time variations in the fields and adjust their estimates accordingly.
We present comprehensive numerical studies of the algorithms, which demonstrate that the proposed algorithms can achieve steady state performance close to that of centralized estimation.
The paper is organized as follows. Section 2 presents the system model and problem statement. The measurement diffusion scheme is presented in Section 3. A brief overview of estimate fusion is given in Section 4, which serves as a preliminary to the estimate diffusion scheme considered in Section 5. Some discussions and extensions of the algorithms can be found in Section 6. Numerical results are given in Section 7. Section 8 draws conclusions.
Notation: Given a set $\mathcal{A}$, the cardinality of $\mathcal{A}$ is denoted by $|\mathcal{A}|$. A list of some commonly used symbols in this paper is given in Table 1.
2. System Model
Consider a region of interest $\mathcal{R}$. We have a field $f(\mathbf{x})$ taking values at positions $\mathbf{x} \in \mathcal{R}$. The field could, for instance, represent the distribution of concentration levels of some contaminant that we are interested in estimating. For notational simplicity, we do not explicitly include time dependence in the field $f(\mathbf{x})$, although our developed algorithms will be able to handle time-varying fields.
We consider a sensor network with M nodes located within $\mathcal{R}$. We assume that the nodes are numbered $1, \dots, M$ and each node knows its own ID. (We note that algorithms exist for assigning each node in a sensor network a unique ID; see, e.g., [41].) The sensor network can be represented as a graph $\mathcal{G} = (\mathcal{V}, \mathcal{E})$, where $\mathcal{V} = \{1, \dots, M\}$ and vertex/node m is placed at location $\mathbf{x}_m$. We assume that accurate knowledge of the sensor locations can be obtained via GPS. (If GPS is unavailable, then location estimation will also need to be considered [42,43,44].) An edge $(m, n) \in \mathcal{E}$ between nodes m and n exists if $\|\mathbf{x}_m - \mathbf{x}_n\| \le r_c$ for some communication radius $r_c$. Given a node m, the set of direct neighbours of m is denoted by $\mathcal{N}_m$. We will assume that the network is connected.
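As a minimal sketch of the network model above (function and variable names are ours, not from the paper), the communication graph can be built by thresholding pairwise distances, and connectivity checked by a breadth-first search, which also yields the hop counts used later in Section 3:

```python
import numpy as np
from collections import deque

def build_network(positions, comm_radius):
    """Adjacency lists for the communication graph: nodes m and n are
    direct neighbours iff their distance is at most comm_radius."""
    M = len(positions)
    neighbours = {m: [] for m in range(M)}
    for m in range(M):
        for n in range(m + 1, M):
            if np.linalg.norm(positions[m] - positions[n]) <= comm_radius:
                neighbours[m].append(n)
                neighbours[n].append(m)
    return neighbours

def hop_counts(neighbours, source):
    """Breadth-first search giving the number of hops from `source` to
    every reachable node; the network is connected iff every node is
    reached."""
    hops = {source: 0}
    queue = deque([source])
    while queue:
        m = queue.popleft()
        for n in neighbours[m]:
            if n not in hops:
                hops[n] = hops[m] + 1
                queue.append(n)
    return hops
```

A larger communication radius makes connectivity more likely at the cost of more transmission power, as noted in Section 7.4.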
At time step k (in practice, due to response and clear-down times, chemical sensors can obtain new measurements once every few seconds), the sensor at each node m will take noisy and quantized measurements $z_m(k)$ of the field, with

$$z_m(k) = Q\big(f(\mathbf{x}_m) + w_m(k)\big). \qquad (1)$$

Similar measurement models have been considered in, e.g., [21,22]. The term $w_m(k)$ in (1) is a noise term, which is assumed to be zero mean. The quantizer $Q(\cdot)$ is a quantizer of L levels, say $\{q_1, \dots, q_L\}$, of the form

$$Q(y) = q_\ell \quad \text{if } \tau_{\ell-1} \le y < \tau_\ell, \qquad (2)$$

where the quantizer thresholds satisfy $\tau_0 < \tau_1 < \dots < \tau_L$, with $\tau_0 = -\infty$ and $\tau_L = \infty$. The case $L = 2$ corresponds to an 'on–off' model with binary measurements, while large L can be used to approximate real-valued measurements. Sensor saturation is naturally incorporated into the quantization model. The noise term $w_m(k)$ can also be used to take into account false positives (in the case of binary measurements) and uncertain quantizer thresholds [21].
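A minimal sketch of an L-level quantizer of the form (2), with the interior (finite) thresholds supplied as an array (names are illustrative, not from the paper):

```python
import numpy as np

def quantize(y, thresholds):
    """Map a noisy field reading y to one of L = len(thresholds) + 1
    discrete levels, returned as an index 0..L-1.  Readings below the
    first threshold or above the last are saturated to the extreme
    levels, so sensor saturation is handled automatically."""
    return int(np.searchsorted(thresholds, y))
```

For example, with thresholds `[0.5, 1.0, 1.5]` this gives a four-level quantizer, as used in the numerical studies of Section 7.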
Problem Statement
We wish to construct an estimate of the field from the quantized measurements collected by the nodes. Distributed estimation schemes will be considered, where each node forms its own estimate of the field. We assume local communications, so that each node m can only transmit/receive information (e.g., measurements or estimates) to/from its direct neighbours $\mathcal{N}_m$. As measurements are collected continually over time, we also want the estimation algorithms to be iterative, i.e., the estimates are progressively updated over time.
In order to estimate the field, in this paper, we consider that the field can be sufficiently well approximated by

$$f(\mathbf{x}) \approx \sum_{i=1}^{p} \theta_i \, \phi_i(\mathbf{x}), \qquad (3)$$

where p is the number of radial basis functions, $\phi_i(\cdot)$ is the i-th radial basis function, and $\theta_i$ the i-th coefficient. Similar models to represent fields have been used in, e.g., [11,12,14,15,16,30,33]. (Other field models considered in the literature include spatial random processes [31,34,45], PDE models [29,32,36], and a sum of Fourier components [46].) The use of model (3) is motivated by results from approximation theory, which show that many fields can be approximated arbitrarily closely provided a sufficiently large number of basis functions is used [37]. For the basis functions, we will use the Gaussian kernel

$$\phi_i(\mathbf{x}) = \exp\left(-\frac{\|\mathbf{x} - \mathbf{c}_i\|^2}{2\sigma_i^2}\right), \qquad (4)$$

where $\mathbf{c}_i$ and $\sigma_i$ can be regarded as the centre and width, respectively, of the i-th basis function.
For a given number of basis functions p, we assume that the centres $\mathbf{c}_i$ and widths $\sigma_i$, $i = 1, \dots, p$, are chosen beforehand. (The case where $\mathbf{c}_i$ and $\sigma_i$, $i = 1, \dots, p$, are also estimated can also be considered, but in previous work using mobile sensors, this situation was found to suffer from identifiability issues and sometimes give very unreliable results [15].) Estimation of the field then becomes a problem of estimating the parameters $\theta = (\theta_1, \dots, \theta_p)^\top$. In this paper, we will consider two schemes for distributed estimation of the field, which we call measurement diffusion and estimate diffusion, presented in Section 3 and Section 5, respectively. The term 'diffusion' conveys the idea that information from any node is eventually diffused to the rest of the network via a sequence of local communications between direct neighbours [47,48].
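The field model (3) with Gaussian kernels (4) can be sketched as follows (a minimal illustration with our own function names; in the paper the centres and widths are fixed in advance and only the coefficients are estimated):

```python
import numpy as np

def rbf_features(x, centres, widths):
    """Vector of the p Gaussian radial basis functions of (4)
    evaluated at position x."""
    d2 = np.sum((centres - x) ** 2, axis=1)
    return np.exp(-d2 / (2.0 * widths ** 2))

def field_estimate(x, theta, centres, widths):
    """Approximate field value at x under the weighted-sum model (3)."""
    return float(theta @ rbf_features(x, centres, widths))
```

With the centres on a uniform square grid (as in the numerical studies of Section 7), estimating the field reduces to estimating the weight vector `theta`.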
3. Measurement Diffusion
In this section, we consider a scheme which we call measurement diffusion, where each node will, at each discrete time step k, broadcast to its direct neighbours the latest received measurement of every node in the network. We assume a single broadcast between two successive time steps, in order to reduce the amount of communication and hence the energy usage of the nodes [47,48], noting that there exist other distributed estimation schemes based on consensus algorithms, which can involve multiple rounds of communication between successive time steps [49,50].
Consider a node m. At time k, measurement $z_m(k)$ will be broadcast to the nodes in $\mathcal{N}_m$, since a new measurement has been collected at time k. For a node $n \in \mathcal{N}_m$, the latest measurement of node n available at node m at time k will be $z_n(k-1)$, as there is a one-hop delay. In general, let $h_{mn}$ denote the length, in terms of the number of hops, of the shortest path between nodes m and n. Then, under this scheme, assuming that no transmissions are lost, the latest measurement of node n available to node m at time k will be $z_n(k - h_{mn})$. As an example, suppose we have the sensor network shown in Figure 1, and consider node 2. Nodes 1, 3, and 5 are one hop away from node 2, while node 4 is two hops away from node 2. Thus, at time k, the latest measurements available at node 2 are $z_1(k-1)$, $z_2(k)$, $z_3(k-1)$, $z_4(k-2)$, and $z_5(k-1)$. It is not too difficult to see that for a connected network, assuming there are no transmission losses, each measurement from node m will eventually reach any other node n in the network, with a delay given by the number of hops $h_{mn}$ between nodes m and n.
Let us be more specific about the communication requirements. At time step k, node m will broadcast the set $\mathcal{M}_m(k) = \{(n, t_{mn}(k), z_n(t_{mn}(k))) : n = 1, \dots, M\}$ to its direct neighbours; i.e., the node ID, time, and latest measurement of every node is transmitted. Here, $t_{mn}(k)$ is the time of the most recent measurement from node n available to node m, and is equal to $k - h_{mn}$ if no transmissions are lost. The locations $\mathbf{x}_n$ of the nodes are not transmitted, as we assume that this information has been made available to every node beforehand and can be determined given the node ID.

As well as transmitting, each node will also receive the broadcasts of its direct neighbours. At time k, after receiving the broadcasts of its neighbours, each node m will update $\mathcal{M}_m(k)$ to a set of the latest received measurements of every node, together with their corresponding IDs and times. (At time $k+1$, this set, updated with the new measurement $z_m(k+1)$, will then become $\mathcal{M}_m(k+1)$ and be broadcast by node m.) Since the measurements are time-stamped, this update can be performed efficiently. In order to avoid possible double counting, we then further refine this set to a set $\bar{\mathcal{M}}_m(k)$, which contains only new information that has not yet been incorporated into the field estimate of node m. This can also be performed efficiently if the times of the last received measurement of each of the nodes are stored [15] and updated whenever a new measurement is received. Note that if there are no transmission losses, then $\bar{\mathcal{M}}_m(k)$ contains exactly one new measurement from each node.
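The time-stamped bookkeeping above can be sketched as follows (a minimal illustration under our own naming; the paper does not prescribe a data structure). Each node keeps a table mapping node IDs to the most recent (time, measurement) pair, merges in neighbour broadcasts, and extracts only the not-yet-used measurements to avoid double counting:

```python
def merge_broadcasts(table, broadcasts):
    """Keep, for every node ID, only the most recently time-stamped
    measurement seen so far.  `table` and each broadcast map
    node_id -> (time, measurement)."""
    for b in broadcasts:
        for node_id, (t, z) in b.items():
            if node_id not in table or t > table[node_id][0]:
                table[node_id] = (t, z)
    return table

def new_information(table, last_used):
    """Measurements not yet incorporated into the local field estimate;
    `last_used` maps node_id -> time of the last measurement already
    used, and is updated in place."""
    fresh = {n: tz for n, tz in table.items()
             if tz[0] > last_used.get(n, -1)}
    for n, (t, _) in fresh.items():
        last_used[n] = t
    return fresh
```

With no transmission losses, `new_information` returns exactly one new measurement per node at each time step, matching the text above.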
We next present the estimation procedure for updating the field estimates. The estimation algorithm is based on a computationally efficient online optimization technique [38]. (A sequential Monte Carlo approach can also be used to estimate $\theta$ [15]; however, the online optimization technique is considerably less computationally intensive, while having similar estimation performance [16].) The aim is to estimate $\theta$ at node m by iteratively minimizing a cost function $J_m(\theta)$:

$$J_m(\theta) = \sum_{(n, t, z) \text{ received by node } m} \ell(\theta; z, \mathbf{x}_n), \qquad (5)$$

where the following per-stage costs are used:

$$\ell(\theta; z, \mathbf{x}) = -\log\Big( s\big(\tau_\ell - \phi(\mathbf{x})^\top \theta\big) - s\big(\tau_{\ell-1} - \phi(\mathbf{x})^\top \theta\big) \Big) \quad \text{for } z = q_\ell, \qquad (6)$$

with $\beta$ being a parameter in the logistic function $s(y) = 1/(1 + e^{-\beta y})$, and $\phi(\mathbf{x}) = (\phi_1(\mathbf{x}), \dots, \phi_p(\mathbf{x}))^\top$. For notational simplicity, we have also used z in place of $z_n(t)$. A similar per-stage cost to (6) has been previously shown to be suitable for multi-level quantized measurements [46].

Since (6) depends on $\theta$ only through $\phi(\mathbf{x})^\top \theta$, the gradient and Hessian of $\ell$ can be derived as, respectively,

$$\nabla_\theta \ell = \gamma(\theta; z, \mathbf{x}) \, \phi(\mathbf{x}) \qquad (7)$$

and

$$\nabla^2_\theta \ell = \kappa(\theta; z, \mathbf{x}) \, \phi(\mathbf{x}) \phi(\mathbf{x})^\top, \qquad (8)$$

for scalar functions $\gamma$ and $\kappa$ obtained by differentiating (6). We then iteratively estimate the parameters $\theta$ at node m by performing an approximate Newton update similar to [15,46] using the new measurements in $\bar{\mathcal{M}}_m(k)$. The formal description of the measurement diffusion procedure run at node m is given as Algorithm 1 (each node in the network performs the same operations), with the equations for the approximate Newton update given in lines 11–14. In Algorithm 1, G and H represent approximate gradients and Hessians for the cost function (5). The quantity $\lambda \in (0, 1]$ in line 12 is a forgetting factor [51] that allows time-varying fields (where $\theta$ can vary with k) to be estimated. In a slightly different context of sensors mounted on a moving vehicle, previous work in [46] demonstrated that, for time-varying fields, a forgetting factor close to but strictly less than 1 allows quicker tracking of the time variations than a forgetting factor equal to 1. The quantity $\delta > 0$ in line 13 is a Levenberg–Marquardt modification parameter [52] that ensures the Hessian approximation H is invertible at all times.
Algorithm 1 Field estimation at node m using measurement diffusion
- 1: Algorithm Parameters: Logistic function parameter $\beta$, Levenberg–Marquardt parameter $\delta$, forgetting factor $\lambda$
- 2: Outputs: Parameter estimates $\hat{\theta}_m(k)$
- 3: Initialize $\hat{\theta}_m(0)$, $G \leftarrow 0$, and $H \leftarrow 0$
- 4: for $k = 1, 2, \dots$ do
- 5: Collect a measurement $z_m(k)$
- 6: Update $\mathcal{M}_m(k)$ using $z_m(k)$ and the previously received broadcasts, and broadcast $\mathcal{M}_m(k)$ to neighbours
- 7: Receive $\mathcal{M}_n(k)$ from neighbours $n \in \mathcal{N}_m$
- 8: Update $\mathcal{M}_m(k)$ and construct $\bar{\mathcal{M}}_m(k)$
- 9: Set $\theta \leftarrow \hat{\theta}_m(k-1)$
- 10: for $(n, t, z) \in \bar{\mathcal{M}}_m(k)$ do
- 11: $G \leftarrow \nabla_\theta \ell(\theta; z, \mathbf{x}_n)$, where $\nabla_\theta \ell$ is computed using (7)
- 12: $H \leftarrow \lambda H + \nabla^2_\theta \ell(\theta; z, \mathbf{x}_n)$, where $\nabla^2_\theta \ell$ is computed using (8)
- 13: $\bar{H} \leftarrow H + \delta I$
- 14: $\theta \leftarrow \theta - \bar{H}^{-1} G$
- 15: end for
- 16: Update $\hat{\theta}_m(k) \leftarrow \theta$
- 17: end for
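The approximate Newton update skeleton of Algorithm 1 (lines 11–14) can be sketched as follows. This is a minimal illustration under our own naming: the gradient and Hessian callbacks stand in for (7) and (8), and the usage below exercises the skeleton on a toy least-squares cost rather than the paper's quantized-measurement cost:

```python
import numpy as np

def newton_step(theta, H, grad_fn, hess_fn, lam=0.95, delta=1e-3):
    """One approximate Newton update on a single new measurement.
    The Hessian accumulator is discounted by the forgetting factor
    `lam` (allowing time-varying fields to be tracked) and damped by
    the Levenberg-Marquardt term `delta * I` so it is always
    invertible."""
    G = grad_fn(theta)
    H = lam * H + hess_fn(theta)
    H_bar = H + delta * np.eye(len(theta))
    theta = theta - np.linalg.solve(H_bar, G)
    return theta, H
```

For a static field one would use `lam = 1.0`; for time-varying fields, a value close to but strictly less than 1 allows quicker tracking of the time variations.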
4. Estimate Fusion
In this section, we briefly describe the concept of estimate fusion, which will be used later in the estimate diffusion scheme of Section 5. Consider a quantity $\theta$, which we wish to estimate. Suppose two unbiased estimates $\hat{\theta}_1$ and $\hat{\theta}_2$ of $\theta$ are available, with corresponding estimation error covariances $P_1$ and $P_2$. We wish to fuse the two estimates $\hat{\theta}_1$ and $\hat{\theta}_2$ together to provide an improved estimate. In the case of unknown cross-correlations in the errors, elimination of common information [53] used in obtaining $\hat{\theta}_1$ and $\hat{\theta}_2$ is difficult, and one may encounter issues such as double counting, leading to overly confident fused estimates [54]. One way to overcome this issue is to design sub-optimal but consistent fusion rules that over-estimate the true error covariance, for which various fusion methods have been proposed. In the following, we describe the covariance intersection [39] and inverse covariance intersection [40] methods, which can be used as part of the estimate diffusion scheme presented in Section 5, noting that other consistent fusion rules can, in principle, also be considered.
4.1. Covariance Intersection
The most commonly used method for computing a consistent estimate when cross-correlations are unknown is covariance intersection [39]. In this method, the fused estimate and associated covariance are computed as

$$P_f^{-1} = \omega P_1^{-1} + (1 - \omega) P_2^{-1}, \quad \hat{\theta}_f = P_f \big( \omega P_1^{-1} \hat{\theta}_1 + (1 - \omega) P_2^{-1} \hat{\theta}_2 \big), \qquad (9)$$

where the parameter $\omega \in [0, 1]$ can be chosen to minimize quantities such as the trace or determinant of $P_f$.

In the case where N estimates $\hat{\theta}_1, \dots, \hat{\theta}_N$ with corresponding error covariances $P_1, \dots, P_N$ are to be fused, covariance intersection can be extended to (see [55]):

$$P_f^{-1} = \sum_{i=1}^{N} \omega_i P_i^{-1}, \quad \hat{\theta}_f = P_f \sum_{i=1}^{N} \omega_i P_i^{-1} \hat{\theta}_i, \qquad (10)$$

where $\omega_i \ge 0$ and $\sum_{i=1}^{N} \omega_i = 1$.
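A minimal sketch of the covariance intersection rule (9), with a simple grid search for the weight (the paper notes the weight can instead be optimized analytically; names here are illustrative):

```python
import numpy as np

def covariance_intersection(x1, P1, x2, P2, omega):
    """Fuse two estimates with unknown error cross-correlation: the
    fused inverse covariance is the omega-weighted sum of the
    individual inverse covariances, as in (9)."""
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    Pf = np.linalg.inv(omega * P1_inv + (1 - omega) * P2_inv)
    xf = Pf @ (omega * P1_inv @ x1 + (1 - omega) * P2_inv @ x2)
    return xf, Pf

def best_omega(x1, P1, x2, P2, grid=np.linspace(0.0, 1.0, 101)):
    """Pick the omega on a grid that minimizes trace(Pf)."""
    traces = [np.trace(covariance_intersection(x1, P1, x2, P2, w)[1])
              for w in grid]
    return grid[int(np.argmin(traces))]
```

When one estimate is much more accurate (smaller covariance) than the other, the trace-minimizing weight places all the mass on the accurate one.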
4.2. Inverse Covariance Intersection
Inverse covariance intersection [40] follows a similar principle to covariance intersection, but by considering inverse covariance ellipsoids, it can produce less conservative estimates than covariance intersection. The fused estimate and associated covariance are now computed as

$$P_f^{-1} = P_1^{-1} + P_2^{-1} - \big(\omega P_1 + (1-\omega) P_2\big)^{-1}, \quad \hat{\theta}_f = P_f \big( K \hat{\theta}_1 + L \hat{\theta}_2 \big), \qquad (11)$$

where $K = P_1^{-1} - \omega \big(\omega P_1 + (1-\omega) P_2\big)^{-1}$ and $L = P_2^{-1} - (1-\omega) \big(\omega P_1 + (1-\omega) P_2\big)^{-1}$. The parameter $\omega \in [0, 1]$ can also be optimized to minimize quantities such as the trace or determinant of $P_f$. We denote by

$$(\hat{\theta}_f, P_f) = \mathrm{ICI}\big(\hat{\theta}_1, P_1, \hat{\theta}_2, P_2\big) \qquad (12)$$

the function that computes and returns $(\hat{\theta}_f, P_f)$ using (11).
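The ICI function (12) can be sketched directly from the fusion rule of [40] (a minimal illustration; names are ours):

```python
import numpy as np

def inverse_covariance_intersection(x1, P1, x2, P2, omega=0.5):
    """Inverse covariance intersection fusion of two estimates with
    unknown cross-correlation; typically less conservative than
    covariance intersection."""
    P1_inv, P2_inv = np.linalg.inv(P1), np.linalg.inv(P2)
    Gamma_inv = np.linalg.inv(omega * P1 + (1 - omega) * P2)
    Pf = np.linalg.inv(P1_inv + P2_inv - Gamma_inv)
    K = P1_inv - omega * Gamma_inv
    L = P2_inv - (1 - omega) * Gamma_inv
    xf = Pf @ (K @ x1 + L @ x2)
    return xf, Pf
```

As a sanity check, fusing an estimate with itself returns it unchanged, since no common information can be double counted.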
5. Estimate Diffusion
In this section, we will consider an alternative scheme to measurement diffusion, which we call estimate diffusion. In this scheme, after a new measurement has been collected at time step k, each node m will: (1) broadcast its own measurement $z_m(k)$ and receive measurements $z_n(k)$ from neighbours $n \in \mathcal{N}_m$; (2) form a local pre-estimate $\psi_m(k)$ (we follow the terminology of [47]) and (approximate) Hessian $H_m(k)$ using the received measurements; (3) broadcast $(\psi_m(k), H_m(k))$ to and receive $(\psi_n(k), H_n(k))$ from neighbours $n \in \mathcal{N}_m$; (4) form an updated estimate $\hat{\theta}_m(k)$ using the received estimates and Hessians. For example, suppose we again have the sensor network shown in Figure 1, and concentrate on node 2. At time k, node 2 will broadcast $z_2(k)$ to nodes 1, 3, and 5. After receiving $z_1(k)$, $z_3(k)$, and $z_5(k)$ from nodes 1, 3, and 5, respectively, node 2 computes and broadcasts the pre-estimate $\psi_2(k)$ and Hessian $H_2(k)$. After receiving the broadcasts of its neighbours, node 2 will then combine $\psi_1(k)$, $H_1(k)$, $\psi_2(k)$, $H_2(k)$, $\psi_3(k)$, $H_3(k)$, $\psi_5(k)$, $H_5(k)$ in order to form $\hat{\theta}_2(k)$. In contrast to measurement diffusion, there are now two transmissions between successive time instances (Steps 1 and 3), which is similar to [47,48]. One could remove the broadcast of measurements in Step 1 and compute the pre-estimate using only the node's own measurement $z_m(k)$; however, we found that this degrades the estimation performance significantly.
The formal description of the procedure that is run at each node m is given as Algorithm 2. Similar to Algorithm 1, a forgetting factor $\lambda$ is used to allow time-varying fields to be estimated. Below, we provide additional details and explanations of the individual steps.
Algorithm 2 Field estimation at node m using estimate diffusion
- 1: Algorithm Parameters: Logistic function parameter $\beta$, Levenberg–Marquardt parameter $\delta$, forgetting factor $\lambda$, FusionMethod
- 2: Outputs: Parameter estimates $\hat{\theta}_m(k)$
- 3: Initialize $\hat{\theta}_m(0)$ and $H \leftarrow 0$
- 4: for $k = 1, 2, \dots$ do
- 5: Collect a measurement $z_m(k)$
- 6: Broadcast $z_m(k)$ to neighbours
- 7: Receive measurements $z_n(k)$ from neighbours $n \in \mathcal{N}_m$
- 8: Set $\theta \leftarrow \hat{\theta}_m(k-1)$
- 9: for $n \in \mathcal{N}_m \cup \{m\}$ do
- 10: $G \leftarrow \nabla_\theta \ell(\theta; z_n(k), \mathbf{x}_n)$, where $\nabla_\theta \ell$ is computed using (7)
- 11: $H \leftarrow \lambda H + \nabla^2_\theta \ell(\theta; z_n(k), \mathbf{x}_n)$, where $\nabla^2_\theta \ell$ is computed using (8)
- 12: $\bar{H} \leftarrow H + \delta I$
- 13: $\theta \leftarrow \theta - \bar{H}^{-1} G$
- 14: end for
- 15: Update pre-estimate $\psi_m(k) \leftarrow \theta$ and $H_m(k) \leftarrow \bar{H}$
- 16: Broadcast $(\psi_m(k), H_m(k))$ to neighbours
- 17: Receive $(\psi_n(k), H_n(k))$ from neighbours $n \in \mathcal{N}_m$
- 18: if FusionMethod is CovarianceIntersection then
- 19: Set $\omega_n = 1/(|\mathcal{N}_m| + 1)$ and compute $(\hat{\theta}_f, P_f)$ using (10) with $P_n = H_n(k)^{-1}$
- 20: Update $\hat{\theta}_m(k) \leftarrow \hat{\theta}_f$ and $H_m(k) \leftarrow P_f^{-1}$
- 21: else if FusionMethod is InverseCovarianceIntersection then
- 22: Set $\hat{\theta} \leftarrow \psi_m(k)$ and $P \leftarrow H_m(k)^{-1}$
- 23: for $n \in \mathcal{N}_m$ do
- 24: Set $\omega = 1/2$
- 25: $(\hat{\theta}, P) \leftarrow \mathrm{ICI}\big(\hat{\theta}, P, \psi_n(k), H_n(k)^{-1}\big)$ using (12)
- 26: end for
- 27: Update $\hat{\theta}_m(k) \leftarrow \hat{\theta}$ and $H_m(k) \leftarrow P^{-1}$
- 28: end if
- 29: end for
Step 1 involves a broadcast by node m of a single measurement to its neighbours, together with the node ID m, to allow recipients to determine where the measurement was taken.
Step 2 involves using the collected and received measurements $\{z_n(k) : n \in \mathcal{N}_m \cup \{m\}\}$ to form a local pre-estimate $\psi_m(k)$ and Hessian $H_m(k)$. Similar to Section 3, this is performed using an approximate Newton update on the received measurements, shown in lines 10–13 of Algorithm 2.
Step 3 involves broadcasts of $(\psi_m(k), H_m(k))$ to neighbours. The pre-estimate $\psi_m(k)$ is a vector of p numbers (recall p is the number of radial basis functions), while $H_m(k)$ is a $p \times p$ matrix, where the number of basis functions p is usually chosen to be less than the number of nodes M.
Step 4 combines the pre-estimates $\psi_n(k)$ and Hessians $H_n(k)$ to form the estimate $\hat{\theta}_m(k)$, as shown in lines 18 or 21 of Algorithm 2. It does this using one of the fusion methods in Section 4, i.e., covariance intersection or inverse covariance intersection. In the case of covariance intersection, the batch form given in (10) is used, which is less computationally intensive than combining two estimates at a time using (9) in a sequential manner, while having similar estimation performance. In the case of inverse covariance intersection, the estimates are combined two at a time using (11). (Various ways to extend inverse covariance intersection to a batch form have been proposed in [56,57]; however, we have been unable to get these methods to work in our situation.) Note that (10) and (11) require error covariances, while the approximate online Newton method computes Hessians. In this paper, we use the inverse Hessians as a surrogate for the covariances, motivated by the result that the Hessian matrix of the negative log-likelihood with respect to the parameters is approximately equal to the inverse of the covariance matrix [58], with equality holding for Gaussian random vectors [59]. For simplicity, we have set equal weights for batch covariance intersection and $\omega = 1/2$ when combining two estimates sequentially in inverse covariance intersection. As mentioned in Section 4, one could also try to optimize these weights; however, this would increase the computational load of the algorithm. Furthermore, we note that the estimate diffusion scheme is actually agnostic to the particular estimate fusion method, and other consistent fusion rules can also be used. Optimizing the weights and the choice of fusion method will be a topic for future investigation.
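The Hessian-as-inverse-covariance surrogate of Step 4 combines neatly with batch covariance intersection (10) under equal weights: the fused information matrix is simply the average of the received Hessians. A minimal sketch (function name ours, not from the paper):

```python
import numpy as np

def fuse_pre_estimates(pre_estimates, hessians):
    """Batch covariance intersection with equal weights 1/N, using
    inverse Hessians as covariance surrogates.  Since P_i^{-1} = H_i,
    the fused information matrix is the average of the Hessians, and
    the fused estimate is an information-weighted average."""
    N = len(pre_estimates)
    Pf_inv = sum(hessians) / N
    xf = np.linalg.solve(Pf_inv, sum(H @ psi for H, psi in
                                     zip(hessians, pre_estimates)) / N)
    return xf, np.linalg.inv(Pf_inv)
```

Note that the Hessians never need to be inverted individually; only the fused information matrix is solved against, which keeps the per-step cost low.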
7. Numerical Studies
In this section, we will first describe the measures for performance evaluation of our algorithms in Section 7.1, and a centralized scheme for baseline comparison in Section 7.2, before presenting numerical examples in Section 7.3–Section 7.5. The simulations in this section are written in Python and run on an Intel Core i7 9700 PC with 16 GB of memory.
7.1. Performance Criteria
In this paper, we will consider two performance measures: the mean square error (MSE) and the structural similarity (SSIM) index [60]. The MSE is defined as

$$\mathrm{MSE} = \frac{1}{|\mathcal{X}|} \sum_{\mathbf{x} \in \mathcal{X}} \big( f(\mathbf{x}) - \hat{f}(\mathbf{x}) \big)^2, \qquad (13)$$

where $f(\mathbf{x})$ is the true field value at position $\mathbf{x}$, $\hat{f}(\mathbf{x})$ is the estimated field value, and $\mathcal{X}$ is a discretized set of points in the region $\mathcal{R}$.

The structural similarity index is a measure of the similarity between two images, first introduced in [60]. Suppose the discretized set of points $\mathcal{X}$ is located on a rectangular grid. We can then regard $\{f(\mathbf{x})\}$ and $\{\hat{f}(\mathbf{x})\}$ as image representations of the true and estimated fields, respectively, and compute the SSIM between these two images. The SSIM gives a scalar value between 0 and 1, with a value of 1 if the two images to be compared are identical. The specific equations to compute the SSIM can be found in [60]; see also [61].
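The two performance measures can be sketched as follows. This is a minimal illustration: the `global_ssim` function uses the single-window (global) form of the SSIM index with the standard stabilizing constants, whereas library implementations (and [60]) use local sliding windows:

```python
import numpy as np

def mse(true_field, est_field):
    """Mean square error between true and estimated fields sampled on
    the same grid of points."""
    return float(np.mean((true_field - est_field) ** 2))

def global_ssim(img1, img2, data_range=1.0):
    """Single-window (global) form of the SSIM index, using the
    conventional constants c1 = (0.01 L)^2 and c2 = (0.03 L)^2
    for a data range L."""
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mu1, mu2 = img1.mean(), img2.mean()
    v1, v2 = img1.var(), img2.var()
    cov = ((img1 - mu1) * (img2 - mu2)).mean()
    return float(((2 * mu1 * mu2 + c1) * (2 * cov + c2))
                 / ((mu1 ** 2 + mu2 ** 2 + c1) * (v1 + v2 + c2)))
```

Identical images give an SSIM of exactly 1 and an MSE of 0, which is the baseline against which estimated fields are compared.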
7.2. Centralized Scheme
For comparison with our proposed algorithms, we also consider a centralized scheme, where at each time instant k, all of the measurements are transmitted to a central node or fusion centre, which then performs the estimation. For large networks, this could involve information from some nodes needing to be transmitted over large distances, requiring either high transmission power or multi-hop communications. Nevertheless, the centralized scheme serves as a useful benchmark on achievable performance. Estimation of $\theta$ using the centralized scheme is performed using the approximate Newton update, as in lines 11–14 of Algorithm 1, but now with all of the measurements collected at time k being used to update the estimate.
7.3. Example 1: Effect of Number of Sensors on Estimated Field
In the process of forming a sensor network, knowing the minimum number of sensors needed to estimate fields to an acceptable level of accuracy could aid in reducing costs and network complexity. This subsection studies the effect that varying the number of sensors has on the estimated fields. We concentrate on the centralized scheme described in Section 7.2 above, in order to focus on the best performance achievable for a given number of sensors.
We consider a randomly generated example true field, shown in Figure 2, over the region of interest. Sensor measurements are noisy, with the measurement noise assumed to be Gaussian with zero mean and variance 0.1. The quantizer is a four-level quantizer. For approximating the estimated fields, the basis functions were placed with their centres spaced uniformly in a square grid within the region, each with a common width. The logistic function parameter, Levenberg–Marquardt parameter, and forgetting factor were set to fixed values. The initial estimate was initialized by randomly choosing each component from a uniform distribution between 0 and 1.
Figure 3 depicts the estimated fields for the centralized scheme after 10 time steps, where the number of sensors M ranges from 100 to 900 in increments of 100. The sensor locations were randomly generated and are shown as light blue dots in Figure 3. We see that 200–300 sensors are already able to produce estimated fields of a reasonable level of accuracy, with the estimate reliability generally increasing as the number of sensors is increased.
7.4. Example 2: Estimation of Time-Varying Fields
For fields which are constantly evolving in real-time, it is crucial that the sensor system is able to adapt to these changing conditions without significant time delay. This subsection explores how each of the field estimation algorithms proposed in this paper responds to time-varying fields.
For modelling the time-varying fields, we consider the six fields shown in Figure 4, where each field is held constant for 50 time steps before switching to the next field. We consider the sensor network shown in Figure 5, consisting of 500 randomly placed nodes. The communication range is such that two sensors within a given distance can communicate directly with each other. For this communication range and the chosen set of nodes, it was found that the network was connected. (In general, a larger communication range makes a network more likely to be connected; however, it also requires more transmission power.) In this subsection, we present field estimation results at the node represented by the large orange dot in Figure 5. The maximum number of hops $h_{mn}$ (recall the notation in Section 3) between the chosen node m and any other node n in this network can be found to be 13.
In the field estimation algorithms, the Levenberg–Marquardt parameter is set separately for the centralized/measurement diffusion schemes and for the estimate diffusion scheme. For time-varying fields, the forgetting factor is now chosen strictly less than 1 for all schemes. Other parameters were the same as those in Section 7.3.
Figure 6, Figure 7, Figure 8 and Figure 9 plot the MSE and SSIM over time for centralized estimation, measurement diffusion, estimate diffusion using covariance intersection, and estimate diffusion using inverse covariance intersection, respectively. As stated before, the fields change every 50 time steps, although we stress that the algorithms themselves do not know when the field changes occur. For the centralized scheme, the field estimate quality drops (MSE increases while SSIM decreases) immediately after the field changes, but after just 1–2 time steps (note that 500 measurements, equal to the number of sensors in the network, are collected at every time step) it is able to quickly adapt to the new field. For the measurement diffusion scheme, adaptation to the field changes takes somewhat longer, with the number of time steps needed to converge to a steady state roughly equal to 13, the maximum number of hops from the chosen node to any other node in the network. Comparing Figure 7 with Figure 6, we see that the MSE and SSIM values of measurement diffusion at steady state are close to those of centralized estimation. Estimate diffusion is also able to adapt to field changes, although at a slower rate than measurement diffusion, with inverse covariance intersection performing better than covariance intersection and also able to achieve steady-state performance close to that of centralized estimation.
The delay in adapting to field changes is due to our assumption that each sensor communicates with its neighbours once (or twice for estimate diffusion) per time step, in order to limit the amount of communication. Whether this delay is tolerable for real-time applications will depend on the particular situation. The delay can be reduced by allowing multi-hop communication within each time step, but this increases the communication requirements. Thus, there exists a trade-off between delay and the amount of communication.
7.5. Example 3: Studies on a Real Dataset
We now consider testing our algorithms on a publicly available dataset provided by the European Environment Agency [62]. This dataset provides interpolated average concentration data, in µg/m³, for PM10 (particulate matter 10 µm or less in diameter) air pollutants in Europe during the year 2021. Figure 10 shows a plot of the available data.
For testing of our field estimation algorithms, we consider the 200 km × 200 km area bounded by the green box in Figure 10. A close-up of this area is shown in Figure 11. We regard this as the true field that is to be estimated.
We use a sensor network consisting of 500 randomly placed sensors, a plot of which is shown in Figure 12. For this network, the maximum number of hops from the orange node to any other node is 19. We assume that the sensor measurements are noisy, with the noise being Gaussian of zero mean and variance 1, and quantized as in (2). The basis function centres are uniformly spaced in a square grid within the area, each with a common width.
We consider results for measurement diffusion and for estimate diffusion using inverse covariance intersection, with the logistic function parameter and the forgetting factor set to fixed values. For the orange node shown in Figure 12, Figure 13 plots the MSE and SSIM over time for the two schemes. For comparison, results for the centralized scheme are also plotted. Similar to Section 7.4, the performance of measurement diffusion reaches a steady state after around 19 time steps, which is the maximum number of hops from the orange node to any other node, while for estimate diffusion, the number of time steps needed to reach steady state is longer. For both schemes, we again see that the MSE and SSIM values at steady state are close to those of centralized estimation. The estimated fields at this node after 50 time steps are shown in Figure 14. We can see that, qualitatively, the estimated fields for both schemes look similar to smoothed versions of Figure 11.