
Industrial Applications of Smart Sensors and Smart Data in Cyber-Physical Systems

A topical collection in Sensors (ISSN 1424-8220). This collection belongs to the section "Intelligent Sensors".

Viewed by 36634

Editors


Prof. Dr. Alexander Maier
Collection Editor
Department of Engineering and Mathematics, Bielefeld University of Applied Sciences, 33619 Bielefeld, Germany
Interests: predictive maintenance; artificial intelligence; machine learning

Prof. Dr. Michael Heizmann
Collection Editor
Institute of Industrial Information Technology (IIIT), Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
Interests: image processing; automated visual inspection; machine vision; machine learning; multisensor fusion and information fusion

Topical Collection Information

Dear Colleagues,

Sensors and their data are the basis of most recent trends in the fields of Cyber-Physical Systems (CPS) and the Internet of Things (IoT): artificial intelligence algorithms use sensor data, e.g., generated by process-embedded sensors, for industrial applications such as predictive maintenance or optimization. Assistance systems and robotics rely on sensors to interact with changing environments, and sensor systems are increasingly becoming platforms for edge computing and for semantically annotated smart data. This collection addresses the methodological and technical building blocks of these trends, which have several key aspects. The first is new sensor architectures, including new measurement principles such as novel vision systems, algorithms for sensor data processing and information fusion, the development towards edge computing, and new ideas for the fast and automated integration of sensors into complex CPS. A second key topic is the use of systems of sensors for integrated interaction between machines and humans and for new robotic applications. Furthermore, the collection welcomes methods for data semantics and the use of these semantics for faster CPS ramp-up and reconfiguration. Finally, the collection will discuss machine learning algorithms suited to CPS, the use of learned models for optimization, and typical artificial intelligence applications in the industrial domain.

Prof. Dr. Alexander Maier
Prof. Dr. Michael Heizmann
Collection Editors

Manuscript Submission Information

Manuscripts should be submitted online at www.mdpi.com by registering and logging in to this website. Once you are registered, click here to go to the submission form. Manuscripts can be submitted until the deadline. All submissions that pass pre-check are peer-reviewed. Accepted papers will be published continuously in the journal (as soon as accepted) and will be listed together on the collection website. Research articles, review articles, and short communications are invited. For planned papers, a title and short abstract (about 100 words) can be sent to the Editorial Office for announcement on this website.

Submitted manuscripts should not have been published previously, nor be under consideration for publication elsewhere (except conference proceedings papers). All manuscripts are thoroughly refereed through a single-blind peer-review process. A guide for authors and other relevant information for submission of manuscripts is available on the Instructions for Authors page. Sensors is an international peer-reviewed open access semimonthly journal published by MDPI.

Please visit the Instructions for Authors page before submitting a manuscript. The Article Processing Charge (APC) for publication in this open access journal is 2600 CHF (Swiss Francs). Submitted papers should be well formatted and use good English. Authors may use MDPI's English editing service prior to publication or during author revisions.

Keywords

  • cyber-physical systems
  • Internet of Things
  • artificial intelligence
  • sensor data
  • machine learning
  • digital twin

Published Papers (9 papers)

2021


18 pages, 7034 KiB  
Article
SmartSpectrometer—Embedded Optical Spectroscopy for Applications in Agriculture and Industry
by Julius Krause, Heinrich Grüger, Lucie Gebauer, Xiaorong Zheng, Jens Knobbe, Tino Pügner, Anna Kicherer, Robin Gruna, Thomas Längle and Jürgen Beyerer
Sensors 2021, 21(13), 4476; https://doi.org/10.3390/s21134476 - 30 Jun 2021
Cited by 11 | Viewed by 4142
Abstract
The ongoing digitization of industry and agriculture can benefit significantly from optical spectroscopy. In many cases, optical spectroscopy enables the estimation of properties such as substance concentrations and compositions. Spectral data can be acquired and evaluated in real time, and the results can be integrated directly into process and automation units, saving resources and costs. Multivariate data analysis is needed to integrate optical spectrometers as sensors. Therefore, a spectrometer with integrated artificial intelligence (AI) called SmartSpectrometer and its interface is presented. The advantages of the SmartSpectrometer are exemplified by its integration into a harvesting vehicle, where quality is determined by predicting sugar and acid in grapes in the field.
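The multivariate preprocessing behind this kind of spectral regression can be illustrated with a standard normal variate (SNV) transform, a common correction step for absorbance spectra (the paper's Figure 5 mentions SNV). This is a minimal, self-contained sketch; the spectrum values and function name are illustrative, not taken from the paper:

```python
def snv(spectrum):
    """Standard normal variate: centre each spectrum to zero mean
    and scale it to unit (sample) variance."""
    n = len(spectrum)
    mean = sum(spectrum) / n
    sd = (sum((v - mean) ** 2 for v in spectrum) / (n - 1)) ** 0.5
    return [(v - mean) / sd for v in spectrum]

# Hypothetical absorbance spectrum (arbitrary units).
spectrum = [0.10, 0.12, 0.15, 0.40, 0.22, 0.11]
corrected = snv(spectrum)
```

After the transform, every spectrum is directly comparable regardless of multiplicative scatter effects, which is why SNV is often applied before feeding spectra to a regression network.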
Figures:

Figure 1. Smart NIR spectrometer; optical bench measuring 6.5 × 10 × 10 mm³ on a printed board with mounting bail and flat wire to connector; 50 Euro cent coin for size indication.
Figure 2. In the ongoing process of digitalization, networking is emerging at machine and factory level. Local data processing enables high availability with low latencies. In this architecture, programmable logic controllers (PLCs) for controlling actuators can directly access processed sensor data.
Figure 3. AnniNet is essentially made up of three components: an encoder network, which extracts features through a convolutional layer and then condenses them; a decoder network, which is used in training to improve feature extraction; and a regression network of dense layers that evaluates the spectral information, where non-linear and overlapping spectral features can be assessed. In addition, prior knowledge, for example the sample temperature, can be added to the regression network.
Figure 4. Grape harvester during harvest in the vineyard.
Figure 5. First derivative of the measured absorbance of all samples at a sample temperature of 20 °C; an SNV was carried out for better representation. The confidence interval shows the differences between samples with different sugar and acid contents.
Figure 6. The attribution maps show the regression error caused by masking out individual spectral bands. Areas marked in red have a high importance for the regression of the corresponding target parameter.
Figure A1. MEMS chip with tiltable grating, integrated position sensor, and two slits in one device; the left image shows the front side, the right image the rear side with the cavities etched into the chip.
Figure A2. Stacked component spectrometer ("sugar cube spectrometer"); the optical bench is depicted on the left, with a fiber to a standard SMA head indicating size on the right.
Figure A3. Principle of a scanning-mirror micro spectrometer: 1: entrance slit, 2: collimation mirror, 3: mirror plate, 4: fixed grating, 5: refocusing mirror, 6: exit slit, 7: detector element.
Figure A4. Folded optical bench for a MEMS-based micro spectrometer; the fixed grating is in the left middle, both off-axis mirrors are realized in one substrate placed in the upper region, and the printed board with scanner mirror and detector unit is not shown.
36 pages, 931 KiB  
Article
A Redundancy Metric Set within Possibility Theory for Multi-Sensor Systems
by Christoph-Alexander Holst and Volker Lohweg
Sensors 2021, 21(7), 2508; https://doi.org/10.3390/s21072508 - 3 Apr 2021
Cited by 9 | Viewed by 3349
Abstract
In intelligent technical multi-sensor systems, information is often at least partly redundant, either by design or inherently due to the dynamic processes of the observed system. If sensors are known to be redundant, (i) information processing can be engineered to be more robust against sensor failures, (ii) failures themselves can be detected more easily, and (iii) computational costs can be reduced. This contribution proposes a metric which quantifies the degree of redundancy between sensors and is set within possibility theory. Information coming from sensors in technical and cyber-physical systems is often imprecise, incomplete, biased, or affected by noise, and relations between the information of sensors are often only spurious; in short, sensors are not fully reliable. The proposed metric adopts the ability of possibility theory to model incompleteness and imprecision exceptionally well, with a focus on avoiding the detection of spurious redundancy. This article defines redundancy in the context of possibilistic information, specifies requirements towards a redundancy metric, details the information processing, and evaluates the metric qualitatively on information from three technical datasets.
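The possibilistic machinery this metric builds on can be sketched with discretised possibility distributions and conjunctive (minimum t-norm) fusion, as in the paper's Figure 3a. This is an illustrative sketch of the standard operations only, not the redundancy metric itself; the distributions and function names are hypothetical:

```python
def conjunctive_fusion(pi1, pi2):
    """Pointwise minimum (t-norm) fusion of two discretised
    possibility distributions over the same frame of discernment."""
    return [min(a, b) for a, b in zip(pi1, pi2)]

def consistency(pi):
    """Height of a possibility distribution; a height of 1.0 means
    the fused sources are fully consistent (no conflict)."""
    return max(pi)

# Two overlapping distributions over a 5-point frame of discernment.
pi1 = [0.0, 0.5, 1.0, 0.5, 0.0]
pi2 = [0.0, 0.25, 1.0, 0.75, 0.0]
fused = conjunctive_fusion(pi1, pi2)  # [0.0, 0.25, 1.0, 0.5, 0.0]
```

A low height of the fused distribution would indicate conflicting sources, which is one ingredient the article combines with further evidence measures to avoid declaring spurious redundancy.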
Figures:

Figure 1. Examples of variables x1, x2 ∈ ℝ showing (a) similar behaviour which is not apparent in the sample data and (b) non-similar behaviour although sample data indicate otherwise (an example of spurious correlation). Such biased or skewed sample data commonly occur, for example, in production systems, which execute tasks repetitively in a normal (functioning properly) condition; data are then not sampled randomly and do not match the population distribution.
Figure 2. A possibility distribution π_v. For any element x ∈ B, v = x is fully plausible; for x ∈ (A ∩ B^c), v = x is only partially plausible; and for x ∈ A^c, v = x is impossible. The accompanying possibility and necessity measures for A, B are Π(A) = 1, N(A) = 1 and Π(B) = 1, N(B) = 0.5.
Figure 3. Different fusion approaches in possibility theory: (a) conjunctive fusion (4) using the minimum operator as t-norm, (b) disjunctive fusion (5) using the maximum operator as s-norm, and (c) the adaptive fusion rule (6) presented in [69] (also relying on minimum and maximum operators).
Figure 4. Possibility distributions and their fusion results as examples for the proposed type I redundancy metric. In (a), r^(I)(π1, π2) = 1 and 0 < r^(I)(π2, π1) < 1. In (b), both possibility distributions are not redundant, i.e., 0 < r^(I)(π1, π2) < 1 and 0 < r^(I)(π2, π1) < 1. Although the fusion result in (c) is less specific (more uncertain) due to renormalisation, π1 and π2 are likewise not redundant (similar to (b)).
Figure 5. An incorrect (π1), a partially erroneous (π2), and a correct possibility distribution (π3). The degree of error depends on the level of possibility π(v), v being the unknown ground truth. Note that it is difficult to determine the error of a possibility distribution since v is unknown and it is precisely the task of π to give an imprecise estimation of v.
Figure 6. Modifying possibility distributions depending on the reliability of their information source S: (a) the approach of Yager and Kelman (18), (b) the method of Dubois et al. (19), and (c) the proposed method (20). Only the method in (c) has a widening effect; the methods in (a) and (b) raise the level of possibility along the complete frame of discernment. All methods result in total ignorance for rel(S) = 0 and in π′ = π for rel(S) = 1. For these plots, parameter β = 2.
Figure 7. Information items in the form of triangular possibility distributions provided by two information sources. Available (e.g., measured) information is scattered throughout the frame of discernment X = [x_a, x_b]. The left side shows a two-dimensional scatter plot in which each marker represents the maximum of a possibility distribution; the right side depicts the possibility distributions of three exemplary datapoints (marked by an encompassing circle). Each cluster considered in isolation represents incomplete information because only part of the frame of discernment is covered. For example, cluster 1 (×) suggests redundancy (as long as information items are similar), which may not hold when new information from both sources becomes available. Clusters 1 (×) and 2 (✶) together suggest redundancy more strongly, whereas any data containing cluster 3 (+) evidences no redundancy. Relying exclusively on e_c (22) may result in detecting redundancy prematurely; a second evidence measure, denoted evidence pro redundancy e_p, is needed to put e_c into context and is presented in the following.
Figure 8. Preprocessing steps (i)–(v) carried out on three information items provided as probability distributions p(x): (a) a singular value, (b) a uniform probability density function, and (c) a Gaussian probability density function. Each item gives information regarding an unknown measurand in its own frame of discernment (X1 = [x_{a,1}, x_{b,1}], X2 = [x_{a,2}, x_{b,2}], X3 = [x_{a,3}, x_{b,3}]), so preprocessing is necessary before conclusions about potential redundancy can be drawn. In step (ii), the probability distributions are transformed into possibility distributions via the truncated triangular probability-possibility transformation [53,65,66]. Step (iii) accounts for potential unreliability of the information sources by widening π(x) using (20) (here with rel = 0.95 and β = 1). Steps (iv) and (v) transform the frame of discernment into fuzzy memberships X_μ = [μ_a, μ_b] = [0, 1]. Assuming a binary fuzzy classification task, one fuzzy class (e.g., the normal condition in condition monitoring) is represented by a unimodal potential function (UPF) (29), either learned from training data or provided by an expert (iv) (here, arbitrarily selected UPFs are shown as an example). Whereas π(x) in (iii) represents the imprecision of a single information item, μ(x) represents the fuzzy set of the given class. In the final step (v), π(x) is transformed into π(μ) (30). Note that π(x) aligns with μ(x) in such a way that π(μ) is close to 0 in (a) and close to 1 in (b) and (c).
Figure 9. Information items of the selected information sources. Each row, consisting of a scatter and a linear plot, belongs to sources from the datasets Sensorless Drive Diagnosis (SDD) (a–c), HAR (d–f), and Typical Sensor Defects (TSD) (g–i). Each point in the scatter plots represents the centre of gravity (24) of an information item, i.e., of π_μ(μ(x)). To give an intuition about the imprecision in the information, the possibility distributions of a single pair of information items are plotted below each scatter plot. The selected cases show linear relations, non-linear relations, non-redundancy, and aleatoric noise; in some, only part of the frame of discernment is perceived. Note that plots (g–i) are zoomed in for better visibility.
14 pages, 1456 KiB  
Article
Generating Artificial Sensor Data for the Comparison of Unsupervised Machine Learning Methods
by Bernd Zimmering, Oliver Niggemann, Constanze Hasterok, Erik Pfannstiel, Dario Ramming and Julius Pfrommer
Sensors 2021, 21(7), 2397; https://doi.org/10.3390/s21072397 - 30 Mar 2021
Cited by 5 | Viewed by 3142
Abstract
In the field of Cyber-Physical Systems (CPS), there is a large number of machine learning methods, and their intrinsic hyper-parameters vary hugely. Since no agreed-upon datasets for CPS exist, developers of new algorithms are forced to define their own benchmarks. This leads to a large number of algorithms each claiming benefits over other approaches but lacking a fair comparison. To tackle this problem, this paper defines a novel model for the generation process of data similar to that found in CPS. The model is based on well-understood system theory and allows many datasets with different characteristics in terms of complexity to be generated. The data pave the way for a comparison of selected machine learning methods in the exemplary field of unsupervised learning. Based on the synthetic CPS data, the data generation process is evaluated by analyzing the performance of the Self-Organizing Map, the One-Class Support Vector Machine, and a Long Short-Term Memory neural net in anomaly detection.
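The generation idea (an automaton supplying control values to a dynamical system, cf. the paper's Figure 1) can be sketched in a few lines. This is a deliberately simplified, hypothetical sketch: a mode automaton that steps through fixed setpoints and a first-order lag integrated with forward Euler, not the paper's actual model:

```python
def generate_cps_signal(setpoints, steps_per_mode=50, tau=10.0):
    """Each automaton mode holds one control value u; the dynamical
    system x' = (u - x) / tau relaxes towards it (Euler, dt = 1)."""
    x, trace = 0.0, []
    for u in setpoints:                 # mode changes of the automaton
        for _ in range(steps_per_mode): # system response within a mode
            x += (u - x) / tau
            trace.append(x)
    return trace

# Three modes commanding setpoints 1.0, -0.5, 2.0 in sequence.
trace = generate_cps_signal([1.0, -0.5, 2.0])
```

Sweeping parameters such as the number of modes, the time constant, or an added nonlinear observation mapping would yield families of datasets of varying complexity, which is the benchmarking idea the paper formalises.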
Figures:

Figure 1. General concept of the CPS data generation model: an automaton (e.g., containing four modes M_i) interacts with a dynamical system. Its control values u(t) force the dynamical system to move its response y(t) towards the control values. While performing this movement, the internal states x(t) ∈ ℝ^m, which define the order of the dynamical system, change their values according to Equation (1). As the automaton cyclically evaluates whether e_i from Equation (2) is true, mode changes are triggered automatically. The resulting trajectory of x(t) is then transformed through the nonlinear mapping function g(·) (Equation (3)) into the observation space o(t) ∈ ℝ^n.
Figure 2. The latent space of x(t) ∈ ℝ³ of a third-order dynamical system over 25 cycles of a five-mode automaton. Each dimension of x(t) is shown as an axis, allowing a graphical interpretation of the internal dynamics of the dynamical system from Equation (1). The colors indicate the trajectory taken to follow the commands within the automaton's modes; each state has a unique shape.
Figure 3. Using anomaly detection for the evaluation of unsupervised machine learning methods.
Figure 4. Relative difference of average area-under-the-curve (AUC) values, Δ_rel AUC (see text for exact definition), as a function of five parameters relevant for data complexity.
Figure 5. Comparison of Maximum Likelihood Estimation (MLE) versus mean squared error (MSE) as a function of variations in the number of dimensions in the observation space and pole shift.
Figure 6. Average AUC for the four machine learning (ML) algorithms as a function of the number of dimensions in the observation space.
Figure 7. AUC receiver operating characteristic (ROC) values for a decreasing number of rising exponential factors.
Figure 8. AUC ROC values for a real-world dataset.
12 pages, 3096 KiB  
Communication
Efficient Reject Options for Particle Filter Object Tracking in Medical Applications
by Johannes Kummert, Alexander Schulz, Tim Redick, Nassim Ayoub, Ali Modabber, Dirk Abel and Barbara Hammer
Sensors 2021, 21(6), 2114; https://doi.org/10.3390/s21062114 - 17 Mar 2021
Cited by 2 | Viewed by 2601
Abstract
Reliable object tracking that is based on video data constitutes an important challenge in diverse areas, including, among others, assisted surgery. Particle filtering offers a state-of-the-art technology for this challenge. Because a particle filter is based on a probabilistic model, it provides explicit [...] Read more.
Reliable object tracking that is based on video data constitutes an important challenge in diverse areas, including, among others, assisted surgery. Particle filtering offers a state-of-the-art technology for this challenge. Because a particle filter is based on a probabilistic model, it provides explicit likelihood values; in theory, the question of whether an object is reliably tracked can be addressed based on these values, provided that the estimates are correct. In this contribution, we investigate the question of whether these likelihood values are suitable for deciding whether the tracked object has been lost. An immediate strategy uses a simple threshold value to reject settings with a likelihood that is too small. We show in an application from the medical domain—object tracking in assisted surgery in the domain of Robotic Osteotomies—that this simple threshold strategy does not provide a reliable reject option for object tracking, in particular if different settings are considered. However, it is possible to develop reliable and flexible machine learning models that predict a reject based on diverse quantities that are computed by the particle filter. Modeling the task in the form of a regression enables a flexible handling of different demands on the tracking accuracy; modeling the challenge as an ensemble of classification tasks surpasses even these results, while offering the same flexibility. Full article
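The threshold-based reject strategy discussed in the abstract can be sketched with a toy one-dimensional particle filter; the Gaussian observation model and all names below are illustrative assumptions, not the paper's implementation:

```python
import math
import random

def likelihood(z, x, sigma=1.0):
    # Gaussian observation model p(z | x), assumed for illustration
    return math.exp(-0.5 * ((z - x) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def update_with_reject(particles, z, threshold, sigma=1.0):
    """One particle-filter update with a simple likelihood-threshold reject:
    if the mean observation likelihood over all particles falls below the
    threshold, the track is flagged as lost and the particles are kept."""
    weights = [likelihood(z, x, sigma) for x in particles]
    mean_lik = sum(weights) / len(weights)
    if mean_lik < threshold:
        return particles, mean_lik, True  # reject: object probably lost
    # systematic resampling proportional to the weights
    total = sum(weights)
    n = len(particles)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    u0 = random.random() / n
    resampled, i = [], 0
    for k in range(n):
        u = u0 + k / n
        while cdf[i] < u and i < n - 1:
            i += 1
        resampled.append(particles[i])
    return resampled, mean_lik, False
```

As the paper observes, a single fixed threshold transfers poorly across settings; the same quantities (mean likelihood, weight spread, effective sample size) can instead be fed to a learned regressor or an ensemble of classifiers.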
Figure 1
<p>Schematic view of material for jaw reconstruction surgery. (<b>a</b>) Three-dimensional (3D) model of the graft fitted to the patient’s pelvic bone. To reconstruct the geometry of the jaw, two separate transplants are necessary (colored in grey and purple). (<b>b</b>) Use of the bone graft to reconstruct the jaw. Because of the separation into two transplants, the bend of the jaw bone can be rebuilt.</p>
Figure 2
<p>Experimental setup: the 3D camera tracks the model of the bone graft in the depth image of the patient’s pelvis (blue). The projector displays cutting lines at the tracked position for the surgeon (red).</p>
Figure 3
<p>Recorded rosbag of a test surgery on a corpse shows the depth image with a registered color image. The projection on the bone can be seen in light green, the current tracking output in green.</p>
Figure 4
<p>Example settings in which tracking by particle filters faces difficulties. (<b>a</b>) Partial occlusion of the area to track. Here, a newly recorded track (in blue) is still stable. (<b>b</b>) A newly recorded track (in blue) is lost after the body was relocated.</p>
Figure 5
<p>Overview of our training results shows precision and recall for our baseline evaluation, the best-performing regression model, Random Forest regression (RFR), and the best-performing classification model, Random Forest classification (RFC).</p>
17 pages, 3345 KiB  
Article
A Reinforcement Learning Approach to View Planning for Automated Inspection Tasks
by Christian Landgraf, Bernd Meese, Michael Pabst, Georg Martius and Marco F. Huber
Sensors 2021, 21(6), 2030; https://doi.org/10.3390/s21062030 - 13 Mar 2021
Cited by 18 | Viewed by 4841
Abstract
Manual inspection of workpieces in highly flexible production facilities with small lot sizes is costly and less reliable compared to automated inspection systems. Reinforcement Learning (RL) offers promising, intelligent solutions for robotic inspection and manufacturing tasks. This paper presents an RL-based approach to [...] Read more.
Manual inspection of workpieces in highly flexible production facilities with small lot sizes is costly and less reliable compared to automated inspection systems. Reinforcement Learning (RL) offers promising, intelligent solutions for robotic inspection and manufacturing tasks. This paper presents an RL-based approach to determine a high-quality set of sensor view poses for arbitrary workpieces based on their 3D computer-aided design (CAD). The framework extends available open-source libraries and provides an interface to the Robot Operating System (ROS) for deploying any supported robot and sensor. The integration into the commonly used OpenAI Gym and Baselines leads to an expandable and comparable benchmark for RL algorithms. We give a comprehensive overview of related work in the field of view planning and RL. A comparison of different RL algorithms provides a proof of concept for the framework’s functionality in experimental scenarios. The obtained results exhibit a coverage ratio of up to 0.8, illustrating its potential impact and expandability. The project will be made publicly available along with this article. Full article
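The Gym-style interface described can be sketched with a toy environment in which each discrete action is a candidate sensor pose covering a fixed set of surface patches, and the reward is the surface-area gain; all names and numbers are illustrative assumptions, not the published framework:

```python
class ViewPlanningEnv:
    """Toy Gym-style view-planning environment: the state is the set of
    covered surface patches, actions are candidate sensor poses, and the
    reward is the normalized surface-area gain of the chosen view."""

    def __init__(self, view_coverage, n_patches, target_ratio=0.8, max_steps=10):
        self.view_coverage = view_coverage  # action index -> set of patch ids
        self.n_patches = n_patches
        self.target_ratio = target_ratio
        self.max_steps = max_steps

    def reset(self):
        self.covered = set()
        self.steps = 0
        return frozenset(self.covered)

    def step(self, action):
        self.steps += 1
        gained = self.view_coverage[action] - self.covered
        self.covered |= gained
        reward = len(gained) / self.n_patches       # surface-area gain
        ratio = len(self.covered) / self.n_patches  # coverage ratio
        done = ratio >= self.target_ratio or self.steps >= self.max_steps
        return frozenset(self.covered), reward, done, {"coverage": ratio}

env = ViewPlanningEnv([{0, 1, 2}, {2, 3}, {4, 5, 6, 7}], n_patches=8)
env.reset()
_, r0, done0, _ = env.step(0)     # covers patches 0-2
_, r2, done2, info = env.step(2)  # covers patches 4-7, reaching the target
```

An RL agent trained against such an interface only sees states, rewards, and the done flag, which is what makes the benchmark comparable across algorithms such as DQN and PPO.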
Figure 1
<p>Exemplary robot cell in the real world (<b>left</b>) and in simulation (<b>right</b>).</p>
Figure 2
<p>The framework architecture separated by application layer. Each instance of a layer inherits from its upper layer and displays a one-to-many relationship, e.g., multiple RL task environments descend from a robot environment.</p>
Figure 3
<p>A real point cloud taken by an Ensenso N35 (<b>left</b>) and a simulated point cloud (<b>right</b>).</p>
Figure 4
<p>Sampling discrete actions (poses, respectively) in (<b>a</b>) a squared grid or (<b>b</b>) a triangular grid with four sensor orientations per position, or (<b>c</b>) randomly inside a continuous space. Figure (<b>d</b>) depicts a continuous action space.</p>
Figure 5
<p>PPO approach for view planning. Here, <math display="inline"><semantics> <msub> <mi>e</mi> <mrow> <mi>S</mi> <mi>A</mi> <mi>G</mi> </mrow> </msub> </semantics></math> represents the surface area gain, <math display="inline"><semantics> <msub> <mi>p</mi> <mi>s</mi> </msub> </semantics></math> the sensor pose, <math display="inline"><semantics> <msub> <mi>p</mi> <mrow> <mi>w</mi> <mi>p</mi> </mrow> </msub> </semantics></math> the current workpiece pose, <math display="inline"><semantics> <msub> <mi>p</mi> <mi>e</mi> </msub> </semantics></math> the current robot pose, and <math display="inline"><semantics> <mrow> <msub> <mi>p</mi> <mi>t</mi> </msub> <mo>=</mo> <msup> <mrow> <mo>(</mo> <msub> <mi>p</mi> <mi>x</mi> </msub> <mo>,</mo> <msub> <mi>p</mi> <mi>y</mi> </msub> <mo>)</mo> </mrow> <mi>T</mi> </msup> </mrow> </semantics></math> the selected sensor pose.</p>
Figure 6
<p>An illustration of the test workpieces used in our experiments. Each of them is part of the ABC Dataset [<a href="#B22-sensors-21-02030" class="html-bibr">22</a>], except for the custom workpiece number 9.</p>
Figure 7
<p>Training results of Q-learning, DQN and PPO using different action spaces (squared grid, triangular grid, random poses, or continuous) and trained on three different workpieces as denoted above each plot.</p>
Figure 8
<p>Training results of DQN and PPO aiming at a high coverage ratio.</p>
15 pages, 1317 KiB  
Article
Toward Smart Traceability for Digital Sensors and the Industrial Internet of Things
by Sascha Eichstädt, Maximilian Gruber, Anupam Prasad Vedurmudi, Benedikt Seeger, Thomas Bruns and Gertjan Kok
Sensors 2021, 21(6), 2019; https://doi.org/10.3390/s21062019 - 12 Mar 2021
Cited by 26 | Viewed by 3685
Abstract
The Internet of Things (IoT) is characterized by a large number of interconnected devices or assets. Measurement instruments in the IoT are typically digital in the sense that their indications are available only as digital output. Moreover, a growing number of IoT sensors [...] Read more.
The Internet of Things (IoT) is characterized by a large number of interconnected devices or assets. Measurement instruments in the IoT are typically digital in the sense that their indications are available only as digital output. Moreover, a growing number of IoT sensors contain a built-in pre-processing system, e.g., for compensating unwanted effects. This paper considers the application of metrological principles to such so-called “smart sensors” in the IoT. It addresses the calibration of digital sensors, mathematical and semantic approaches, the communication of data quality and the meaning of traceability for the IoT in general. Full article
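One way to picture the communication of data quality discussed here is a sensor reading that carries a unit and a standard uncertainty, with the uncertainty propagated GUM-style through a linear calibration y = gain · x + offset. This is an illustrative sketch; the field names and the specific model are assumptions, not the paper's implementation:

```python
import math
from dataclasses import dataclass

@dataclass
class Reading:
    value: float
    uncertainty: float  # standard uncertainty, same unit as value
    unit: str
    timestamp_ns: int

def apply_linear_calibration(r, gain, gain_unc, offset=0.0):
    """Propagate the standard uncertainty through y = gain * x + offset,
    assuming uncorrelated inputs: u(y)^2 = gain^2 u(x)^2 + x^2 u(gain)^2."""
    y = gain * r.value + offset
    u = math.sqrt((gain * r.uncertainty) ** 2 + (r.value * gain_unc) ** 2)
    return Reading(y, u, r.unit, r.timestamp_ns)

raw = Reading(value=2.0, uncertainty=0.1, unit="rad/s", timestamp_ns=0)
cal = apply_linear_calibration(raw, gain=3.0, gain_unc=0.0)
```

Keeping value, unit, uncertainty, and timestamp together in one record is the essence of the metadata scheme the paper describes: downstream agents can interpret the data without out-of-band knowledge.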
Figure 1
<p>Example: X-axis angular velocity frequency response of an MPU-9250 yielded by dynamic calibration [<a href="#B7-sensors-21-02019" class="html-bibr">7</a>]. The frequency response over the calibrated frequency range (4 Hz to 250 Hz) can be described with Equation (<a href="#FD1-sensors-21-02019" class="html-disp-formula">1</a>) with <math display="inline"><semantics> <mrow> <mi>K</mi> <mo>=</mo> <mn>1</mn> </mrow> </semantics></math> and <math display="inline"><semantics> <mrow> <mi>n</mi> <mo>,</mo> <mi>m</mi> <mo>=</mo> <mn>8</mn> </mrow> </semantics></math>.</p>
Figure 2">
Figure 2
<p>(<b>Top</b>) Dataflow from sensor via “Smartup Unit” to a receiving PC running an agent framework for data processing. (<b>Bottom</b>) Visualization of stateless data protocol.</p>
Figure 3
<p>Chain of agents to estimate a measurand using a dynamic calibration model. A generic template for the agents is depicted above the agent chain.</p>
Figure 4
<p>Implemented example: Screenshot of the network topology as shown on the dashboard.</p>
Figure 5
<p>Implemented example: Comparison of the incoming (raw) and processed (deconvolved) signal. Note that the processed signal shows an increase in amplitude that corresponds to <a href="#sensors-21-02019-f001" class="html-fig">Figure 1</a>.</p>
Figure 6
<p>Example: Computed deconvolution filter and compensation behavior of the sensor presented in <a href="#sensors-21-02019-f001" class="html-fig">Figure 1</a>.</p>
Figure 7
<p>Definitions of the calibration, sensor and measurement models in OWL notation. The prefixes refer to the specific ontologies imported.</p>
Figure 8
<p>Abstract representation of a time series corresponding to data from a given sensor. The metadata necessary to interpret the actual measured quantities and time instances is contained in the scheme. The fields with green shading correspond to the metadata fields.</p>
Figure 9
<p>Relative uncertainty increase and redundancy loss values for condition monitoring of EMCs in the ZeMA testbed when using 5 sensors and taking out 0 to 4 sensors.</p>
28 pages, 1620 KiB  
Article
Developing Industrial CPS: A Multi-Disciplinary Challenge
by Martin W. Hoffmann, Somayeh Malakuti, Sten Grüner, Soeren Finster, Jörg Gebhardt, Ruomu Tan, Thorsten Schindler and Thomas Gamer
Sensors 2021, 21(6), 1991; https://doi.org/10.3390/s21061991 - 11 Mar 2021
Cited by 34 | Viewed by 5084
Abstract
The industrial Cyber–Physical System (CPS) is an emerging approach towards value creation in modern industrial production. The development and implementation of industrial CPS in real-life production are rewarding yet challenging. This paper aims to present a concept to develop, commercialize, operate, and maintain industrial [...] Read more.
The industrial Cyber–Physical System (CPS) is an emerging approach towards value creation in modern industrial production. The development and implementation of industrial CPS in real-life production are rewarding yet challenging. This paper aims to present a concept to develop, commercialize, operate, and maintain industrial CPS that can motivate future advances in the research and industrial practice of industrial CPS. We start by defining our understanding of an industrial CPS, specifying the components and key technological aspects of the industrial CPS, as well as explaining the alignment with existing work such as Industrie 4.0 concepts, followed by several use cases of industrial CPS in practice. The roles of each component and key technological aspect are described and the differences between traditional industrial systems and industrial CPS are elaborated. The multidisciplinary nature of industrial CPS leads to challenges when developing such systems, and we present a detailed description of several major sub-challenges that are key to the long-term sustainability of industrial CPS design. Since research on industrial CPS is still emerging, we also discuss existing approaches and novel solutions to overcome these sub-challenges. These insights will help researchers and industrial practitioners to develop and commercialize industrial CPS. Full article
Figure 1
<p>Concept of components and key technological aspects of a purpose-driven industrial Cyber–Physical System (CPS) and their relations to each other.</p>
Figure 2">
Figure 2
<p>The CPS of a medium-voltage switchgear structured along the concept presented in <a href="#sensors-21-01991-f001" class="html-fig">Figure 1</a>. The switchgear cabinets (bottom) are augmented with infrared sensor arrays. The live images, together with engineering know-how of the switchgear (e.g., CAD drawings of the mechanical parts), form a digital twin.</p>
Figure 3
<p>The CPS of an industrial robot structured along the concept presented in <a href="#sensors-21-01991-f001" class="html-fig">Figure 1</a>. The purpose of the CPS for the industrial robot application [<a href="#B21-sensors-21-01991" class="html-bibr">21</a>] is to enrich the physical robot with distributed learning and optimization capabilities for non-expert users. Therefore, the digital twin consists of a CAD, simulation and a programming environment.</p>
Figure 4
<p>The CPS of a process plant structured along the concept presented in <a href="#sensors-21-01991-f001" class="html-fig">Figure 1</a>. The purpose of the CPS for the process plant application [<a href="#B23-sensors-21-01991" class="html-bibr">23</a>] is to assist the operator by continuously providing essential information about the status of the process. The digital twin here is a tool to automatically identify deviations in the process based on process data.</p>
Figure 5
<p>The CPS of a powertrain structured along the concept presented in <a href="#sensors-21-01991-f001" class="html-fig">Figure 1</a>. External sensors are attached to a motor and bearing, e.g., using an AAS implementation, to create a digital twin of their engineering and condition data, on the basis of which data-based services, such as operations optimization, are provided.</p>
Figure 6
<p>Pillars of industrial AI and adjacent topics of discussion [<a href="#B39-sensors-21-01991" class="html-bibr">39</a>,<a href="#B43-sensors-21-01991" class="html-bibr">43</a>].</p>
Figure 7
<p>Identified major challenges in developing and managing industrial CPS, structured along the concept proposed in <a href="#sensors-21-01991-f001" class="html-fig">Figure 1</a>.</p>
Figure 8
<p>A CPS combines a vast number of aspects and can be interpreted as a system of systems. Red marks: existing (sub)systems for existing purposes. Green mark: enhanced (sub)system for enriched purposes.</p>
Figure 9
<p>Physical asset and domain understanding, data and information description, as well as first-principles or data-driven modeling have to be combined properly to achieve the next level of sustainable and valuable industrial solutions, i.e., as simple as possible but not simpler.</p>
25 pages, 5443 KiB  
Article
Smart Node Networks Orchestration: A New E2E Approach for Analysis and Design for Agile 4.0 Implementation
by Annalisa Bertoli, Andrea Cervo, Carlo Alberto Rosati and Cesare Fantuzzi
Sensors 2021, 21(5), 1624; https://doi.org/10.3390/s21051624 - 26 Feb 2021
Cited by 6 | Viewed by 2793
Abstract
The field of cyber-physical systems is a growing IT research area that addresses the deep integration of computing, communication and process control, possibly with humans in the loop. The goal of this area is to define modelling, controlling and programming methodologies for designing [...] Read more.
The field of cyber-physical systems is a growing IT research area that addresses the deep integration of computing, communication and process control, possibly with humans in the loop. The goal of this area is to define modelling, controlling and programming methodologies for designing and managing complex mechatronic systems, also called industrial agents. Our research topic mainly focuses on the area of data mining and analysis by means of multi-agent orchestration of intelligent sensor nodes using internet protocols, also providing web-based HMI visualizations for data interpretability and analysis. Thanks to the rapid spread of IoT systems, supported by modern and efficient telecommunication infrastructures and new decentralized control paradigms, the field of service-oriented programming finds new application in wireless sensor networks and the microservices paradigm; we adopted this paradigm in the implementation of two different industrial use cases. Indeed, we expect concrete and deep use of such technologies as 5G spreads. In the article, we describe the common software architectural pattern for IoT applications that we used for the distributed smart sensors, also providing design and implementation details. In the use case section, the prototypes developed as proof of concept and the KPIs used for system validation are described to provide a concrete solution overview. Full article
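The asynchronous publisher-subscriber communication between services mentioned in the abstract can be sketched with a minimal in-process broker; this is an illustrative stand-in for an MQTT-style bus, and the topic names are invented for the example:

```python
from collections import defaultdict

class Broker:
    """Minimal in-process publish-subscribe broker: services subscribe a
    callback to a topic and receive every message published on it."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self._subscribers[topic].append(callback)

    def publish(self, topic, message):
        for callback in self._subscribers[topic]:
            callback(message)

broker = Broker()
alerts = []
broker.subscribe("vehicle/alerts", alerts.append)
broker.publish("vehicle/alerts", {"node": "n1", "level": "warning"})
```

The decoupling is the point: a smart sensor node publishing on a topic does not know whether an HMI dashboard, a logger, or another service consumes the message, which is what makes the node network easy to reconfigure.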
Figure 1
<p>Cyber-Physical system key characteristics.</p>
Figure 2">
Figure 2
<p>Star, mesh and cluster-tree architecture schema.</p>
Figure 3
<p>Multi-layered IoT infrastructure.</p>
Figure 4
<p>REST API synchronous communication schema.</p>
Figure 5
<p>Publisher-subscriber architecture based on asynchronous inter-platform communication of services.</p>
Figure 6
<p>Functional architecture.</p>
Figure 7
<p>System architecture.</p>
Figure 8
<p>Vehicle dashboard page.</p>
Figure 9
<p>Server dashboard page.</p>
Figure 10
<p>Alert table on the server page.</p>
Figure 11
<p>Website map of the HMI application.</p>
Figure 12
<p>Functional architecture.</p>
Figure 13
<p>System architecture.</p>
Figure 14
<p>Website main page.</p>
Figure 15
<p>Debugging phase of the BIM updater: data retrieved from the System BUS, saved into an IFC file and then visualized in the BIM model visualizer.</p>

2020


41 pages, 2339 KiB  
Article
Process-Driven and Flow-Based Processing of Industrial Sensor Data
by Klaus Kammerer, Rüdiger Pryss, Burkhard Hoppenstedt, Kevin Sommer and Manfred Reichert
Sensors 2020, 20(18), 5245; https://doi.org/10.3390/s20185245 - 14 Sep 2020
Cited by 16 | Viewed by 4920
Abstract
For machine manufacturing companies, besides the production of high-quality and reliable machines, requirements have emerged to maintain machine-related aspects through digital services. The development of such services in the field of the Industrial Internet of Things (IIoT) deals with solutions such [...] Read more.
For machine manufacturing companies, besides the production of high-quality and reliable machines, requirements have emerged to maintain machine-related aspects through digital services. The development of such services in the field of the Industrial Internet of Things (IIoT) deals with solutions such as effective condition monitoring and predictive maintenance. However, appropriate data sources are needed on which digital services can be technically based. As many powerful and cheap sensors have been introduced in recent years, their integration into complex machines is promising for developing digital services for various scenarios. Components handling the data recorded by these sensors must usually deal with large amounts of data. In particular, the labeling of raw sensor data must be supported by a technical solution. To deal with these data handling challenges in a generic way, a sensor processing pipeline (SPP) was developed, which provides effective methods to capture, process, store, and visualize raw sensor data based on a processing chain. The SPP approach is presented in this work based on the example of a machine manufacturing company. For the company involved, the approach has revealed promising results. Full article
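The windowing step at the heart of such a processing chain (cf. the sliding-window concept in Figure 5) can be sketched as a generator over a raw sensor stream; this is illustrative, not the SPP implementation:

```python
from collections import deque

def sliding_windows(stream, size, step):
    """Yield windows of `size` samples from a sensor stream, advancing by
    `step` samples each time (step == size gives tumbling windows)."""
    buf = deque(maxlen=size)
    for i, sample in enumerate(stream):
        buf.append(sample)
        if len(buf) == size and (i - size + 1) % step == 0:
            yield list(buf)

# Overlapping windows (step < size) vs. tumbling windows (step == size):
list(sliding_windows(range(5), size=3, step=1))  # [[0,1,2], [1,2,3], [2,3,4]]
list(sliding_windows(range(6), size=3, step=3))  # [[0,1,2], [3,4,5]]
```

Because the generator holds only `size` samples at a time, downstream pipeline nodes can compute per-window features without loading the full recording into memory.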
Figure 1
<p>Signals of a compacting station.</p>
Figure 2">
Figure 2
<p>An Uhlmann pharmaceutical packaging line with its production sections.</p>
Figure 3
<p>Communication schema.</p>
Figure 4
<p>Information flow processing schema, adopted from [<a href="#B9-sensors-20-05245" class="html-bibr">9</a>].</p>
Figure 5
<p>Window types and sliding window concept.</p>
Figure 6
<p>Excerpt of a machine maintenance process model (simplified).</p>
Figure 7
<p>Schematic overview of the context-aware process execution framework.</p>
Figure 8
<p>Schematic overview of the sensor processing pipeline (SPP).</p>
Figure 9
<p>BTTM frames.</p>
Figure 10
<p>BTTM group state diagram.</p>
Figure 11
<p>Examples of BTTM transmissions.</p>
Figure 12
<p>Processing pipeline concept.</p>
Figure 13
<p>Schema of an SPP node.</p>
Figure 14
<p>Overview of the sensor processing pipeline.</p>
Figure 15
<p>Correlated period of two sensor signals.</p>
Figure 16
<p>Data point synchronization.</p>
Figure 17
<p>Schedule controller.</p>
Figure 18
<p>User interface of the SPP data visualizer.</p>

Planned Papers

The list below contains only planned manuscripts, some of which have not yet been received by the Editorial Office. Papers submitted to MDPI journals are subject to peer review.

Title: Towards smart traceability for digital sensors and the industrial Internet of Things
Authors: S. Eichstädt; M. Gruber; B. Seeger; Th. Bruns
Affiliation: Physikalisch-Technische Bundesanstalt, Braunschweig and Berlin, Germany

Back to Top