Information, Volume 11, Issue 11 (November 2020) – 49 articles

Cover Story (view full-size image): The ITS-G5 standard is the basis for European communication technologies and protocols that assist public road users by providing them with additional traffic information. The scientific community is developing ITS-G5 applications for various purposes. Our research team is currently working on the development of ITS applications that can be applied in public transport networks to support the dissemination of ITS technology. At this stage, our focus was an ITS-G5 prototype that aims to increase the safety of pedestrians and drivers in the vicinity of a pedestrian crosswalk by sending ITS-G5 DENM messages to the vehicles. These messages are analyzed and, if they are relevant, presented to the driver on an onboard infotainment system. View this paper
  • Issues are regarded as officially published after their release is announced to the table of contents alert mailing list.
  • You may sign up for e-mail alerts to receive table of contents of newly released issues.
  • PDF is the official format for papers published in both HTML and PDF forms. To view the papers in PDF format, click on the "PDF Full-text" link, and use the free Adobe Reader to open them.
16 pages, 2996 KiB  
Article
Access Control in NB-IoT Networks: A Deep Reinforcement Learning Strategy
by Yassine Hadjadj-Aoul and Soraya Ait-Chellouche
Information 2020, 11(11), 541; https://doi.org/10.3390/info11110541 - 23 Nov 2020
Cited by 10 | Viewed by 2381
Abstract
The Internet of Things (IoT) is a key enabler of the digital mutation of our society. Driven by various services and applications, Machine Type Communications (MTC) will become an integral part of our daily life over the next few years. Meeting the ITU-T requirements in terms of density, battery longevity, coverage, price, and supported mechanisms and functionalities, Cellular IoT, and particularly Narrowband-IoT (NB-IoT), is identified as a promising candidate to handle massive MTC accesses. However, this massive connectivity poses a huge challenge for network operators in terms of scalability. Indeed, the connection to the network in cellular IoT passes through a random access procedure, and a high concentration of IoT devices would very quickly lead to a bottleneck; this procedure therefore needs to be enhanced as connectivity grows. With this in mind, we propose in this paper to apply the access class barring (ACB) mechanism to regulate the number of devices competing for access. In order to derive the blocking factor, we formulated the access problem as a Markov decision process that we were able to solve using one of the most advanced deep reinforcement learning techniques. The evaluation of the proposed access control, through simulations, shows the effectiveness of our approach compared to existing approaches such as the adaptive one and the Proportional Integral Derivative (PID) controller. Indeed, it manages to keep the proportion of access attempts close to the optimum, despite the lack of accurate information on the number of access attempts. Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols)
Show Figures
Figure 1: The average number of access successes as a function of the number of devices, for different access opportunities.
Figure 2: Preamble sequence structure.
Figure 3: Random access procedure.
Figure 4: System model. Subsystem 1 represents the terminals that would like to connect; the objects in the state variable x₁ represent those that can try to connect with a probability p, and in the case of a failure they go into the waiting state x₁,L for a back-off time duration. Subsystem 2 represents the objects coming from the different classes that can try to choose a preamble. In the case of a collision, they may attempt access a number of times. They leave subsystem 2 when they succeed in being the only ones to have chosen a preamble or when they reach the maximum number of attempts (with a rate of θ).
Figure 5: Arrival regulation system.
Figure 6: The access probability for the considered strategies.
Figure 7: The average latency of the devices for the considered strategies.
Figure 8: The average reward of the considered strategies.
Figure 9: The status of the preambles with the different approaches.
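As a rough illustration of the access class barring step that the paper's deep reinforcement learning agent regulates, here is a minimal Python sketch of one random-access opportunity; the device count, barring factor, and preamble count are illustrative assumptions, not values from the paper.

```python
import random

def acb_round(n_backlogged, p_acb, n_preambles=54):
    """One random-access opportunity under Access Class Barring (ACB).

    Each backlogged device passes the barring check with probability p_acb,
    then picks one of n_preambles at random; a preamble chosen by exactly
    one device is a success, otherwise it collides.
    """
    choices = [random.randrange(n_preambles)
               for _ in range(n_backlogged) if random.random() < p_acb]
    counts = {c: choices.count(c) for c in set(choices)}
    successes = sum(1 for c in counts.values() if c == 1)
    collisions = sum(1 for c in counts.values() if c > 1)
    return successes, collisions

# Illustrative values only: 500 backlogged devices, barring factor 0.1.
print(acb_round(500, 0.1))
```

A controller (the paper uses a deep reinforcement learning agent) would adjust p_acb from round to round to keep the success count near its optimum.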
11 pages, 3187 KiB  
Article
Humanities: The Outlier of Research Assessments
by Güleda Doğan and Zehra Taşkın
Information 2020, 11(11), 540; https://doi.org/10.3390/info11110540 - 23 Nov 2020
Cited by 2 | Viewed by 5832
Abstract
Commercial bibliometric databases, and the quantitative indicators presented by them, are widely used for research assessment purposes, which is not fair for the humanities. The humanities are different from all other areas by nature in many aspects. This study aimed to show the extent of the difference in terms of five size-independent bibliometric indicators, based on citations and collaborations. We used categorical InCites data (1980–2020) to compare six main Organisation for Economic Co-operation and Development (OECD) subject areas, and the 45,987 sources of humanities, to make a comparison for subareas of the humanities. Results showed that the humanities are statistically different from all other areas, including social sciences, with high effect sizes in terms of the five indicators taken into consideration. Besides that, all the subareas of the humanities differ from each other. This main finding indicates that the humanities do not need new indicators for quantitative evaluation, but different approaches for assessment, such as bottom-up approaches. Full article
(This article belongs to the Special Issue ICT Enhanced Social Sciences and Humanities)
Show Figures
Figure 1: Distribution of publications and citations to the Organisation for Economic Co-operation and Development (OECD) subareas.
Figure 2: Boxplot of citations per publication for six main areas. (Each dot in the boxplot represents one Web of Science (WoS) subject category in the related OECD subject. The average citations per paper for the 6 main and 39 second-level OECD subjects are shown in Appendix B.)
Figure 3: 95% confidence interval and scatter graphs for subareas of the humanities, based on journal data for citations per publication (first line) and percentage of documents cited (second line).
Figure A1: OECD-WoS subject classification scheme (high-resolution image: http://zehrataskin.com/MDPI_Information/appendix.jpg).
Figure A2: Citations per paper for the 6 main and 39 second-level OECD subjects.
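For readers unfamiliar with size-independent indicators, the following sketch shows how citations per publication for two areas could be compared with a nonparametric test and a simple effect size; the numbers are invented, and the paper's actual analysis of InCites data may use different tests.

```python
import numpy as np
from scipy.stats import mannwhitneyu

# Illustrative citations-per-publication values per subject category;
# the paper works with InCites data for six OECD areas, not these made-up numbers.
humanities = np.array([0.8, 1.1, 0.5, 0.9, 1.3])
social_sciences = np.array([3.2, 4.1, 2.8, 5.0, 3.7])

u, p_value = mannwhitneyu(humanities, social_sciences, alternative="two-sided")
# Rank-biserial correlation as a simple effect size for the U statistic.
effect_size = 1 - 2 * u / (len(humanities) * len(social_sciences))
print(u, p_value, effect_size)
```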
40 pages, 13898 KiB  
Article
Addressing Misinformation in Online Social Networks: Diverse Platforms and the Potential of Multiagent Trust Modeling
by Robin Cohen, Karyn Moffatt, Amira Ghenai, Andy Yang, Margaret Corwin, Gary Lin, Raymond Zhao, Yipeng Ji, Alexandre Parmentier, Jason P’ng, Wil Tan and Lachlan Gray
Information 2020, 11(11), 539; https://doi.org/10.3390/info11110539 - 23 Nov 2020
Cited by 4 | Viewed by 5818
Abstract
In this paper, we explore how various social networking platforms currently support the spread of misinformation. We then examine the potential of a few specific multiagent trust modeling algorithms from artificial intelligence, towards detecting that misinformation. Our investigation reveals that specific requirements of each environment may require distinct solutions for the processing. This then leads to a higher-level proposal for the actions to be taken in order to judge trustworthiness. Our final reflection concerns what information should be provided to users, once there are suspected misleading posts. Our aim is to enlighten both the organizations that host social networking and the users of those platforms, and to promote steps forward for more pro-social behaviour in these environments. As a look to the future and the growing need to address this vital topic, we reflect as well on two related topics of possible interest: the case of older adult users and the potential to track misinformation through dedicated data science studies, of particular use for healthcare. Full article
(This article belongs to the Special Issue Tackling Misinformation Online)
Show Figures
Figure A1: #Health#Fitness#Telstra#Cancer#Radiation#SmartMeters example from Twitter.
Figure A2: Maxwithaxe example from Twitter.
Figure A3: Kelley Eidem @CuresCancer example from Twitter.
Figure A4: BlueSky example from Twitter.
Figure A5: KyleBass and Jordon Sather examples from Twitter (coronavirus).
Figure A6: Front page of the r/science subreddit from Reddit.
Figure A7: Sample individual page on Reddit.
Figure A8: Misinformation on Reddit. (a) A user claimed to have developed a certain electronic system. The post received a 74.2 thousand net score and 3 thousand comments, which is considered far above average for the r/teenagers community with 1.8 million subscribers. Many initial responses were positive. Only after a few dozen threads did negative threads begin to show up, with users noticing inconsistencies with the setup in the picture. (b) A user stole and reposted a story made by another user, claiming it as their own. Before the community response was removed, it received a 19.2 thousand net score, which is considered a lot for the r/AskReddit community. Initial responses were unsuspecting. However, another user commented and provided definitive evidence that the author stole and reused their story. (c) A user claimed a picture was taken in the 1970s, but it was clearly modern. The post received a 732 net score and 31 comments, which is not a lot compared to top posts in the subreddit but still a decent amount. After just two comment threads, a user picked up a detail in the picture that showed it must have been taken much more recently. (d) A user referred to an article from a questionable source. The post received a 498 net score and 188 comments, which is not a lot compared to top posts in the subreddit, but still a decent amount. A few comment threads in, users started to point out that the cited article did not seem to be very credible and the news was not true. (e) A user posted a link to a questionable source. The post received a 6.7 thousand net score and 957 comments, a fairly large amount for the subreddit. Despite the post's high rating, users pointed out that the study was unreliable and inconclusive quite early on. Many also claimed that the author had a history of blindly linking as many studies as possible without verifying which ones seemed reasonable.
Figure A9: Sample Facebook Page.
Figure A10: Sample Facebook Group.
Figure A11: Facebook misattributed quote.
Figure A12: Facebook inflammatory post.
Figure A13: Facebook suspected content, low response.
Figure A14: Facebook suspected content, joke.
Figure A15: Facebook example of ideology.
Figure A16: Facebook healthcare example.
Figure A17: Facebook climate change example.
Figure A18: Facebook coronavirus example (fact checking).
Figure A19: Snapchat samples.
Figure A20: Instagram samples.
Figure A21: Instagram healthcare example.
Figure A22: Instagram Disney example.
Figure A23: Google acting on misinformation (coronavirus).
Figure A24: Twitter algorithm to detect misinformation.
Figure A25: Twitter algorithm to inform users.
Figure A26: Example of Reddit misinformation for our algorithm.
Figure A27: Display of the thread for the example.
Figure A28: Control flow of the model.
Figure A29: Data flow of the model.
10 pages, 1024 KiB  
Article
American Children’s Screen Time: Diminished Returns of Household Income in Black Families
by Shervin Assari
Information 2020, 11(11), 538; https://doi.org/10.3390/info11110538 - 20 Nov 2020
Cited by 9 | Viewed by 3854
Abstract
While increased household income is associated with overall decreased screen time for children, less is known about the effect of racial variation on this association. According to Minorities’ Diminished Returns (MDRs) theory, family income and other economic resources show weaker association with children’s developmental, behavioral, and health outcomes for racialized groups such as black families, due to the effect of racism and social stratification. In this study, we investigated the association, by race, between family income and children’s screen time, as a proxy of screen time. This longitudinal study followed 15,022 American children aged 9–11 over a 1-year period. The data came from the baseline of the Adolescent Brain Cognitive Development (ABCD) study. The independent variable was family income, and it was categorized as a three-level nominal variable. The dependent variable, screen time, was a continuous variable. Ethnicity, gender, parental education, and marital status were the covariates. The results showed that family income was inversely associated with children’s screen time. However, there was a weaker inverse association seen in black families when compared with white families. This was documented by a significant statistical interaction between race and family income on children’s screen time. Diminished association between family income and children’s screen time for black families, compared with white families, is similar to MDRs and reflects a health risk to high-income black children. In a society where race and skin color determine opportunities and treatment by society, children from middle class black families remain at risk across multiple domains. We should not assume that income similarly promotes the health of all racial and ethnic groups. Addressing health and behavioral inequalities requires interventions that go beyond equalizing socioeconomic resources for black families. Marginalization, racism, and poverty interfere with the normal family income-related development of American children. Full article
Show Figures
Figure 1: The association between family income and children's screen time overall.
Figure 2: The association between family income and children's screen time by race.
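A hedged sketch of how the reported race-by-income interaction could be tested: an ordinary least squares model with an interaction term fitted on synthetic data. The variable names, coding, and data are placeholders, not the ABCD study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Synthetic stand-in for the study variables; the point is the income-by-race
# interaction term that a Minorities' Diminished Returns test relies on.
rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "income": rng.integers(1, 4, n),   # three-level family income (assumed coding)
    "black": rng.integers(0, 2, n),    # 1 = black family, 0 = white (assumed coding)
})
# A weaker (less negative) income slope when black == 1 mimics a diminished return.
df["screen_time"] = 6 - (1.0 - 0.6 * df["black"]) * df["income"] + rng.normal(0, 1, n)

model = smf.ols("screen_time ~ income * black", data=df).fit()
print(model.params)   # the income:black coefficient captures the interaction
```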
15 pages, 348 KiB  
Article
Evaluation of Attackers’ Skill Levels in Multi-Stage Attacks
by Terézia Mézešová, Pavol Sokol and Tomáš Bajtoš
Information 2020, 11(11), 537; https://doi.org/10.3390/info11110537 - 19 Nov 2020
Cited by 2 | Viewed by 3652
Abstract
The rapid move to digitalization and usage of online information systems brings new and evolving threats that organizations must protect themselves from and respond to. Monitoring an organization’s network for malicious activity has become a standard practice together with event and log collection from network hosts. Security operation centers deal with a growing number of alerts raised by intrusion detection systems that process the collected data and monitor networks. The alerts must be processed so that the relevant stakeholders can make informed decisions when responding to situations. Correlation of alerts into more expressive intrusion scenarios is an important tool in reducing false-positive and noisy alerts. In this paper, we propose correlation rules for identifying multi-stage attacks. Another contribution of this paper is a methodology for inferring from an alert the values needed to evaluate the attack in terms of the attacker’s skill level. We present our results on the CSE-CIC-IDS2018 data set. Full article
(This article belongs to the Special Issue Advanced Topics in Systems Safety and Security)
Show Figures
Figure 1: Overview of the processing and evaluation stages.
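The following is a minimal sketch of the kind of alert correlation rule described in the abstract: alerts sharing the same source/destination pair within a time window are chained into a candidate multi-stage scenario. The rule, fields, and signatures are simplified assumptions, not the paper's exact correlation rules.

```python
from datetime import datetime, timedelta

# Toy alerts; real input would come from an IDS processing traffic such as CSE-CIC-IDS2018.
alerts = [
    {"time": datetime(2018, 2, 14, 10, 0), "src": "10.0.0.5", "dst": "10.0.0.9", "sig": "port scan"},
    {"time": datetime(2018, 2, 14, 10, 7), "src": "10.0.0.5", "dst": "10.0.0.9", "sig": "brute force"},
    {"time": datetime(2018, 2, 14, 12, 0), "src": "10.0.0.7", "dst": "10.0.0.2", "sig": "port scan"},
]

def correlate(alerts, window=timedelta(minutes=30)):
    """Group alerts into candidate multi-stage scenarios when they share the
    same source/destination pair and occur within the time window."""
    scenarios = []
    for a in sorted(alerts, key=lambda x: x["time"]):
        for s in scenarios:
            last = s[-1]
            if (a["src"], a["dst"]) == (last["src"], last["dst"]) and a["time"] - last["time"] <= window:
                s.append(a)
                break
        else:
            scenarios.append([a])
    return scenarios

print([[a["sig"] for a in s] for s in correlate(alerts)])
```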
17 pages, 860 KiB  
Article
Document Summarization Based on Coverage with Noise Injection and Word Association
by Heechan Kim and Soowon Lee
Information 2020, 11(11), 536; https://doi.org/10.3390/info11110536 - 19 Nov 2020
Cited by 1 | Viewed by 2187
Abstract
Automatic document summarization is a field of natural language processing that is rapidly improving with the development of end-to-end deep learning models. In this paper, we propose a novel summarization model that consists of three methods. The first is a coverage method based on noise injection that makes the attention mechanism select only important words by treating previous context information as noise. This alleviates the problem of the summarization model generating the same word sequence repeatedly. The second is a word association method that updates the information of each word by comparing the information of the current step with the information of all previous decoding steps. Based on the words that follow, this captures changes in the meaning of words that have already been decoded. The third is a method using a suppression loss function that explicitly minimizes the probabilities of non-answer words. The proposed summarization model showed good performance on some recall-oriented understudy for gisting evaluation (ROUGE) metrics compared to the state-of-the-art models in the CNN/Daily Mail summarization task, and these results were achieved with very few learning steps compared to those models. Full article
(This article belongs to the Special Issue Natural Language Processing for Social Media)
Show Figures
Figure 1: Structure of the proposed model.
Figure 2: Detailed process of the coverage method based on noise injection.
Figure 3: Detailed process of the word association method.
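As a hedged sketch of the third component, a suppression-style loss that penalizes probability mass on non-answer words could look as follows; the exact form and weighting used in the paper may differ.

```python
import numpy as np

def suppression_loss(probs, answer_idx, alpha=1.0):
    """Cross-entropy on the reference word plus an explicit penalty that pushes
    the probabilities of all non-answer words toward zero (a guess at the form;
    the paper defines its own suppression loss)."""
    ce = -np.log(probs[answer_idx] + 1e-12)
    mask = np.ones_like(probs, dtype=bool)
    mask[answer_idx] = False
    suppress = -np.log(1.0 - probs[mask] + 1e-12).sum()
    return ce + alpha * suppress

vocab_probs = np.array([0.70, 0.10, 0.15, 0.05])   # softmax output for one decoding step
print(suppression_loss(vocab_probs, answer_idx=0))
```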
21 pages, 2129 KiB  
Article
Towards Context-Aware Opinion Summarization for Monitoring Social Impact of News
by Alejandro Ramón-Hernández, Alfredo Simón-Cuevas, María Matilde García Lorenzo, Leticia Arco and Jesús Serrano-Guerrero
Information 2020, 11(11), 535; https://doi.org/10.3390/info11110535 - 18 Nov 2020
Cited by 3 | Viewed by 3517
Abstract
Opinion mining and summarization of the increasing user-generated content on different digital platforms (e.g., news platforms) play significant roles in the success of government programs and initiatives in digital governance, by extracting and analyzing citizens' sentiments for decision-making. Opinion mining provides the sentiment from content, whereas summarization aims to condense the most relevant information. However, most of the reported opinion summarization methods are conceived to obtain generic summaries, and the context that originates the opinions (e.g., the news) has not usually been considered. In this paper, we present a context-aware opinion summarization model for monitoring the opinions generated from news. In this approach, topic modeling and the news content are combined to determine the "importance" of opinionated sentences. The effectiveness of different settings of our model was evaluated through several experiments carried out over Spanish news and opinions collected from a real news platform. The obtained results show that our model can generate opinion summaries focused on essential aspects of the news, as well as cover the main topics in the opinionated texts well. The integration of term clustering, word embeddings, and similarity-based sentence-to-news scoring turned out to be the most promising and effective setting of our model. Full article
(This article belongs to the Special Issue Information Retrieval and Social Media Mining)
Show Figures
Figure 1: Workflow overview of the proposed model.
Figure 2: Results of the Silhouette measure for the two clustering approaches in topic detection on the TelecomServ dataset, applying (a) WordNet and (b) word embedding-based semantic processing approaches.
Figure 3: Results of the Silhouette measure for the two clustering approaches in topic detection on the COVID-19 dataset, applying (a) WordNet and (b) word embedding-based semantic processing approaches.
Figure 4: Averaged Silhouette values of compared topic detection approaches applied to the TelecomServ dataset.
Figure 5: Averaged Silhouette values of compared topic detection approaches applied to the COVID-19 dataset.
Figure 6: Results of JSD_News (Jensen–Shannon divergence focused on the news) applying (a) term and (b) sentence clustering, using WordNet and word embeddings on the TelecomServ dataset.
Figure 7: Results of JSD_Opinions (Jensen–Shannon divergence focused on the opinions) applying (a) term and (b) sentence clustering, using WordNet and word embeddings on the TelecomServ dataset.
Figure 8: Results of JSD_News applying (a) term and (b) sentence clustering, using WordNet and word embeddings on the COVID-19 dataset.
Figure 9: Results of JSD_Opinions applying (a) term and (b) sentence clustering, using WordNet and word embeddings on the COVID-19 dataset.
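A minimal sketch of the similarity-based sentence-to-news scoring idea mentioned in the abstract: opinionated sentences are ranked by embedding similarity to the news content. The vectors here are random placeholders rather than real word-embedding averages, and the actual scoring in the paper may combine further signals.

```python
import numpy as np

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def score_sentences(sentence_vecs, news_vec, top_k=2):
    """Rank opinionated sentences by embedding similarity to the news content;
    the highest-scoring sentences are candidates for the opinion summary."""
    scores = [cosine(v, news_vec) for v in sentence_vecs]
    order = np.argsort(scores)[::-1][:top_k]
    return order, scores

rng = np.random.default_rng(1)
news_vec = rng.normal(size=50)          # placeholder for the news embedding
sentence_vecs = rng.normal(size=(5, 50))  # placeholders for opinion sentence embeddings
print(score_sentences(sentence_vecs, news_vec))
```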
16 pages, 928 KiB  
Article
Identification of Social Aspects by Means of Inertial Sensor Data
by Luca Bedogni and Giacomo Cabri
Information 2020, 11(11), 534; https://doi.org/10.3390/info11110534 - 17 Nov 2020
Cited by 4 | Viewed by 2130
Abstract
Today’s applications and providers are very interested in knowing the social aspects of users in order to customize the services they provide and to be more effective. Among these aspects, the most frequented places and the paths to reach them are information that turns out to be very useful to define users’ habits. The most exploited means to acquire positions and paths is the GPS sensor; however, it has been shown that leveraging inertial data from installed sensors can also lead to path identification. In this work, we present the Computationally Efficient algorithm to Reconstruct Vehicular Traces (CERT), a novel algorithm which computes the path traveled by a vehicle using accelerometer and magnetometer data. We show that by analyzing data obtained through the accelerometer and the magnetometer in vehicular scenarios, CERT achieves almost perfect identification for medium and small sized cities. Moreover, we show that the longer the path, the easier it is to recognize it. We also present results characterizing the privacy risks depending on the area of the world, since, as we show, urban dynamics play a key role in path detection. Full article
(This article belongs to the Special Issue The Integration of Digital and Social Systems)
Show Figures
Figure 1: Example run of the CERT algorithm. (a) shows G^(P), while (b) shows G^(I). The clique generation, pictured in (c), finds all the subgraphs of length N = 4 according to the length of G^(P). The detection step is shown in (d), where we find the best possible match of G^(P).
Figure 2: (a,b) show the PDF of the road angles and road lengths for different worldwide cities, respectively. (c) shows the α and β values for different sized cities in the EU and US.
Figure 3: Overarching schema of our proposal. Sensors read data from the car movement, which are reported to a central server. Here we create the two graphs, G^(P) and G^(I), and from the latter we find all the possible subgraphs of any size. We then perform the matching between G^(P) and the subgraphs we found, eventually detecting the best possible match and reporting it back. We note that the G^(I) download and the clique operation can be performed in advance, and stored in memory, to save time for the matching operation.
Figure 4: Measurement distribution gathered with 10 different smartphones placed at a constant direction.
Figure 5: Number of paths in different cities, varying ϵ and δ. The cities tested are Bologna, Italy (a), Austin, Texas (b), Marrakech, Morocco (c), Manila, Philippines (d), Buenos Aires, Argentina (e), and Auckland, New Zealand (f).
Figure 6: (a) shows the identification probability for increasingly large cities. (b) shows the same metric, plotted instead against the path length. Finally, (c) shows the comparison when considering only the accelerometer, only the magnetometer, or both.
Figure 7: (a) shows the time needed to perform the clique generation and the detection for different cities worldwide. (b) shows the number of different subgraphs found for the same cities, versus the length of the path. Finally, (c) shows the comparison in time between this work and [30].
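A hedged sketch of the matching idea behind CERT: a sequence of turn angles and segment lengths reconstructed from inertial data is compared against candidate paths from the road graph under angle and length tolerances. The tolerances and values below are illustrative assumptions, not the paper's parameters.

```python
def matches(measured, candidate, eps_angle=10.0, delta_len=0.15):
    """Check whether a candidate path from the map explains the inertial trace:
    each measured (turn angle in degrees, segment length in metres) must agree
    with the candidate within eps_angle and a relative length tolerance delta_len."""
    if len(measured) != len(candidate):
        return False
    for (ma, ml), (ca, cl) in zip(measured, candidate):
        if abs(ma - ca) > eps_angle or abs(ml - cl) > delta_len * cl:
            return False
    return True

trace = [(90, 120), (-85, 300), (92, 80)]          # toy trace from accelerometer/magnetometer
candidates = [[(88, 110), (-90, 310), (90, 85)],   # toy paths extracted from the road graph
              [(45, 200), (-90, 310), (90, 85)]]
print([matches(trace, c) for c in candidates])
```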
31 pages, 1498 KiB  
Article
Cybersecurity Challenges in Industry: Measuring the Challenge Solve Time to Inform Future Challenges
by Tiago Espinha Gasiba, Ulrike Lechner and Maria Pinto-Albuquerque
Information 2020, 11(11), 533; https://doi.org/10.3390/info11110533 - 16 Nov 2020
Cited by 6 | Viewed by 3885
Abstract
Cybersecurity vulnerabilities in industrial control systems have been steadily increasing over the last few years. One possible way to address this issue is through raising the awareness (through education) of software developers, with the intent to increase software quality and reduce the number of vulnerabilities. CyberSecurity Challenges (CSCs) are a novel serious game genre that aims to raise industrial software developers’ awareness of secure coding, secure coding guidelines, and secure coding best practices. An important industry-specific requirement to consider in designing these kinds of games is related to the whole event’s duration and how much time it takes to solve each challenge individually—the challenge solve time. In this work, we present two different methods to compute the challenge solve time: one method based on data collected from the CSC dashboard and another method based on a challenge heartbeat. The results obtained by both methods are presented; both methods are compared to each other, and the advantages and limitations of each method are discussed. Furthermore, we introduce the notion of a player profile, which is derived from dashboard data. Our results and contributions aim to establish a method to measure the challenge solve time, inform the design of future challenges, and improve coaching during CSC gameplay. Full article
(This article belongs to the Special Issue Computer Programming Education)
Show Figures
Figure 1: Architecture of cybersecurity challenges.
Figure 2: Exemplary dashboard of the cybersecurity challenge using CTFd.
Figure 3: Computing the challenge solve time from dashboard interactions.
Figure 4: Web interface of the Sifu platform.
Figure 5: Computing the challenge solve time from the Sifu platform's heartbeat.
Figure 6: Challenge solve time from the dashboard based on flag submission of Sifu challenges.
Figure 7: Challenge solve time from the challenge heartbeat in the Sifu platform.
Figure 8: Consolidated P_f computed with data from the dashboard and Sifu platform.
Figure 9: P_f for different challenges computed with data from the dashboard and Sifu platform.
Figure 10: Examples of normalized time vs. normalized total interactions.
Figure 11: Six real-world examples of the challenge heartbeat.
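A minimal sketch of the dashboard-based method described above: the challenge solve time is taken as the span between a player's first recorded interaction with a challenge and the accepted flag submission. The event names and timestamps are invented for illustration and do not mirror the CTFd data model.

```python
from datetime import datetime

# Simplified dashboard events: (player, challenge, event, timestamp).
events = [
    ("alice", "c1", "first_attempt", datetime(2020, 9, 1, 9, 0)),
    ("alice", "c1", "solved",        datetime(2020, 9, 1, 9, 42)),
    ("bob",   "c1", "first_attempt", datetime(2020, 9, 1, 9, 5)),
    ("bob",   "c1", "solved",        datetime(2020, 9, 1, 10, 1)),
]

def solve_times(events):
    """Challenge solve time per (player, challenge): time between the first
    recorded interaction and the accepted flag submission, in minutes."""
    start, out = {}, {}
    for player, chal, ev, ts in sorted(events, key=lambda e: e[3]):
        key = (player, chal)
        if ev == "first_attempt" and key not in start:
            start[key] = ts
        elif ev == "solved" and key in start:
            out[key] = (ts - start[key]).total_seconds() / 60.0
    return out

print(solve_times(events))
```

The heartbeat-based method from the paper would instead rely on periodic liveness signals from the Sifu platform, so the solve time is bounded by the first and last heartbeats observed for a challenge.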
30 pages, 11905 KiB  
Article
Connecting Semantic Situation Descriptions with Data Quality Evaluations—Towards a Framework of Automatic Thematic Map Evaluation
by Timo Homburg
Information 2020, 11(11), 532; https://doi.org/10.3390/info11110532 - 15 Nov 2020
Cited by 2 | Viewed by 2688
Abstract
A continuing question in the geospatial community is the evaluation of fitness for use of map data for a variety of use cases. While data quality metrics and dimensions have been discussed broadly in the geospatial community and have been modelled in semantic web vocabularies, an ontological connection between use cases and data quality expressions allowing reasoning approaches to determine the fitness for use of semantic web map data has not yet been approached. This publication introduces such an ontological model to represent and link situations with geospatial data quality metrics to evaluate thematic map contents. The ontology model constitutes the data storage element of a framework for use-case-based data quality assurance, which creates suggestions for data quality evaluations that are verified and improved upon by end users. The requirement profiles created in this way are associated with semantic web concepts and shared, and therefore contribute to a pool of linked data describing situation-based data quality assessments, which may be used by a variety of applications. The framework is tested using two test scenarios which are evaluated and discussed in a wider context. Full article
(This article belongs to the Section Information Processes)
Show Figures
Figure 1: Thematic Map: OpenRailwayMap (https://www.openrailwaymap.org) showing maximum speeds of railway lines in Germany. Here, a thematic map layer of maximum speeds overlays the general background of OpenStreetMap. Clearly, the focus of this map is to show the maximum speeds of railway lines; therefore, other details of the map except for the existence and completeness of the railway network are less important.
Figure 2: School Accessibility Map: A thematic map representing wheelchair access to school buildings in Potsdam, Germany. The wheelchair access may be provided, not provided, or limited, or there may be no data given. Only one attribute is needed to generate the thematic map as shown above.
Figure 3: Requirement Profile Generation Tool: Highlights the result of the query in Listing 1 on the right and shows an example of "located in the administrative territorial entity" (P131, https://www.wikidata.org/wiki/Property:P131) on the map. The attribute occurs frequently and has more than one unique attribute, but is not entirely unique, i.e., it fulfils the given criteria for a thematic map property.
Figure 4: Requirement Profile Suggestion: A requirement profile is suggested by the system for the use case of School Wheelchair Accessibility according to the workflow described previously. The algorithm detected the dealbreaker property "wheelchair accessibility" and Geometry Validity as priority 1 requirements. The related property "toilets:wheelchair" has been found as a related requirement and is classified as priority 2. Finally, a metadata quality metric, Freshness, has been inferred with a range suggestion (priority 3), and the positional accuracy metric has been added as a general-purpose geometry evaluation metric (priority 4). In this case, no other metrics were deemed eligible and feasible by the system, so no priority 5 metric is visible. The generated requirement profile is applied to the given map and gives aggregated data quality results for schools in the area of Mainz. The requirement profile may now be further refined by the end user.
Figure 5: Complete ontology model: The ontology model contains the requirement profile as its connecting component between situations, geometries, provenance information, data quality metrics, and the description of a thematic map. For each of the components, a standardized vocabulary is used. For requirement profiles and the connection of situations to requirement profiles, the respective vocabulary is stated within this publication.
Figure 6: Example individual implementing the complete ontology model: A school modeled as school_instance1 is connected to its geometry using the GeoSPARQL vocabulary. The school instance is related to the new Thematic Map vocabulary via the isPartOf relation. The Thematic Map instance relates to a set of evaluations which are the results of data quality assessments. In addition, the thematic map relates to one or more requirement profiles which relate to a set of criteria, shown here with the condition of an accessibility constraint. Thematic map instances are classified and may relate to a situational description.
Figure 7: Architecture of the overall system: A data quality service provides semantically annotated data quality metric calculations which may be related to a situational description or thematic map. A geospatial data repository gets geospatial data from the linked open data cloud and combines these geometries with data quality metric calculation results provided as RDF. In the data quality triple store, requirement profiles and links to situational concepts in the linked open data cloud are stored in order to link situational descriptions to data quality metrics.
Figure 8: Data Quality Service: The data quality service provides data quality metrics which can be tested in a web interface. The service exposes these metrics as a web service and as semantic web descriptions which are stored in the Data Quality Triple Store. If new data quality metrics are implemented, they are automatically added to the Data Quality Triple Store, where they can be annotated and linked to requirement profiles. Besides allowing users to provide their own reference data, the service may take a triple store or another web service as a comparison (gold standard) dataset.
Figure 9: Thematic Map School_Rescue exposing the quality of a map highlighting the number of students, from green = good to red = bad. Rescue operators may use this information to estimate rescue efforts in the case of a disaster. Areas with low school coverage should prompt the authorities not to plan the rescue using this map source.
Figure 10: Thematic Map School_Culture exposing the date of the school's inception, including a data quality layer showing good coverage in green and bad coverage in red. Contrary to Figure 9, aside from the positional accuracy, only one attribute, the inception, is of major interest in this thematic map. Further lower-priority data quality metrics may be applied.
Figure 11: Thematic Map Hospital_Capacity exposing the number of beds available in a clinic, including a data quality layer giving a quality estimation of the map. Rescue operators may use this information to plan rescue contingencies in case of a disaster such as a flood. The coverage of the relevant information according to the requirement profile looks usable in this particular case.
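A hedged sketch, using rdflib, of what linking a requirement profile to a situation and a data quality metric could look like in practice; the namespace, class, and property names below are invented for illustration and are not the paper's vocabulary.

```python
from rdflib import Graph, Namespace, Literal, RDF, XSD

# Invented namespace and terms for illustration only; the paper defines its own
# requirement-profile vocabulary and reuses existing data quality ontologies.
EX = Namespace("http://example.org/reqprofile#")
g = Graph()

profile = EX.SchoolAccessibilityProfile
g.add((profile, RDF.type, EX.RequirementProfile))
g.add((profile, EX.appliesToSituation, EX.SchoolWheelchairAccessibility))
g.add((profile, EX.requiresMetric, EX.AttributeCompleteness))
g.add((profile, EX.priority, Literal(1, datatype=XSD.integer)))

print(g.serialize(format="turtle"))
```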
13 pages, 6433 KiB  
Article
An Accurate GNSS-Based Redundant Safe Braking System for Urban Elevated Rail Maglev Trains
by João Batista Pinto Neto, Lucas de Carvalho Gomes, Miguel Elias Mitre Campista and Luís Henrique Maciel Kosmalski Costa
Information 2020, 11(11), 531; https://doi.org/10.3390/info11110531 - 15 Nov 2020
Cited by 2 | Viewed by 2596
Abstract
The association of elevated rail structures and Maglev (magnetic levitation) trains is a promising alternative for urban transportation. Besides being cost-effective in comparison with underground solutions, the Maglev technology is a clean and low-noise means of mass transportation. In this paper, we propose a low-cost automatic braking system for Maglev trains. There is a myriad of sensors and positioning techniques used to improve the accuracy, precision, and stability of train navigation systems, but most of them result in high implementation costs. In this paper, we develop an affordable solution, called the Redundant Autonomous Safe Braking System (RASBS), for the MagLev-Cobra train, a magnetic levitation vehicle developed at the Federal University of Rio de Janeiro (UFRJ), Brazil. The proposed braking system employs GNSS (Global Navigation Satellite System) receivers at the stations and trains, which are connected via an ad-hoc wireless network. The proposed system uses a cooperative error correction algorithm to achieve sub-meter distance precision. We experimentally evaluate the performance of RASBS in the MagLev prototype located at the campus of UFRJ, Brazil. Results show that, using RASBS, the train is able to dynamically set the precise location to start the braking procedure. Full article
Show Figures
Figure 1: MagLev-Cobra prototype (Source: [7]).
Figure 2: Application scenario.
Figure 3: Architecture of RASBS.
Figure 4: GPS static measures versus real distances along the MagLev-Cobra railway track.
Figure 5: Experimental MagLev-Cobra railway track.
Figure 6: RASBS performance evaluation metrics.
Figure 7: RASBS safe distance detection performance.
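A minimal sketch of the cooperative error correction idea: a station with a surveyed position estimates the common GNSS error from its own fix and shares it over the ad-hoc link, and the train subtracts that error from its own fix. The coordinates are toy local east/north values in metres, not output from the RASBS implementation.

```python
def differential_correction(station_known, station_gnss, train_gnss):
    """Cooperative error correction in the spirit of RASBS: the station knows its
    surveyed position, so the difference to its own GNSS fix approximates the
    common error, which the train subtracts from its fix (local ENU metres)."""
    err_e = station_gnss[0] - station_known[0]
    err_n = station_gnss[1] - station_known[1]
    return train_gnss[0] - err_e, train_gnss[1] - err_n

station_known = (0.0, 0.0)        # surveyed station position (assumed reference)
station_gnss = (1.8, -0.9)        # station's raw GNSS fix, shared over the ad-hoc network
train_gnss = (151.6, -1.2)        # train's raw GNSS fix
print(differential_correction(station_known, station_gnss, train_gnss))
```

The corrected train position can then be compared against the known station location to decide where the braking procedure must start.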
21 pages, 7758 KiB  
Article
Context-Aware Wireless Sensor Networks for Smart Building Energy Management System
by Najem Naji, Mohamed Riduan Abid, Driss Benhaddou and Nissrine Krami
Information 2020, 11(11), 530; https://doi.org/10.3390/info11110530 - 15 Nov 2020
Cited by 18 | Viewed by 4598
Abstract
Energy Management Systems (EMS) are indispensable for Smart Energy-Efficient Buildings (SEEB). This paper proposes a Wireless Sensor Network (WSN)-based EMS deployed and tested in a real-world smart building on a university campus. The at-scale implementation enabled the deployment of a WSN mesh topology to evaluate performance in terms of routing capabilities, data collection, and throughput. The proposed EMS uses the Context-Based Reasoning (CBR) model to represent different types of buildings and offices. We implemented a new energy-efficient policy for electrical heater control based on a Finite State Machine (FSM) leveraging context-related events. This demonstrated significant effectiveness in minimizing the processing load, especially when adopting multithreading in data acquisition and control. To optimize sensors’ battery lifetime, we deployed a new Energy Aware Context Recognition Algorithm (EACRA) that dynamically configures sensors to send data under specific conditions and at particular times to avoid redundant data transmissions. EACRA increases the sensors’ battery lifetime by optimizing the number of samples, used modules, and transmissions. Our proposed EMS design can be used as a model to retrofit other kinds of buildings, such as residential and industrial, thus converting them to SEEBs. Full article
(This article belongs to the Special Issue Data Processing in the Internet of Things)
Show Figures
Figure 1: General architecture (A) and communication process (B) of the proposed Energy Management System (EMS).
Figure 2: Information and Communication Technology (ICT) components for data acquisition in Smart Energy-Efficient Buildings (SEEB).
Figure 3: Components of the Wireless Sensor Network (WSN) nodes in the data acquisition: (A) Arduino Nano and its components, (B) the installed actuator with the electric heater, (C–E) gateway devices.
Figure 4: WSN architecture deployment at the university campus building.
Figure 5: (A) Energy consumption of different sensor node configurations; (B) energy consumption of the XBee RF module under different operating modes.
Figure 6: Packet arrival rate using the different gateways under full mesh topology and cluster tree mesh topology.
Figure 7: Link quality between WSN nodes.
Figure 8: Local and remote Received Signal Strength Indicator (RSSI) of WSN nodes.
Figure 9: RSSI value between sensor nodes and gateway.
Figure 10: Knowledge types used in enabling and disabling the heating process.
Figure 11: Groups in the Context-Based Reasoning (CBR) model.
Figure 12: Finite State Machine (FSM) state diagram that controls the Linux Laboratory heater.
Figure 13: Energy Aware Context Recognition Algorithm (EACRA) client algorithm.
Figure A1: WSN nodes displayed on the web application.
Figure A2: List of buildings in the EMS.
Figure A3: Rooms of building 7 where the EMS was implemented.
Figure A4: The application getting an update about the status of the heater.
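A hedged sketch of EACRA-style transmission gating: a sensor skips transmissions that would only repeat its last reported value or that fall inside a time window where the room context makes the reading irrelevant. The threshold, quiet hours, and rule set are invented for illustration; the actual algorithm's conditions are richer.

```python
from datetime import time

def should_transmit(sample, last_sent, now, threshold=0.5, quiet_hours=(time(0), time(6))):
    """Decide whether a sensor node should spend energy on a transmission
    (simplified guess at EACRA's rules): skip redundant readings and stay
    silent during a quiet window that does not cross midnight."""
    if quiet_hours[0] <= now < quiet_hours[1]:
        return False
    return last_sent is None or abs(sample - last_sent) >= threshold

print(should_transmit(21.7, last_sent=21.5, now=time(10, 30)))   # False: change is redundant
print(should_transmit(23.0, last_sent=21.5, now=time(10, 30)))   # True: worth transmitting
```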
17 pages, 3398 KiB  
Article
Optimized Particle Swarm Optimization Algorithm for the Realization of an Enhanced Energy-Aware Location-Aided Routing Protocol in MANET
by Taj-Aldeen Naser Abdali, Rosilah Hassan, Ravie Chandren Muniyandi, Azana Hafizah Mohd Aman, Quang Ngoc Nguyen and Ahmed Salih Al-Khaleefa
Information 2020, 11(11), 529; https://doi.org/10.3390/info11110529 - 15 Nov 2020
Cited by 41 | Viewed by 3864
Abstract
A Mobile Ad-hoc Network (MANET) is a wireless network topology with mobile network nodes and movable communication routes. In addition, the network nodes in MANETs are free to either join or leave the network. Typically, routing in MANETs is multi-hop because of the limited communication range of nodes, and routing protocols have been developed specifically for MANETs. Among them, energy-aware location-aided routing (EALAR) is an efficient reactive MANET routing protocol that has recently been obtained by integrating particle swarm optimization (PSO) with a mutation operation into the conventional LAR protocol. However, the nonuniform mutation operation used in EALAR has some drawbacks, which make EALAR provide insufficient exploration, exploitation, and diversity of solutions. Therefore, this study proposes applying an Optimized PSO (OPSO) that adopts a uniform mutation operation instead of a nonuniform one. The OPSO is integrated into the LAR protocol to enhance all critical performance metrics, including packet delivery ratio, energy consumption, overhead, and end-to-end delay. Full article
(This article belongs to the Special Issue Wireless IoT Network Protocols)
Show Figures
Figure 1: Mobile Ad-hoc Network (MANET) categories.
Figure 2: Expected zone in the LAR protocol.
Figure 3: Optimized Particle Swarm Optimization Location-Aided Routing (OPSO-LAR) mechanism.
Figure 4: Packet delivery ratio of OPSO-LAR, Energy-Aware Location-Aided Routing (EALAR), and Density-Aware Location-Aided Routing (DLAR).
Figure 5: Overhead of OPSO-LAR, EALAR, and DLAR.
Figure 6: E2E delay of OPSO-LAR, EALAR, and DLAR.
Figure 7: Energy per packet of OPSO-LAR, EALAR, and DLAR.
Scheme 1: The first LAR protocol scheme [12].
Scheme 2: LAR protocol Scheme 2 [12].
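A minimal sketch of the uniform mutation operator that distinguishes OPSO from EALAR's nonuniform one: mutated coordinates are redrawn uniformly from their bounds regardless of the iteration count, which keeps exploration pressure constant. The bounds, rate, and particle values are illustrative assumptions.

```python
import random

def uniform_mutation(position, bounds, rate=0.1):
    """Uniform mutation applied to a PSO particle: each coordinate is, with
    probability `rate`, redrawn uniformly from its bounds. A nonuniform operator
    would instead shrink the perturbation as iterations progress."""
    return [random.uniform(lo, hi) if random.random() < rate else x
            for x, (lo, hi) in zip(position, bounds)]

particle = [0.4, 0.9, 0.2]                    # e.g., normalized routing parameters (assumed)
bounds = [(0.0, 1.0)] * len(particle)
print(uniform_mutation(particle, bounds, rate=0.3))
```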
12 pages, 543 KiB  
Article
Semantic Enhanced Distantly Supervised Relation Extraction via Graph Attention Network
by Xiaoye Ouyang, Shudong Chen and Rong Wang
Information 2020, 11(11), 528; https://doi.org/10.3390/info11110528 - 14 Nov 2020
Cited by 3 | Viewed by 2948
Abstract
Distantly Supervised relation extraction methods can automatically extract the relations between entity pairs, which are essential for the construction of a knowledge graph. However, the automatically constructed datasets comprise large amounts of low-quality sentences and noisy words, and the current Distantly Supervised methods ignore these noisy data, resulting in unacceptable accuracy. To mitigate this problem, we present a novel Distantly Supervised approach, SEGRE (Semantic Enhanced Graph attention networks Relation Extraction), for improved relation extraction. Our model first uses word position and entity type information to provide abundant local features and background knowledge. Then it builds dependency trees to remove noisy words that are irrelevant to relations and employs Graph Attention Networks (GATs) to encode syntactic information, which also captures the important semantic features of relational words in each instance. Furthermore, to make our model more robust against noisy words, an intra-bag attention module is used to weight the bag representation and mitigate noise in the bag. Through extensive experiments on the Riedel New York Times (NYT) and Google IISc Distantly Supervised (GIDS) datasets, we demonstrate SEGRE’s effectiveness. Full article
(This article belongs to the Section Artificial Intelligence)
Show Figures
Figure 1: The example uses a dependency tree and a sequence structure to obtain sentence semantics and assist in extracting relations between entities (indicated in red). In (a), the dependency tree can clearly express the dependency relationship between words in the sentence. Specifically, it analyzes and recognizes grammatical components such as "subject-predicate-object" and "fixed adverbial complement" in the sentence. Each node represents a word. In (b), the words in the sentence are read sequentially, usually from left to right, as in LSTM and GRU, while there are also two-way sequential reading forms, such as BiLSTM and BiGRU.
Figure 2: The framework of the proposed Semantic Enhanced Graph attention networks Relation Extraction (SEGRE). SEGRE first encodes each word in the sentence by concatenating word, position, and entity type information. Then the sentence representation is achieved by constructing a graph attention network using a syntactic dependency tree. Next, the bag representation is calculated by weighting sentence embeddings using intra-bag attention. Finally, the bag representation is fed to a softmax classifier to get the relation of the entity pair.
Figure 3: Comparison of Precision–Recall curves. SEGRE achieves higher precision over the entire range of recall than all the baselines on both datasets.
Figure 4: Performance comparison of different SEGRE ablated versions on two datasets.
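A hedged sketch of the intra-bag attention step described in Figure 2: the sentence embeddings in a bag are softmax-weighted by their compatibility with a relation query vector and combined into a bag representation. The scoring function and dimensions are simplified assumptions; the paper's exact formulation may differ.

```python
import numpy as np

def intra_bag_attention(sentence_embs, relation_query):
    """Weight the sentences in a bag by their compatibility with a relation
    query vector and return the attention weights and the weighted bag
    representation (a simplified selective-attention form)."""
    scores = sentence_embs @ relation_query
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()
    return weights, weights @ sentence_embs

rng = np.random.default_rng(0)
bag = rng.normal(size=(4, 8))        # 4 sentence embeddings sharing one entity pair (toy)
query = rng.normal(size=8)           # toy relation query vector
w, bag_repr = intra_bag_attention(bag, query)
print(w, bag_repr.shape)
```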
15 pages, 5476 KiB  
Article
Distributed Simulation with Multi-Agents for IoT in a Retail Pharmacy Facility
by Mohammed Basingab
Information 2020, 11(11), 527; https://doi.org/10.3390/info11110527 - 13 Nov 2020
Cited by 2 | Viewed by 2348
Abstract
Nowadays, internet of things (IoT) technology is considered one of the key future technologies. The adoption of such technology is receiving quick attention from many industries as competitive pressures inspire them to move forward and invest. As technologies such as IoT continue to advance, there is a vital need for an approach to identify their viability. This research proposes the adoption of IoT technology and the use of a simulation paradigm to capture the complexity of a system, offer reliable and continuous perceptions into its present and likely future state, and evaluate the economic feasibility of such adoption. A case study of one of the largest pharmacy retail chains is presented. IoT devices are suggested to be used to remotely monitor the failures of a geographically distributed system of refrigeration units. A multi-agent distributed system is proposed to simulate the operational behavior of the refrigerators and calculate the return on investment (ROI) of the proposed IoT implementation. Full article
(This article belongs to the Special Issue Distributed Simulation 2020)
Show Figures
Figure 1: Survival function graph for the refrigerator.
Figure 2: Probability distribution for the refrigerator failure rate.
Figure 3: Statechart diagram.
Figure 4: Class diagram.
Figure 5: Sequence diagram.
Figure 6: The structure of each agent in the agent-based simulation model (ABSM).
Figure 7: The main level in the ABSM.
Figure 8: Discrete-event simulation (DES) model in the manufacturing agent.
Figure 9: The hybrid system.
Figure 10: Determining the out-of-service time in the ABSM.
Figure 11: Model animation.
Figure 12: ABSM results.
Figure 13: Optimization experiment and its results.
Figure 14: ROI values for different reduction rates of failures.
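A minimal sketch of the kind of ROI estimate the simulation supports: spoilage losses with and without remote failure detection are compared against an assumed investment. All figures below (unit counts, failure probability, detection rate, losses, and costs) are invented for illustration, not results from the case study.

```python
import random

def simulate_losses(n_units, failure_prob, loss_per_failure, detection_rate=0.0, runs=1000):
    """Expected annual spoilage loss across refrigeration units; remotely
    detected failures are assumed to be caught before stock is lost."""
    total = 0.0
    for _ in range(runs):
        for _ in range(n_units):
            if random.random() < failure_prob and random.random() >= detection_rate:
                total += loss_per_failure
    return total / runs

baseline = simulate_losses(200, 0.08, 4000)                      # no IoT monitoring
with_iot = simulate_losses(200, 0.08, 4000, detection_rate=0.7)  # assumed detection rate
investment = 30000.0                                             # assumed device + platform cost
roi = (baseline - with_iot - investment) / investment
print(round(roi, 2))
```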
16 pages, 806 KiB  
Article
Evaluating the Investment Climate for China’s Cross-Border E-Commerce: The Application of Back Propagation Neural Network
by Yi Lei and Xiaodong Qiu
Information 2020, 11(11), 526; https://doi.org/10.3390/info11110526 - 12 Nov 2020
Cited by 6 | Viewed by 2908
Abstract
China’s cross-border e-commerce is about to usher in a new golden age of development. Based on seven countries along the “Belt and Road” (the Russian Federation, Mongolia, Ukraine, Kazakhstan, Tajikistan, Kyrgyzstan and Belarus), an evaluation system of cross-border e-commerce investment climate indicators is established in this study. The entropy method is applied twice to comprehensively evaluate the investment climate of the seven countries on the basis of five years of panel data; the countries are then classified into politics-oriented and industry-oriented groups, and the indicator weights for each category are analyzed. On this basis, cross-border e-commerce investors are advised to prioritize industry-oriented countries. A back propagation neural network is used to map the existing data and, in combination with a genetic algorithm, to optimize the evaluation index system. The aim is to find the combination of evaluation indices that yields the best overall score, to make the established evaluation index system applicable to other countries, and to provide a reference for cross-border e-commerce investors when assessing the investment climate of each country. This study offers important practical implications for the sustainable development of China’s cross-border e-commerce environment. Full article
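The entropy weighting step mentioned in the abstract can be sketched in a few lines: indicators that differentiate the countries more strongly (lower entropy) receive larger weights. This is a generic illustration of the entropy method, not the authors' exact indicator system or data.

```python
import numpy as np

def entropy_weights(X: np.ndarray) -> np.ndarray:
    """Entropy-method weights for an (n_countries, n_indicators) decision matrix.

    Assumes all indicators are benefit-type and already min-max normalized to [0, 1].
    """
    n, _ = X.shape
    # Proportion of each country under each indicator (epsilon avoids log(0))
    P = (X + 1e-12) / (X + 1e-12).sum(axis=0)
    entropy = -np.sum(P * np.log(P), axis=0) / np.log(n)   # e_j in [0, 1]
    divergence = 1.0 - entropy                              # degree of differentiation
    return divergence / divergence.sum()                    # indicator weights

scores = np.random.rand(7, 12)          # 7 countries x 12 indicators (placeholder data)
w = entropy_weights(scores)
composite = scores @ w                  # comprehensive evaluation score per country
print(w, composite)
```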
(This article belongs to the Special Issue Personalized Visual Recommendation for E-Commerce)
Figure 1: Structure diagram of the back propagation (BP) neural network model.
Figure 2: (a) The predicted output; (b) the error (Source: own work).
13 pages, 348 KiB  
Article
Graph Convolutional Neural Network for a Pharmacy Cross-Selling Recommender System
by Franz Hell, Yasser Taha, Gereon Hinz, Sabine Heibei, Harald Müller and Alois Knoll
Information 2020, 11(11), 525; https://doi.org/10.3390/info11110525 - 11 Nov 2020
Cited by 11 | Viewed by 5720
Abstract
Recent advancements in deep neural networks for graph-structured data have led to state-of-the-art performance in recommender system benchmarks. Adapting these methods to pharmacy product cross-selling recommendation tasks with a million products and hundreds of millions of sales remains a challenge, due to the intricate medical and legal properties of pharmaceutical data. To tackle this challenge, we developed a graph convolutional network (GCN) algorithm called PharmaSage, which uses graph convolutions to generate embeddings for pharmacy products that are then used in a downstream recommendation task. In the underlying graph, we incorporate cross-sales information from the sales transactions within the graph structure, as well as product information as node features. Via modifications to the sampling involved in the network optimization process, we address a common phenomenon in recommender systems, the so-called popularity bias: popular products are frequently recommended, while less popular items are often neglected and seldom or never recommended. We deployed PharmaSage using real-world sales data and trained it on 700,000 articles represented as nodes in a graph whose edges represent approximately 100 million sales transactions. By exploiting pharmaceutical product properties, such as indications, ingredients, and adverse effects, and combining these with large sales histories, we achieved better results than with a purely statistics-based approach. To our knowledge, this is the first application of deep graph embeddings to pharmacy product cross-selling recommendation at this scale to date. Full article
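Once product embeddings are available, the recommendation step reduces to a nearest-neighbor search under cosine similarity, as sketched below. The embeddings here are random placeholders; PharmaSage's actual GCN training and re-ranking are not reproduced.

```python
import numpy as np

def top_k_recommendations(embeddings: np.ndarray, product_idx: int, k: int = 15) -> np.ndarray:
    """Rank products by cosine similarity to the query product's embedding."""
    normed = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    scores = normed @ normed[product_idx]          # cosine similarity to every product
    scores[product_idx] = -np.inf                  # do not recommend the product itself
    return np.argsort(scores)[::-1][:k]            # indices of the k most similar products

# Placeholder embeddings standing in for the GCN output (1,000 products x 64 dimensions)
emb = np.random.randn(1000, 64)
print(top_k_recommendations(emb, product_idx=42, k=15))
```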
(This article belongs to the Special Issue Information Retrieval and Social Media Mining)
Figure 1: The graph convolutional neural network uses localized convolutions on aggregated neighborhood vectors to learn product embeddings. Shown is the two-layer graph neural network that computes the final embeddings of nodes A and B from the previous-layer representations of A and B and of their respective neighborhoods. Different colors denote different nodes. The recommendation score between two products A and B is then computed as the cosine similarity between their final embedding vectors.
Figure 2: Popularity bias and re-ranking. (a) Overall sales probability and cross-sales (CS) probability (y1-axis) and the node degree (number of neighbors) (y3-axis) for all products. The average cross-sales rank (y2-axis) is shown for the initial cross-sales statistics (solid black line) and the probability-based re-ranking (PBR) approach (black dotted line). Note that the average rank is proportional to the popularity of the respective product in the initial cross-sales statistics, but this relation disappears with the PBR approach. (b) The probability density of edge weights (kernel density estimation, KDE) shows the distribution of edge weights in the initial cross-sales-based graph as well as in the PBR-based graph.
Figure 3: (a) Cross-selling rank distribution for a top-selling product and a product from the long tail, based on raw cross-sales and on the PBR. (b) Average probability for products (grouped into 10 quantiles) to be among the top cross-selling articles used for positive sampling in the triplet loss, for both approaches. "New" denotes products that are not included in positive sampling in the raw cross-sales approach but are relevant in the PBR approach, and shows their average positive-sampling probability. These products additionally show up among the top-ranked cross-selling products when the PBR is applied. The PBR introduces around 75% more products, with an average sampling probability of 36.6%. Quantile selection of products for the first 10 product bins in both approaches is based on raw cross-selling statistics (red).
Figure 4: Average recommendation quality among the top 15 recommended articles for 25 evaluated products. Recommendations are computed on the graph incorporating raw cross-selling statistics and on the graph in which the edge weights have been recomputed using the probability-based approach (PBR). PharmaSage is optimized using the PBR approach as input.
21 pages, 2455 KiB  
Article
GPRS Sensor Node Battery Life Span Prediction Based on Received Signal Quality: Experimental Study
by Joseph Habiyaremye, Marco Zennaro, Chomora Mikeka, Emmanuel Masabo, Santhi Kumaran and Kayalvizhi Jayavel
Information 2020, 11(11), 524; https://doi.org/10.3390/info11110524 - 11 Nov 2020
Cited by 7 | Viewed by 3905
Abstract
Nowadays, with the evolution of the Internet of Things (IoT), building a network of sensors to measure data from remote locations requires careful planning of many parameters, including power consumption. Many communication technologies, such as Wi-Fi, Bluetooth, Zigbee, LoRa, Sigfox, and GSM/GPRS, are used depending on the application, and each application has requirements such as communication range, power consumption, and the nature of the data to be transmitted. In some places, especially hilly areas like Rwanda where GSM connectivity is already available, GSM/GPRS may be the best choice for IoT applications. Energy consumption is a major challenge for sensor nodes powered by batteries, as the lifetime of the node and of the network depends on the battery's state of charge. In this paper, we focus on static sensor nodes communicating over the GPRS protocol. We measured the current consumption of a sensor node in different locations together with the corresponding received signal quality, and we experimentally derived a data-driven mathematical model for estimating the GSM/GPRS sensor node battery lifetime from the received signal strength indicator (RSSI). This outcome helps to predict GPRS sensor node lifetime, replacement intervals, and dynamic handover, which in turn supports uninterrupted data service. The model can be deployed in various remote WSN- and IoT-based applications, such as forest or volcano monitoring. Our research has shown convincing results; for example, a 30 dBm reduction in RSSI doubles the current consumption of the node's radio unit. Full article
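To illustrate how the reported trend can feed a lifetime estimate, the sketch below assumes the radio current doubles for every 30 dBm drop in RSSI and divides the battery capacity by the duty-cycled average current. The reference current, duty cycle, and battery capacity are illustrative assumptions, not the paper's fitted model parameters.

```python
def radio_current_ma(rssi_dbm: float,
                     ref_rssi_dbm: float = -53.0,
                     ref_current_ma: float = 40.0) -> float:
    """Radio current assuming it doubles for every 30 dBm drop in RSSI.

    The reference point (-53 dBm, 40 mA) is an illustrative placeholder, not a measured value.
    """
    return ref_current_ma * 2.0 ** ((ref_rssi_dbm - rssi_dbm) / 30.0)

def battery_life_days(rssi_dbm: float,
                      battery_mah: float = 2000.0,
                      sleep_current_ma: float = 0.5,
                      tx_duty_cycle: float = 0.01) -> float:
    """Rough lifetime estimate for a duty-cycled GPRS node at a given signal quality."""
    avg_ma = tx_duty_cycle * radio_current_ma(rssi_dbm) + (1 - tx_duty_cycle) * sleep_current_ma
    return battery_mah / avg_ma / 24.0

for rssi in (-53, -63, -73, -83):
    print(rssi, round(battery_life_days(rssi), 1))
```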
(This article belongs to the Special Issue Wireless IoT Network Protocols)
Figure 1: GSM sensor node in a GSM cell.
Figure 2: System block diagram.
Figure 3: Current acquisition process.
Figure 4: Current acquisition process LabVIEW graph.
Figure 5: System configuration.
Figure 6: Picture of the experimental setup.
Figure 7: RSSI acquisition process.
Figure 8: RSSI acquisition LabVIEW graph.
Figure 9: System schematic diagram.
Figure 10: Sensor node block diagram.
Figure 11: Current transition process.
Figure 12: Current consumption in different modes.
Figure 13: Current consumption in location 2: RSSI = −83 dBm.
Figure 14: Current consumption in location 3: RSSI = −53 dBm.
Figure 15: Current consumption in location 1: RSSI = −75 dBm.
Figure 16: Current consumption in location 4: RSSI = −73 dBm.
Figure 17: Current consumption in location 5: RSSI = −65 dBm.
Figure 18: Current consumption in location 6: RSSI = −63 dBm.
Figure 19: RSSI vs. current consumption.
Figure 20: Mathematical model.
15 pages, 416 KiB  
Article
A Method of Ultra-Large-Scale Matrix Inversion Using Block Recursion
by HouZhen Wang, Yan Guo and HuanGuo Zhang
Information 2020, 11(11), 523; https://doi.org/10.3390/info11110523 - 10 Nov 2020
Cited by 7 | Viewed by 4801
Abstract
Ultra-large-scale matrix inversion is a fundamental operation in numerous domains, owing to the growth of big data and matrix applications. Taking cryptography as an example, solving ultra-large-scale linear equations over finite fields is important in many cryptanalysis schemes. However, inverting matrices of extremely high order, such as several million, is challenging, and the need to do so has become increasingly urgent. Hence, we propose a parallel distributed block recursive computing method, based on Strassen's method, that can process matrices at a significantly increased scale, and we describe the corresponding algorithm in detail. Comparative experimental results show the efficiency and superiority of our method, with which matrices of up to 140,000 dimensions can be processed in a supercomputing center. Full article
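The block-recursion idea can be illustrated with a small real-valued sketch based on the Schur complement; the paper's method additionally works over finite fields GF(2^k) and distributes the recursion across nodes, neither of which is shown here.

```python
import numpy as np

def block_inverse(M: np.ndarray, threshold: int = 64) -> np.ndarray:
    """Recursive 2x2 block inversion via the Schur complement.

    Real-valued sketch of the block-recursion idea; not the paper's finite-field,
    distributed implementation.
    """
    n = M.shape[0]
    if n <= threshold:
        return np.linalg.inv(M)
    h = n // 2
    A, B = M[:h, :h], M[:h, h:]
    C, D = M[h:, :h], M[h:, h:]
    A_inv = block_inverse(A, threshold)
    S = D - C @ A_inv @ B                      # Schur complement of A
    S_inv = block_inverse(S, threshold)
    top_left = A_inv + A_inv @ B @ S_inv @ C @ A_inv
    top_right = -A_inv @ B @ S_inv
    bottom_left = -S_inv @ C @ A_inv
    return np.block([[top_left, top_right], [bottom_left, S_inv]])

# Well-conditioned test matrix to verify the recursion
M = np.random.rand(512, 512) + 512 * np.eye(512)
assert np.allclose(block_inverse(M) @ M, np.eye(512), atol=1e-6)
```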
(This article belongs to the Special Issue Cyberspace Security, Privacy & Forensics)
Figure 1: Matrix partition.
Figure 2: The impact of different thread counts on G-J's runtime.
Figure 3: Comparison of G-J and NovelIn, and NovelIn's performance, on GF(2^8).
Figure 4: Comparison of G-J and NovelIn, and NovelIn's performance, on GF(2^16).
Figure 5: Comparison of G-J and NovelIn, and NovelIn's performance, on GF(2^32).
17 pages, 1818 KiB  
Article
The Challenges and Opportunities to Formulate and Integrate an Effective ICT Policy at Mountainous Rural Schools of Gilgit-Baltistan
by Sabit Rahim, Tehmina Bibi, Sadruddin Bahadur Qutoshi, Shehla Gul, Yasmeen Gul, Naveed Ali Khan Kaim Khani and Muhammad Shahid Malik
Information 2020, 11(11), 522; https://doi.org/10.3390/info11110522 - 9 Nov 2020
Cited by 1 | Viewed by 3991
Abstract
The study, through the lens of school principals’ views, investigates the challenges and opportunities in formulating an information and communications technology (ICT) policy and integrating it into teaching and learning practices at schools in the mountainous rural areas of Gilgit-Baltistan (GB). This quantitative study draws its data from three different educational systems (regional, national, and international) operating in GB, Pakistan. Questionnaires administered to principals and reviews of policy documents were used to collect the required data, which were analyzed using SPSS. The results show that both groups (male and female) strongly agree that an ICT policy should be formulated and integrated into teaching and learning in order to improve education at the school level. The results also show that school heads face a number of challenges (e.g., lack of infrastructure, finance, Internet access, technical staff, time, awareness, and training facilities) in formulating an ICT policy and integrating it into teaching and learning. The results further reveal that the majority of the schools lack an ICT policy despite having competent principals. The research therefore recommends that a school-level ICT policy be developed and integrated into teaching and learning practices to create an environment of powerful learning at schools and to meet the needs and demands of 21st-century education. Full article
(This article belongs to the Special Issue ICT Enhanced Social Sciences and Humanities)
Figure 1: Gender-wise age of respondents.
Figure 2: ICT training and qualifications.
Figure 3: Principals' ICT competence in hardware, software, communication, and teaching and learning tools.
Figure 4: ICT integration challenges at the school level.
Figure 5: School-level ICT policy formulation challenges.
31 pages, 517 KiB  
Review
Wearable Sensors for Monitoring and Preventing Noncommunicable Diseases: A Systematic Review
by Annica Kristoffersson and Maria Lindén
Information 2020, 11(11), 521; https://doi.org/10.3390/info11110521 - 6 Nov 2020
Cited by 18 | Viewed by 5283
Abstract
Ensuring healthy lives and promoting well-being for all at all ages are among the goals of Agenda 2030 for Sustainable Development. Considering that noncommunicable diseases (NCDs) are the leading cause of death worldwide, reducing NCD mortality is an important target. Reaching this goal requires means for detecting and reacting to warning signals, and here real-time remote health monitoring has great potential. This article provides a systematic review of the use of wearable sensors for the monitoring and prevention of NCDs. It not only provides in-depth information about the retrieved articles but also discusses examples of studies assessing warning signals that, if left untreated, may result in serious health conditions such as stroke and cardiac arrest. One finding is that, although many good examples of wearable sensor systems for monitoring and controlling NCDs are presented, many issues remain to be solved. One major issue is the lack of testing on sociodemographically representative people. Even though substantial work remains, wearable sensor systems have great potential in the battle against NCDs by providing the means to diagnose, monitor and prevent them. Full article
(This article belongs to the Special Issue Ubiquitous Sensing for Smart Health Monitoring)
Figure 1: The article selection process for the April 2019 search [4].
Figure 2: The article selection process for the August 2020 search.
13 pages, 2311 KiB  
Article
Two-Dimensional Jamming Recognition Algorithm Based on the Sevcik Fractal Dimension and Energy Concentration Property for UAV Frequency Hopping Systems
by Rui Xue, Jing Liu and Huaiyu Tang
Information 2020, 11(11), 520; https://doi.org/10.3390/info11110520 - 6 Nov 2020
Cited by 6 | Viewed by 2420
Abstract
Unmanned aerial vehicle frequency hopping (UAV-FH) systems face multiple types of jamming, and no single anti-jamming method can cope with all of them. The jamming signals in the environment of a UAV-FH system must therefore be identified and classified, and anti-jamming measures selected according to the jamming type. To this end, the proposed algorithm first extracts the Sevcik fractal dimension from the frequency domain (SFDF) and the degree of energy concentration from the fractional Fourier domain for the various types of jamming. These parameters are then combined into a two-dimensional feature vector used for classification and recognition. Lastly, a binary-tree-based support vector machine (BT-SVM) multi-classifier is used to classify the jamming signal. Simulation results show that the feature parameters extracted by the proposed method are well separated and highly stable. Compared with the existing box-dimension recognition algorithm, the new algorithm not only identifies the jamming type quickly and accurately but also has a clear advantage when the jamming-to-noise ratio (JNR) is low. Full article
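A minimal sketch of the SFDF feature is shown below: the magnitude spectrum is normalized to the unit square and its Sevcik fractal dimension D = 1 + ln(L)/ln(2(N−1)) is computed from the length L of the normalized curve. The toy signals are assumptions for illustration only; the paper's simulation setup and BT-SVM classifier are not reproduced.

```python
import numpy as np

def sevcik_fd(y: np.ndarray) -> float:
    """Sevcik fractal dimension of a 1-D curve normalized to the unit square."""
    n = len(y)
    x_star = np.linspace(0.0, 1.0, n)
    y_star = (y - y.min()) / (y.max() - y.min() + 1e-12)
    length = np.sum(np.hypot(np.diff(x_star), np.diff(y_star)))  # length of the normalized curve
    return 1.0 + np.log(length) / np.log(2.0 * (n - 1))

def sfdf(signal: np.ndarray) -> float:
    """Sevcik fractal dimension computed from the magnitude spectrum (frequency domain)."""
    spectrum = np.abs(np.fft.rfft(signal))
    return sevcik_fd(spectrum)

# Toy comparison: wideband-noise-like jamming vs. a single tone (illustrative only)
t = np.arange(4096) / 4096
print(sfdf(np.random.randn(4096)), sfdf(np.sin(2 * np.pi * 200 * t)))
```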
Figure 1: Block diagram of the unmanned aerial vehicle frequency hopping (UAV-FH) system.
Figure 2: Sevcik fractal dimension from the frequency domain (SFDF) of six kinds of interference as the jamming-to-noise ratio (JNR) increases.
Figure 3: Changes of broadband noise jamming (BNJ) and linear frequency modulation jamming (LFM) as the JNR increases.
Figure 4: Characteristic distribution diagram of the six types of jamming.
Figure 5: Block diagram of interference recognition based on two-dimensional features.
Figure 6: Jamming recognition process based on two-dimensional features.
Figure 7: Recognition rate of the algorithm proposed in this study and of the algorithm in [19]. (a) Recognition rate for broadband noise jamming (BNJ) and linear frequency modulation jamming (LFM); (b) recognition rate for narrowband noise jamming (NNJ) and single-tone jamming (STJ); (c) recognition rate for multi-tone jamming (MTJ) and pulse jamming (PJ).
13 pages, 460 KiB  
Article
Random Forest with Sampling Techniques for Handling Imbalanced Prediction of University Student Depression
by Siriporn Sawangarreerak and Putthiporn Thanathamathee
Information 2020, 11(11), 519; https://doi.org/10.3390/info11110519 - 5 Nov 2020
Cited by 28 | Viewed by 5038
Abstract
In this work, we propose a combined sampling technique to improve the performance of imbalanced classification of university student depression data. In our experiments, we found that combining random oversampling with the Tomek links undersampling method generated a relatively balanced depression dataset without losing significant information. Here, the random oversampling technique was used to sample the minority class and balance the number of samples between the classes. The Tomek links technique was then used to undersample the data by removing depression records considered less relevant or noisy. The relatively balanced dataset was classified with a random forest. The results show an overall accuracy of 94.17% in predicting adolescent depression, outperforming each individual sampling technique. Moreover, our proposed method was tested on another dataset to assess its external validity; the predictive accuracy on this dataset was 93.33%. Full article
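The combined sampling scheme maps directly onto standard tooling; the sketch below chains random oversampling, Tomek links cleaning, and a random forest using the imbalanced-learn pipeline. The synthetic data merely stands in for the depression questionnaire features and is not the study's dataset.

```python
from imblearn.over_sampling import RandomOverSampler
from imblearn.under_sampling import TomekLinks
from imblearn.pipeline import Pipeline
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.datasets import make_classification

# Synthetic imbalanced data standing in for the depression questionnaire features
X, y = make_classification(n_samples=1000, n_features=20, weights=[0.9, 0.1], random_state=0)

pipeline = Pipeline([
    ("oversample", RandomOverSampler(random_state=0)),  # balance classes by duplicating minority samples
    ("clean", TomekLinks()),                            # remove borderline/noisy samples forming Tomek links
    ("forest", RandomForestClassifier(n_estimators=200, random_state=0)),
])

# Resampling is applied only during fit, so cross-validation scores stay honest
print(cross_val_score(pipeline, X, y, cv=5, scoring="accuracy").mean())
```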
(This article belongs to the Special Issue Data Modeling and Predictive Analytics)
Figure 1: The proposed method to handle imbalanced depression data.
16 pages, 1058 KiB  
Article
Urdu Documents Clustering with Unsupervised and Semi-Supervised Probabilistic Topic Modeling
by Mubashar Mustafa, Feng Zeng, Hussain Ghulam and Hafiz Muhammad Arslan
Information 2020, 11(11), 518; https://doi.org/10.3390/info11110518 - 5 Nov 2020
Cited by 9 | Viewed by 6150
Abstract
Document clustering groups documents according to certain semantic features. Topic models have a rich semantic structure and considerable potential for helping users understand document corpora. Unfortunately, because such models are purely unsupervised, this potential is stymied on text documents of an overlapping nature. To solve this problem, several semi-supervised models have been proposed for English. However, no such work is available for the low-resource language Urdu. Document clustering is therefore a challenging task in Urdu, which has its own morphology, syntax and semantics. In this study, we propose a semi-supervised framework for clustering Urdu documents that deals with the challenges of Urdu morphology. The proposed model combines pre-processing techniques, a seeded LDA model and Gibbs sampling; we name it seeded Urdu Latent Dirichlet Allocation (Seeded-ULDA). We apply the proposed model and other methods to Urdu news datasets for categorization. Two conditions are considered for document clustering: a “dataset without overlapping”, in which all classes are distinct, and a “dataset with overlapping”, in which the categories overlap and the classes are connected to each other. The aim of this study is threefold. First, it shows that unsupervised models (Latent Dirichlet Allocation (LDA), non-negative matrix factorization (NMF) and K-means) give satisfying results on the dataset without overlapping. Second, it shows that these unsupervised models do not perform well on the dataset with overlapping, because on this dataset they find topics that are neither entirely meaningful nor effective in extrinsic tasks. Third, our proposed semi-supervised model, Seeded-ULDA, performs well on both datasets because it is a straightforward and effective way to instruct topic models to find topics of specific interest. The paper shows that the semi-supervised model, Seeded-ULDA, provides significant improvements compared to the unsupervised algorithms. Full article
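One common way to approximate a seeded topic model is to bias the Dirichlet prior over the topic–word distributions toward the seed words, as sketched below with gensim. This is a generic illustration, not the authors' Gibbs-sampling implementation, and the toy English tokens merely stand in for pre-processed Urdu tokens and the seed lists shown in Figure 5.

```python
import numpy as np
from gensim import corpora
from gensim.models import LdaModel

# Toy tokenized "documents" and seed words per topic (placeholders for Urdu data)
docs = [["cricket", "match", "team"], ["election", "vote", "party"], ["match", "vote"]]
seed_words = {0: ["cricket", "match", "team"], 1: ["election", "vote", "party"]}
num_topics = 2

dictionary = corpora.Dictionary(docs)
corpus = [dictionary.doc2bow(d) for d in docs]

# Asymmetric eta prior: boost the prior probability of each topic's seed words
eta = np.full((num_topics, len(dictionary)), 0.01)
for topic, words in seed_words.items():
    for w in words:
        eta[topic, dictionary.token2id[w]] = 1.0

model = LdaModel(corpus=corpus, id2word=dictionary, num_topics=num_topics,
                 eta=eta, passes=20, random_state=0)
print(model.print_topics())
```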
(This article belongs to the Special Issue Natural Language Processing for Social Media)
Figure 1: Generative model of LDA [30].
Figure 2: Proposed Seeded-ULDA framework.
Figure 3: Urdu diacritics example.
Figure 4: List of some stop words of the Urdu language.
Figure 5: First ten words of the seeded topic for each class.
Figure 6: Accuracy of LDA, NMF and K-means for each class on the dataset without overlapping.
Figure 7: Performance of LDA, NMF and K-means measured by Rand index, precision, recall and F-measure on the dataset without overlapping.
Figure 8: Accuracy of Seeded-ULDA and LDA on the dataset with overlapping.
Figure 9: Performance of Seeded-ULDA and LDA measured by Rand index, precision, recall and F-measure on the dataset with overlapping.
19 pages, 2957 KiB  
Article
Heracles: A Context-Based Multisensor Sensor Data Fusion Algorithm for the Internet of Things
by Flávia C. Delicato, Tayssa Vandelli, Mario Bonicea and Claudio M. de Farias
Information 2020, 11(11), 517; https://doi.org/10.3390/info11110517 - 4 Nov 2020
Cited by 1 | Viewed by 2510
Abstract
In the Internet of Things (IoT), extending the average battery duration of devices is of paramount importance, since it promotes uptime without intervention in the environment, which can be undesirable or costly. In the IoT, the system’s functionalities are distributed among devices that (i) collect, (ii) transmit and (iii) apply algorithms to process and analyze data. A widely adopted technique for increasing the lifetime of an IoT system is using data fusion on the devices that process and analyze data. There are already several works proposing data fusion algorithms for the context of wireless sensor networks and IoT. However, most of them consider that application requirements (such as the data sampling rate and the data range of the events of interest) are previously known, and the solutions are tailored for a single target application. In the context of a smart city, we envision that the IoT will provide a sensing and communication infrastructure to be shared by multiple applications, that will make use of this infrastructure in an opportunistic and dynamic way, with no previous knowledge about its requirements. In this work, we present Heracles, a new data fusion algorithm tailored to meet the demands of the IoT for smart cities. Heracles considers the context of the application, adapting to the features of the dataset to perform the data analysis. Heracles aims at minimizing data transmission to save energy while generating value-added information, which will serve as input for decision-making processes. Results of the performed evaluation show that Heracles is feasible, enhances the performance of decision methods and extends the system lifetime. Full article
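As a loose illustration of context-adaptive, in-node fusion (not Heracles' actual classification or decision logic), the sketch below inspects the shape of a sample of readings and chooses a summary statistic accordingly, so that only a single fused value would need to be transmitted.

```python
import numpy as np
from scipy.stats import kurtosis

def fuse(readings: np.ndarray) -> float:
    """Context-aware in-node fusion: choose the summary statistic from the data's shape.

    Purely illustrative decision rule; the paper describes Heracles' actual logic.
    """
    excess = kurtosis(readings)          # Fisher definition: 0 for a normal distribution
    if excess < -0.5:                    # flat (platykurtic) sample: median is a robust summary
        return float(np.median(readings))
    if excess > 0.5:                     # heavy-tailed sample: trimmed mean damps outliers
        lo, hi = np.percentile(readings, [10, 90])
        return float(readings[(readings >= lo) & (readings <= hi)].mean())
    return float(readings.mean())        # near-normal sample: plain mean

print(fuse(np.random.normal(25.0, 0.5, size=100)))   # e.g. 100 temperature readings
```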
(This article belongs to the Special Issue Data Processing in the Internet of Things)
Figure 1: Sets classified as platykurtic by Hephaestus: (a) monomodal symmetry; (b) multimodal symmetry.
Figure 2: Pseudocode.
Figure 3: Average pseudocode.
Figure 4: Lifetime.
Figure 5: Comparing Heracles' and Hephaestus' phenomena discovery.
Figure 6: Accuracy scalability.
Figure 7: System lifetime scalability.
15 pages, 585 KiB  
Article
Botnet Defense System: Concept, Design, and Basic Strategy
by Shingo Yamaguchi
Information 2020, 11(11), 516; https://doi.org/10.3390/info11110516 - 4 Nov 2020
Cited by 25 | Viewed by 4685
Abstract
This paper proposes a new kind of cyber-security system, named Botnet Defense System (BDS), which defends an Internet of Things (IoT) system against malicious botnets. The concept of the BDS is to “fight fire with fire”: its distinguishing feature is that it uses white-hat botnets to fight malicious botnets. A BDS consists of four components: Monitor, Strategy Planner, Launcher, and Command and Control (C&C) server. The Monitor component watches over a target IoT system. If it detects a malicious botnet, the Strategy Planner component devises a strategy against the botnet. Based on the planned strategy, the Launcher component sends white-hat worms into the IoT system and constructs a white-hat botnet. The C&C server component commands and controls the white-hat botnet to exterminate the malicious botnet. Strategy studies are essential to produce the intended results, and we propose three basic strategies for launching white-hat worms: All-Out, Few-Elite, and Environment-Adaptive. We evaluated the BDS and the proposed strategies by simulating an agent-oriented Petri net model representing the battle between Mirai botnets and white-hat botnets. The results show that the Environment-Adaptive strategy is the best: it reduced the number of white-hat worms needed to 38.5%, almost without changing the extermination rate for Mirai bots. Full article
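The component flow described above can be pictured with a small sketch in which the Monitor's snapshot drives the Strategy Planner's choice of launch strategy. The data fields, thresholds, and worm counts below are placeholders; the paper defines the actual strategies and their conditions.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class NetworkState:
    """Snapshot produced by the Monitor component (fields are illustrative)."""
    num_devices: int
    num_mirai_bots: int
    worm_capability_sufficient: bool

def plan_strategy(state: NetworkState) -> Tuple[str, int]:
    """Strategy Planner sketch: pick a launch strategy and a number of white-hat worms.

    The selection rule and worm counts are placeholders, not the paper's exact logic.
    """
    if state.num_mirai_bots == 0:
        return "none", 0
    if state.worm_capability_sufficient:
        return "Few-Elite", max(1, state.num_mirai_bots // 10)
    if state.num_mirai_bots > state.num_devices // 2:
        return "All-Out", state.num_devices
    return "Environment-Adaptive", state.num_mirai_bots

strategy, worms = plan_strategy(NetworkState(num_devices=1000, num_mirai_bots=120,
                                             worm_capability_sufficient=False))
print(strategy, worms)   # the Launcher would then deploy this many white-hat worms
```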
(This article belongs to the Special Issue Security and Privacy in the Internet of Things)
Figure 1: A PN² model representing a battle between Mirai and the white-hat worm.
Figure 2: The state after the white-hat worm infected device2. This prevents Mirai from infecting device2.
Figure 3: System configuration of the BDS.
Figure 4: An application example of the All-Out launch strategy L_All-Out. (a) State when the BDS detected a Mirai botnet; (b) state after launching.
Figure 5: An application example of the Few-Elite launch strategy L_Few-Elite. (a) State after launching a limited number of worms when the worm's capability is sufficient; (b) state after launching the upper number when the worm's capability is insufficient.
Figure 6: An application example of the Environment-Adaptive launch strategy L_Env-Adaptive.
Figure 7: Illustration of translating a network into a PN² model.
16 pages, 248 KiB  
Article
Social Capital on Social Media—Concepts, Measurement Techniques and Trends in Operationalization
by Flora Poecze and Christine Strauss
Information 2020, 11(11), 515; https://doi.org/10.3390/info11110515 - 4 Nov 2020
Cited by 11 | Viewed by 9075
Abstract
The introduction of the Web 2.0 era and the associated emergence of social media platforms opened an interdisciplinary research domain, wherein a growing number of studies are focusing on the interrelationship of social media usage and perceived individual social capital. The primary aim of the present study is to introduce the existing measurement techniques of social capital in this domain, explore trends, and offer promising directions and implications for future research. Applying the method of a scoping review, a set of 80 systematically identified scientific publications were analyzed, categorized, grouped and discussed. Focus was placed on the employed viewpoints and measurement techniques necessary to tap into the possible consistencies and/or heterogeneity in this domain in terms of operationalization. The results reveal that multiple views and measurement techniques are present in this research area, which might raise a challenge in future synthesis approaches, especially in the case of future meta-analytical contributions. Full article
(This article belongs to the Special Issue Information Retrieval and Social Media Mining)
14 pages, 475 KiB  
Article
Nutrient Profiling of Romanian Traditional Dishes—Prerequisite for Supporting the Flexitarian Eating Style
by Lelia Voinea, Dorin Vicențiu Popescu, Teodor Mihai Negrea and Răzvan Dina
Information 2020, 11(11), 514; https://doi.org/10.3390/info11110514 - 2 Nov 2020
Cited by 4 | Viewed by 3355
Abstract
Currently, most countries have to deal with multiple discrepancies that have arisen between the constraints of sustainable development and the return to traditions, involving food producers, as well as consumers, aspects that are also easily noticed in Romania. Thus, the main purpose of this study was to assess the nutritional quality of the Romanian traditional diet using a nutrient profiling method based on the Nutri-Score algorithm, applied to several representative Romanian traditional dishes. Because this algorithm has the capacity to highlight the amount (%) of fruits, vegetables, and nuts from a certain dish, it might be considered an indicator of the sustainable valences of the selected meals. The results showed that the traditional menus do not correspond to a balanced and sustainable eating behavior; thus, it is recommended to improve the Romanian pattern of food consumption and to ensure its sustainable basis. In order to achieve this goal, we propose the development of a new paradigm of the contemporary Romanian food style incorporating three main directions of action: acceptance, adaptation, and transformation. Full article
(This article belongs to the Special Issue Green Marketing)
10 pages, 202 KiB  
Article
Perceptions and Misperceptions of Smartphone Use: Applying the Social Norms Approach
by John McAlaney, Mohamed Basel Almourad, Georgina Powell and Raian Ali
Information 2020, 11(11), 513; https://doi.org/10.3390/info11110513 - 2 Nov 2020
Cited by 5 | Viewed by 3312
Abstract
The social norms approach is an established technique to bring about behaviour change through challenging misperceptions of peer behaviour. This approach is limited by a reliance on self-report and a lack of interactivity with the target population. At the same time, excessive use of digital devices, known as digital addiction, has been recognized as an emergent issue. There is potential to apply the social norms approach to digital addiction and, in doing so, address some of the limitations of the social norms field. In this study, we trialled a social norms intervention with a sample of smartphone users (n = 94) recruited from the users of a commercial app designed to empower individuals to reduce their device usage. Our results indicate that most of the sample overestimated peer use of smartphone apps, demonstrating the existence of misperceptions relating to smartphone use. Such misperceptions are the basis for the social norms approach. We also document the discrepancy between self-report and smartphone usage data as recorded through data collected directly from the device. The potential for the application of the social norms approach and directions for future research are discussed. Full article
(This article belongs to the Special Issue Interactive e-Health Interventions for Digital Addiction)
18 pages, 1224 KiB  
Article
Making the Case for a P2P Personal Health Record
by William Connor Horne and Zina Ben Miled
Information 2020, 11(11), 512; https://doi.org/10.3390/info11110512 - 31 Oct 2020
Cited by 2 | Viewed by 3952
Abstract
Improved health care services can benefit from a more seamless exchange of medical information between patients and health care providers. This exchange is especially important considering the increasing trends in mobility, comorbidity and outbreaks. However, current Electronic Health Records (EHR) tend to be institution-centric, often leaving the medical information of the patient fragmented and more importantly inaccessible to the patient for sharing with other health providers in a timely manner. Nearly a decade ago, several client–server models for personal health records (PHR) were proposed. The aim of these previous PHRs was to address data fragmentation issues. However, these models were not widely adopted by patients. This paper discusses the need for a new PHR model that can enhance the patient experience by making medical services more accessible. The aims of the proposed model are to (1) help patients maintain a complete lifelong health record, (2) facilitate timely communication and data sharing with health care providers from multiple institutions and (3) promote integration with advanced third-party services (e.g., risk prediction for chronic diseases) that require access to the patient’s health data. The proposed model is based on a Peer-to-Peer (P2P) network as opposed to the client–server architecture of the previous PHR models. This architecture consists of a central index server that manages the network and acts as a mediator, a peer client for patients and providers that allows them to manage health records and connect to the network, and a service client that enables third-party providers to offer services to the patients. This distributed architecture is essential since it promotes ownership of the health record by the patient instead of the health care institution. Moreover, it allows the patient to subscribe to an extended range of personalized e-health services. Full article
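To make the mediator role of the index server concrete, the sketch below serializes a hypothetical Transfer Request message that a peer client could submit and the index server could queue as a pending transaction. All field names are illustrative assumptions, not the system's actual schema.

```python
import json
import uuid
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class TransferRequest:
    """Transfer Request message mediated by the index server (field names are illustrative)."""
    transaction_id: str
    requester_peer_id: str          # patient or provider peer asking for the record
    holder_peer_id: str             # peer currently holding the record
    record_id: str
    requested_at: str

def new_transfer_request(requester: str, holder: str, record_id: str) -> str:
    """Serialize a pending transaction the index server can queue and track."""
    req = TransferRequest(
        transaction_id=str(uuid.uuid4()),
        requester_peer_id=requester,
        holder_peer_id=holder,
        record_id=record_id,
        requested_at=datetime.now(timezone.utc).isoformat(),
    )
    return json.dumps(asdict(req))

print(new_transfer_request("patient-17", "clinic-03", "rec-2020-11-0042"))
```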
(This article belongs to the Section Information Applications)
Figure 1: System architecture.
Figure 2: Transfer Request operation. This operation is used by the patient to request a health record from the health provider, or by the health provider to request a health record from the patient.
Figure 3: Push operation. This operation is used by either the patient or the health provider to update the health record maintained by the other party.
Figure 4: Service operation. This operation is used by the patient to request an e-health service from a third party.
Figure 5: The UI used by the patient to initiate a transaction or invoke a service.
Figure 6: List of pending transactions in the network. This list is maintained by the index server.
Figure 7: List of completed transactions. The status of the transactions is maintained and updated by the index server.
Figure 8: Example data record. The first field is the unique record id; the second field is the record metadata; the third field is the content of the record.
Figure 9: Result record returned by the hypertension service in response to the patient's request.