Introduction

Roughly speaking, Science answers questions and discovers information, from which Engineering designs products and/or processes to solve problems. However, a design may fail to meet the intended user experience, so it is fundamental that the designer can “foresee” the impact of his/her design and adjust the product/process accordingly, “predictability” being essential in all human activities.

A historical example came after the tragic sinking of the Royal Charter ship in 1859 due to terrible sea conditions: to avoid a recurrence of this event, Robert Fitzroy developed the weather charts he described as “forecasts”, the first known usage of the term.

Another historical example came in 1953 from the first virtual imitation of the operation of a real-world process or system, that is, the “simulation” of the sinusoidal steady state of a linear network carried out on an electromechanical-relay computer [1].

It is also worth mentioning the approach that NASA adopted for the Apollo missions in the 1960s–70s, which “mirrored” on Earth the conditions of the space vehicle during the mission, so as to “foresee” the impact in outer space of a decision made on Earth.

Since then, from weather, electronics, and space, the “predictability” approach has been extended to virtually every aspect of human activity.

In particular, for manufacturing, in 2002 M. W. Grieves coined the expression “Product Lifecycle Management” (PLM), defined as “an integrated, information-driven approach comprised of people, processes/practices, technology, to all aspects of a product’s life” [2, 3], all aspects meaning product analysis, planning, design, engineering, production, sale, distribution, disposal, and recycling. PLM was also reported in different contexts as the “Mirror Space Model” (MSM) or “Information Mirroring Model” (IMM) [4]. PLM relies on two parallel spaces, a real and a virtual one (Fig. 1). Information is gathered from the real space to feed processes in the virtual space where, as a consequence of this route, knowledge can be obtained to assist the running processes in the real space, in a feed-forward and feed-back scenario. Moreover, to simplify and speed up the route, the virtual space can be split into subspaces working in parallel on subsets of the information and, to better arrange the route, data are organized not by function but by physical entity. Real and virtual are connected during each of the four phases of production: development, manufacturing, operation, and disposal.

Fig. 1

PLM as a mirroring or twinning between a real physical system and its virtual counterpart. Data flows go from the real to the virtual space; information/process flows go, vice versa, from the virtual to the real space; the virtual space is arranged into virtual subspaces 1, 2, …, n
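
As a purely illustrative companion to Fig. 1, the following minimal Python sketch (all names and numbers are hypothetical) mirrors data gathered from the real space into parallel virtual subspaces and merges their outputs into feed-back knowledge for the real space:

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List

@dataclass
class VirtualSubspace:
    """One of the n virtual subspaces of Fig. 1, working on a subset of the data."""
    name: str
    process: Callable[[Dict[str, float]], Dict[str, float]]

@dataclass
class VirtualSpace:
    subspaces: List[VirtualSubspace] = field(default_factory=list)

    def feed(self, real_data: Dict[str, float]) -> Dict[str, float]:
        """Feed-forward: data from the real space are processed by all subspaces;
        the merged knowledge is returned as feed-back for the real space."""
        knowledge: Dict[str, float] = {}
        for sub in self.subspaces:
            knowledge.update(sub.process(real_data))
        return knowledge

# Hypothetical usage: a 'thermal' subspace suggesting a set-point correction.
thermal = VirtualSubspace(
    name="thermal",
    process=lambda d: {"setpoint_correction": 0.1 * (70.0 - d["temperature"])},
)
print(VirtualSpace([thermal]).feed({"temperature": 72.5}))  # {'setpoint_correction': -0.25}
```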

The concept of two parallel (real and virtual) spaces gave way to the more descriptive expression digital twin (DT, thoroughly detailed in the following sections). Over the years, several works, and comprehensive review papers, have been devoted to the description [5, 6], the characterization [7, 8] and, more relevantly, the applications of DTs [9,10,11,12].

This paper is organized as follows: it covers DTs and their improved versions, evidencing advantages and limits (Sections “Digital Twin” and “Improvements (ADT, IDT, VHS) and Limits”), introduces and discusses the new concept of human digi-real duality (HDRD, Section “Human Digi-Real Duality (HDRD)”) and the technologies that can make HDRD feasible (Section “Key-Enabling Techs”), with conclusions and outlooks handled at the end (Section “Conclusions and Outlooks”).

Digital Twin

The Digital Twin (DT) embraces so many relevant concepts that a univocal, universally accepted definition does not exist.

For Grieves, a DT is “a set of virtual information constructs that fully describes a potential or actual physical manufactured product from the micro atomic level to the macro geometrical level”.

For NASA’s Technology Area 11, a DT is “an integrated multi-physics, multi-scale, probabilistic simulation of an as-built vehicle or system that uses the best available physical models, sensor updates, fleet history, etc., to mirror the life of its corresponding flying twin” [13].

For the AIAA Institute, a DT is “a set of virtual information construct that mimics the structure, context and behavior of an individual unique asset, is dynamically updated with data from its physical twin throughout its lifecycle, and informs decisions that realize value”.

Even though it is clear that a DT is made of a real and a virtual space, side by side, exchanging data and information, the three aforementioned definitions are not univocal but focus on three different aspects: “describe”, “simulate”, and “mimic”, respectively. Even so, DTs are gaining more and more interest (Figs. 2, 3), and are assuming such a relevant role that they are also named with a variety of other terms, such as Digital Avatar, Digital Shadow, Digital Angel, Virtual Entity, Cyber Object, and more.

Fig. 2

a Google search interest for “Digital Twin” since 2016 (100 indicates the highest search frequency for the term, 50 indicates half of that, and so on); b the same, differentiated by country (grey: few searches; light blue and blue: higher numbers of searches)

Fig. 3

Number of papers including the words “Digital Twin” from 2016 (263) to 2022 (26,200)

A DT, as a complete virtual description of a physical product accurate at both the micro and macro levels, splits into the following (a minimal data-structure sketch in code follows the list):

  • Digital twin prototype (DTP): a virtual description of a prototype product, containing all the information required to create the physical twin;

  • Digital twin instance (DTI): a specific instance of a physical product that remains linked to an individual product throughout that product’s life;

  • Digital twin aggregate (DTA): the combination of all DTIs;

  • Digital twin environment (DTE): a multi-domain physics application space for operating on DTs, including performance prediction and information interrogation.
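
As a purely illustrative aid (not an established API; all names are hypothetical), these constructs can be pictured as nested data structures, with the DTA enabling fleet-level queries over all DTIs:

```python
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class DigitalTwinPrototype:
    """DTP: the information required to create the physical twin."""
    design_parameters: Dict[str, float]
    bill_of_materials: List[str]

@dataclass
class DigitalTwinInstance:
    """DTI: linked to one individual physical product throughout its life."""
    serial_number: str
    prototype: DigitalTwinPrototype
    sensor_history: List[Dict[str, float]] = field(default_factory=list)

@dataclass
class DigitalTwinAggregate:
    """DTA: the combination of all DTIs (a DTE would be the application
    space in which these objects are operated on)."""
    instances: List[DigitalTwinInstance] = field(default_factory=list)

    def mean_reading(self, key: str) -> float:
        """Fleet-level statistic computed across all instances' histories."""
        values = [r[key] for i in self.instances for r in i.sensor_history if key in r]
        return sum(values) / len(values) if values else float("nan")
```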

Counting on a digital clone of a real system brings a number of advantages, such as:

  • time and cost savings from the reduction of tests and checks;

  • time-to-market reduction;

  • reduction of product defects;

  • interception of potential problems before they happen;

  • simulation of events without causing them in the real system;

  • evaluation of aging effects;

  • evaluation of all parameters at once, instead of one by one;

  • limitation of physical production to the final step only.

To evidence these advantages, we can consider the potential evolution of a generic physical system, which can unfold into predicted or unpredicted behaviors, which in turn can evolve into desirable or undesirable conditions (Fig. 4).

Fig. 4

Classifications of all possible evolutions of a physical system [4]

In particular, predicted desirable (PD) is the designed class; predicted undesirable (PU) relates to unsolved problems (to be taken into account to avoid lawsuits); unpredicted desirable (UD) does not result in problems but is an index of the incompleteness of the adopted model; unpredicted undesirable (UU) holds potentially serious and dangerous problems that must be mitigated or solved. The latter is the class for which a DT can be strategically advantageous.
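
A minimal sketch, with hypothetical inputs, of how the four classes of Fig. 4 could be mapped in code (the predicted/desirable flags would come from comparing DT forecasts with the observed and desired behavior):

```python
def classify_evolution(predicted: bool, desirable: bool) -> str:
    """Map a system evolution onto the four classes of Fig. 4."""
    if predicted and desirable:
        return "PD: designed behaviour"
    if predicted:
        return "PU: known but unsolved problem"
    if desirable:
        return "UD: harmless, but the model is incomplete"
    return "UU: potentially serious problem, where a DT pays off most"

# Example: an event the DT did not foresee and that is undesirable.
print(classify_evolution(predicted=False, desirable=False))
```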

All in all, the DT is becoming more and more crucial in view of the realization of the “Industry 4.0” paradigms, as already implemented by several companies, such as Siemens, Toyota, DHL [14], Philips [15], IBM (IBM website, 2022), General Electric (GE website, 2022), Oracle Corporation [16], and Microsoft, to cite a few. Moreover, the concept of DT has been successfully adopted in many different areas, such as aerospace [17, 18], automotive, manufacturing [19], production [20], industrial and consumer packaged goods, food processing [21], the pharmaceutical industry, energy management [22], maintenance optimization [23], distribution grids [24], and more.

The effectiveness of a DT strictly depends on how faithful the virtual entity is to its real counterpart. Virtual-to-real correspondence depends on the data gathered from the real world (Fig. 1), in particular on their number, accuracy, and levels of significance and abstraction [25]. Moreover, the correspondence relies on the algorithms that, depending on those data, describe the physical system, and on the processes that foresee the actions to be applied to the real part [26,27,28].

However, interestingly, only a few elements are essential to define, develop, and utilize a DT, according to the elements reported in Table 1, which lists DT-related keywords [29]. As can be seen, the physical asset, the models, and the communications are the main aspects to count on, whereas others, such as the services, the data, and even the user, are not strategic elements since they can be assumed rather than measured.

Table 1 Matrix of DT characteristics

These characteristics allow classifying DTs as active, semi-active, or passive, as detailed here: a DT is active when the real and digital parts are in sync, seamlessly and continuously updated, with virtual and real existing simultaneously; semi-active when used for post-analysis after prior data collection; passive when some data are collected and the others are assumed.

Some resources can help DT adoption. Main examples are Eclipse Ditto™ (by the Eclipse Foundation) and Azure Digital Twins (by Microsoft). In particular, Eclipse Ditto™ implements DTs in the IoT, potentially mirroring billions of DTs residing in the digital world with physical “Things”, acting as a “middleware” between IoT devices and IoT solutions. Azure Digital Twins is a platform as a service (PaaS) that enables the creation of knowledge graphs based on digital models of entire environments: buildings, factories, farms, energy grids, railroads, stadiums, …, even entire cities.
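
As a purely illustrative sketch, the snippet below builds a Ditto-style “Thing” payload; the thingId/attributes/features layout follows the publicly documented Eclipse Ditto Things model, while the identifiers, values, and the endpoint mentioned in the comment are assumptions made for this example:

```python
import json

# Illustrative Ditto-style "Thing": ids and values are made up for this sketch.
thing = {
    "thingId": "org.example:pump-4711",
    "attributes": {"location": "plant-A", "manufacturer": "ACME"},
    "features": {
        "hydraulics": {"properties": {"pressure_bar": 5.2, "flow_l_min": 130}}
    },
}

print(json.dumps(thing, indent=2))
# Such a payload would typically be sent to the Ditto HTTP API, e.g.
# PUT /api/2/things/org.example:pump-4711 (authentication omitted here).
```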

Improvements (ADT, IDT, VHS) and Limits

The concept of the DT has been expanded, leading to the so-called augmented digital twin (ADT).

For an ADT:

  1. the virtual space interacts with its real counterpart, with its surroundings, and with other DTs too;

  2. it enhances data from the connected asset with derivative data, correlated data from federated sources, and/or intelligence data from analytics and algorithms [30];

  3. it concerns visualizing the DT data by using augmented reality [31];

  4. it also gathers data from other objects and resources and is shaped using artificial intelligence techniques [32].

The concept of ADT moves manufacturers and operators closer to the goal of selling outcomes (results) instead of products (machines).

Moreover, the concept of the DT can be further enhanced into an Intelligent Digital Twin (IDT), when the DT meets artificial intelligence, machine learning, and deep learning algorithms [33].

Relevant examples of DT, ADT, or IDT adoption come from the Wuhan Raytheon hospital in China, built (at the beginning of the COVID-19 pandemic) in the extraordinary time span of only fourteen days; from the football stadium in Barcelona by Iotwins Inc. (www.iotwins.eu); from the adoption of the 51World technology (www.51aes.com) on water sites; from the transformation of the Port of Rotterdam for monitoring and efficiency purposes by IBM (ibm.com/blog/iot-digital-twin-rotterdam); and from the implementation of a power grid in Finland by Siemens [34]. Moreover, Tesla is developing a smartphone application to let the car give its owner a periodic report of its status (source: Tesla Crash Lab, 2019), and Boeing achieved a 40% improvement in the first-time quality of parts by using DTs, while planning to digitize all its engineering and development systems.

In contrast, whatever the adopted version of DT, adopters can fail by misrepresenting the system they want to replicate (a low-fidelity or inaccurate virtual twin with respect to its physical counterpart, which is usually more complicated than it appears at first glance), by misunderstanding the real benefits, by not correctly identifying and addressing business capabilities and affordability with respect to the technology and timeline choices, and by not adequately planning technology sustainability. Moreover, DTs are in general specific to the original equipment manufacturer (OEM) and tend to succeed when produced by the OEM itself. As a consequence, it is mandatory for developers to solve the challenging task of including as many failure models as possible in every possible scenario. Besides, even if many works claim successful uses of DTs, the lack of metric-based evidence makes it hard to correctly evaluate the implementation success of a DT.

All in all, as a matter of principle, any physical system (an object, a component, a mechanism, a network, an implant, a machine, a structure, an asset, …), potentially of any level of complexity (whether inanimate, vegetative, or animate), can be fully described by its features (data). Furthermore, electronics (sensors, transducers, …) can allow gathering whatever amount of whatever data (full information) from any system during any period of time. The most disruptive area of application of this concept, whether DT, ADT, or IDT, is the medical field, leading towards the so-called Human Digital Twin (HDT), also termed Virtual Human Simulator (VHS).

The VHS is not just a simple evolution of the DT: the latter refers to a physical system, whereas the VHS refers to a behavioral, biological, and physical system as an ensemble.

The higher the correspondence between the real and the virtual, the more possibilities the VHS allows, as described in the following points, listed in order of increasing complexity (a minimal pipeline sketch follows the list):

  1. Digitize: analogue to digital data conversion;

  2. Visualize: digital representation of a physical system;

  3. Simulate: determine one or more behaviors of the physical system in its environment;

  4. Emulate: mirror a system by imitation;

  5. Extract: gather information from real data streaming;

  6. Orchestrate: virtual control or update of the physical system;

  7. Predict: future behavior of the physical system.
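
The following toy pipeline (every function is a hypothetical placeholder) illustrates three of the seven levels, namely digitize, extract, and predict, chained in order of increasing complexity:

```python
from typing import Callable, Dict, List

def digitize(raw: List[float]) -> List[int]:
    return [round(v * 100) for v in raw]            # crude A/D quantization

def extract(samples: List[int]) -> Dict[str, float]:
    return {"mean": sum(samples) / len(samples)}    # feature extraction

def predict(features: Dict[str, float]) -> float:
    return features["mean"] * 1.05                  # naive trend forecast

stages: List[Callable] = [digitize, extract, predict]
data = [0.98, 1.01, 1.02]                           # analogue readings
for stage in stages:
    data = stage(data)
print(data)                                         # predicted future value
```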

The digitizing step can be considered as the result of measurements based on a four-type schematization, according to the positions of electronic sources and sensors with respect to the human body under measurement [35] (a small enumeration sketch follows the list):

  • Outside-In (sources on the body and sensors elsewhere, e.g. optical systems [36]);

  • Inside-Out (sensors on the body and sources elsewhere, e.g. accelerometers [37, 38]);

  • Inside-In (sources and sensors on the body, e.g. sensory gloves [39] or electro-goniometers [40]);

  • Outside-Out (sources and sensors not directly on the body, e.g. force plates [41]).
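
Purely as an illustration (the enumeration and helper below are hypothetical), the four placements can be captured as follows:

```python
from enum import Enum

class MeasurementScheme(Enum):
    """Four-type schematization of source/sensor placement [35]."""
    OUTSIDE_IN = "sources on the body, sensors elsewhere"
    INSIDE_OUT = "sensors on the body, sources elsewhere"
    INSIDE_IN = "sources and sensors on the body"
    OUTSIDE_OUT = "sources and sensors off the body"

def scheme(sources_on_body: bool, sensors_on_body: bool) -> MeasurementScheme:
    """Map the two placement flags onto a scheme."""
    if sources_on_body and sensors_on_body:
        return MeasurementScheme.INSIDE_IN
    if sources_on_body:
        return MeasurementScheme.OUTSIDE_IN
    if sensors_on_body:
        return MeasurementScheme.INSIDE_OUT
    return MeasurementScheme.OUTSIDE_OUT

print(scheme(sources_on_body=False, sensors_on_body=True).value)
```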

Data gathering has been evolving from fixed sensors (Inside-In), to mobile sensors (Outside-In, Inside-Out), to immersive sensors (Outside-Out, immersive persistent environments, ubiquitous data).

The visualizing step can be realized by means of a monitor or a hologram [42, 43], showing virtual [44] or augmented reality, going from pixels (2D on screen) to voxels (3D in the metaverse), or by 3D-printed structures such as human organoids [45, 46].

The simulation step can be realized by means of multi-physics software modelling the mechanical, electrical, and biochemical cues useful for designing proof-of-concept models of human systems and subsystems [47].

The emulation step can provide test beds for a broad range of human behaviors (e.g. the Neon project by Samsung [48]), useful for better understanding human biomechanics, biophysics, biochemistry, and energetics (e.g. of the ankle–foot mechanism [49] or of sign language implementation [50, 51]).

The extraction step can allow determining differences between ideal and non-ideal behaviors (e.g. healthy vs. pathological human walking assessments [52]).

The orchestrating step can take advantage of the extraction step to modify an incorrect behavior of the physical system toward the desired one (e.g. specific vitamin requirements across the human life cycle [53]).

The prediction step can provide the future effects of current behaviors (e.g. dietary habits [54]).
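
A minimal sketch, with hypothetical stride-length data, of how the extraction and prediction steps could quantify the deviation from a reference (healthy) gait value and extrapolate its trend:

```python
from statistics import mean

# Hypothetical weekly stride-length measurements (metres) and a reference value
# standing in for the "ideal" behaviour of a healthy gait.
reference_stride = 1.40
weekly_stride = [1.38, 1.36, 1.33, 1.31]

# Extraction: deviation of each observation from the reference behaviour.
deviations = [reference_stride - s for s in weekly_stride]
print("mean deviation (m):", round(mean(deviations), 3))                  # 0.055

# Prediction: naive linear extrapolation of the trend to the next observation.
slope = (weekly_stride[-1] - weekly_stride[0]) / (len(weekly_stride) - 1)
print("predicted next stride (m):", round(weekly_stride[-1] + slope, 3))  # 1.287
```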

Human Digi-real Duality (HDRD)

When a VHS interacts with other VHSs and with its surroundings, we can refer to this augmented capability as the human digi-real duality (HDRD).

The HDRD can be modelled under behavioral, physical, and biological aspects.

Behavioral Model

The behavioral pattern of an individual depends on a complex mechanism resulting from the contribution of environmental characteristics (climate, pollution, etc.), his/her physical conditions and their evolution (sex, age, deficiencies, impairments, etc.), psychological status and developments (family situation, working conditions, friendships, etc.), boundary conditions (holidays/weekdays, day/night, sounds/noises, etc.), and more.

Therefore, as a matter of principle, if the related data are fully available, a complex HDRD can potentially be realized, in which every environmental characteristic, every physical development, every psychological development, and every boundary condition can be simulated, each as one of the n interacting subspaces (Fig. 1). As an example, we can think of an HDRD identity to determine, describe, and predict a child’s character [55].

Physical Model

The physical model of an individual depends on a complex mechanism due to the contribution of historical data (past muscular activities, lifestyle, type of work, etc.), metabolism, and physical characteristics [56] (individual muscular forces, contact forces and joints, elastic energy in tendons, antagonistic muscle action, etc.).

Therefore, as a matter of principle, being able to count on the related physical data, it is possible to create a complex HDRD able to predict the future muscular behavior of an individual, as already introduced for athletes’ activity [57].

Biological Model

The functioning of an individual’s body depends on a complex mechanism due to the contribution of systems (circulatory, muscular, endocrine, nervous, immune, etc.), subsystems (respiratory, vestibular, endocrine, locomotor, digestive, etc.), organs (heart, liver, spleen, lungs, etc.), influenced by surrounding conditions (temperature, vasodilatation, hydration, etc.) and regulated by brain activity.

Therefore, as a matter of principle, counting on the related complex data, a complex HDRD can be realized. Each system, apparatus, or organ can be seen as a subspace (Fig. 1), all interacting in a single (cloud) platform, as already introduced for cancer evolution issues [58].

Key-Enabling Techs

This ambitious and challenging scenario can be realized if supported by the so-named key enabling technologies (KETs), required to:

  1. gather the most accurate and complete information possible from the real space,

  2. model the system in a way that fits the particular requirements,

  3. provide suitable results within the appropriate applications,

  4. present the results as comprehensively as possible.

These four points are addressed, respectively, in the following paragraphs.

Sensors

Information from human beings (corresponding to the “real space” of the DT) can be obtained by means of different types of sensors [59], namely wearable [60], epidermal, implantable, immersive, and pervasive sensors.

Wearable sensors must be unobtrusive, light, compact, and easy to use, as is the case for flex sensors [44, 60], inertial measurement units (IMUs) [36], stretch sensors [40, 61], e-textile devices [62], sensory gloves [63, 64], headwear systems [65], and more.

Epidermal sensors must be shape-adaptable, stretchable, biocompatible, light, thin, and compact, as is the case for chemical, strain, temperature, and piezoresistive sensors [66], etc.

Implantable sensors are those inserted into the human body, performing localized measurements, with the subject acting as a part of the system [67]. These sensors must be safe and biocompatible [68], must not alter the health status and, possibly, should be biodegradable [69], as is the case for monitoring metabolites in blood plasma by optical means [70], for cardiac electronic monitoring [71], for analyte concentrations in bodily fluids [72], for the osteoporosis conditions of bones [73], and so on.

Immersive sensors merge the information related to humans with their surroundings, such as tracking devices [35], environmental monitoring systems, depth sensors, geo-mapping tools, and more.

Pervasive sensors refer to those systems in which a single sensor cannot be precisely located because its different parts belong to different locations [74], as occurs in ubiquitous computing, spatial microphones [75] and their arrangements [76], human–computer interfaces, etc.

Whatever the considered sensors, they can be arranged in different ways to realize wireless body area networks (WBANs), body sensor networks (BSNs), medical implant communication systems (MICSs), wireless medical telemetry systems (WMTSs), and other arrangements, so as to gather as much well-organized data as possible.
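
As a purely illustrative sketch (hypothetical node names and readings), a WBAN-style arrangement can be pictured as a set of placed nodes whose readings are merged into one organized record for the virtual space:

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class SensorNode:
    """One node of a hypothetical wireless body area network (WBAN)."""
    node_id: str
    placement: str                 # e.g. "wrist", "chest", "implant"
    readings: Dict[str, float]

def aggregate(nodes: List[SensorNode]) -> Dict[str, float]:
    """Merge per-node readings into one record for the virtual space,
    prefixing each quantity with its placement to keep the data organized."""
    record: Dict[str, float] = {}
    for node in nodes:
        for key, value in node.readings.items():
            record[f"{node.placement}.{key}"] = value
    return record

wban = [
    SensorNode("imu-01", "wrist", {"acc_x_g": 0.02}),
    SensorNode("ecg-01", "chest", {"heart_rate_bpm": 72.0}),
]
print(aggregate(wban))  # {'wrist.acc_x_g': 0.02, 'chest.heart_rate_bpm': 72.0}
```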

Models

The success (to a greater or lesser extent) of the HDRD basically relies on the degree of accuracy of the digital model as a faithful representation of its real human counterpart.

But the human being is of an immense degree of complexity, so that it is currently practically impossible to represent it wholly. Actually, only partial human subsystems have been digitally modelled [77], such as the cardio twin [78], the carotid arteries twin [79], and the possibility of assessing type 2 diabetes using a digital twin [80], but surely not the whole human body.
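
To give a concrete feel for what such a partial model can look like, here is a minimal sketch of a textbook-style two-element Windkessel approximation of arterial pressure (not the cited cardio twin; parameters and the ejection profile are illustrative only, not clinically validated):

```python
import math

R = 1.0      # peripheral resistance (mmHg*s/mL), illustrative value
C = 1.5      # arterial compliance (mL/mmHg), illustrative value
dt = 0.001   # integration step (s)

def inflow(t: float) -> float:
    """Crude half-sine ejection profile within a 0.8 s cardiac cycle."""
    phase = t % 0.8
    return 300.0 * math.sin(math.pi * phase / 0.3) if phase < 0.3 else 0.0

pressure = 80.0                                        # initial pressure (mmHg)
for step in range(int(10.0 / dt)):                     # simulate 10 s, explicit Euler
    t = step * dt
    pressure += dt * (inflow(t) - pressure / R) / C    # C*dP/dt = Q(t) - P/R
print(round(pressure, 1))                              # pressure at the end of the run
```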

To this aim, we have to think “out of the box” with new paradigms, so as to include the concept of highly disordered systems, recently relaunched by the 2021 Nobel Prize assigned to Syukuro Manabe, Klaus Hasselmann, and Giorgio Parisi, scientists who paved the way for new approaches to describing and modelling disordered systems.

Computing

Such a complex, chaotic, and disordered system as the HDRD demands large computational resources, difficult to quantify and not feasible with current technologies.

A promise in the desirable direction comes from the 2022 Nobel Prize laureates Anton Zeilinger, John F. Clauser, and Alain Aspect, who improved the understanding of quantum computing (theoretically emerged in the 1980s), which harnesses the laws of quantum mechanics to solve problems too complex for classical computers.

Visualization

The fourth point concerns a comprehensive representation of the results achievable by means of standard monitors or, better, immersive 360° representations, as demonstrated by the following examples.

With the so-named tele-immersion technology (by Teleimmersion Lab, Berkeley, USA), the scenario is displayed by spherical 3D fully immersive multi-screen systems [81].

Autostereoscopic displays, i.e. glasses-free 3D vision [82], realized by lenticular lenses fitted onto the LCD panel, create a fully natural sensation of depth (www.alioscopy.com).

Even more emotionally engaging is the Holomachine (patent no. WO2011117721A8), a holographic projector system with the screen made out of nothing but micro-particles of water [43].

The visualization can be further improved by human–machine interfaces that can make interactivity possible [83].

Conclusions and Outlooks

Everyone is built differently, and no two people recover in the same way. As a matter of fact, 38–75% of patients with common diseases do not respond to drug treatment (source: US Food and Drug Administration), and patients with the same diagnosis respond differently to the same treatment. These aspects make it difficult to administer medications and therapies, so that treatments increasingly must be tailored to the specific patient: the right treatment for the right patient at the right time.

Advantageously, full information can be gathered from any real physical system to create its virtual counterpart, known as a Digital Twin (DT) (partial information leads to a Model, not a Twin).

Sensors, transducers, and the related electronics are the media to gather information and to make the real and the virtual interact.

DTs have been successfully applied in a variety of different areas, in some of which their adoption has become essential.

In particular, in the medical field a number of different virtual human organs have been realized, such as the complex heart, the carotid arteries, and more.

Not limited to a single virtual organ, the DT concept can, in principle, be expanded toward the entire human being, thus landing on the concept of the Virtual Human Simulator (VHS).

By developing the VHS, some outlooks can be:

  • creating connections between two different VHSs to highlight differences and critical issues;

  • remotely monitoring the subject (progress/regression evaluation);

  • supplying the drug routines for the single subject's VHS;

  • facilitating the possibility of moving towards non-invasive and painless medical tests.

In a futuristic perspective, VHS technology will be able to simulate brain waves in response to generic or specific stimuli, allowing investigations into the functioning of the human brain aimed at improving neuro-technologies.

Furthermore, the VHS will be able to give rise to virtual models of bacteria and viruses, allowing the study of the mechanisms of interaction with the various organs of the human body, up to the possible onset of pathologies, and consequently the a-priori evaluation of the effectiveness of possible pharmacological remedies.

However, the enhanced version of the VHS is the human digi-real duality (HDRD), a pioneer of the precision-health paradigm.

The complexity of achieving the HDRD makes partnerships among Medicine, Biology, Engineering, Computer Science, etc. mandatory, in order to monitor, analyze, simulate, visualize, and manage the HDRD, and to make it descriptive, integrative, and predictive.

Theoretically, the HDRD will be possible; whether it will be so in practice remains to be seen.