
WO2023055293A2 - Mouthguard system for human machine interaction - Google Patents

Mouthguard system for human machine interaction

Info

Publication number
WO2023055293A2
Authority
WO
WIPO (PCT)
Prior art keywords
occlusal
occlusal contact
mouthguard
sensor
color
Prior art date
Application number
PCT/SG2022/050689
Other languages
French (fr)
Other versions
WO2023055293A3 (en)
Inventor
Xiaogang Liu
Bo HOU
Luying YI
Original Assignee
National University Of Singapore
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by National University Of Singapore filed Critical National University Of Singapore
Publication of WO2023055293A2 publication Critical patent/WO2023055293A2/en
Publication of WO2023055293A3 publication Critical patent/WO2023055293A3/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/011 Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61C DENTISTRY; APPARATUS OR METHODS FOR ORAL OR DENTAL HYGIENE
    • A61C19/00 Dental auxiliary appliances
    • A61C19/04 Measuring instruments specially adapted for dentistry
    • A61C19/05 Measuring instruments specially adapted for dentistry for determining occlusion
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61F FILTERS IMPLANTABLE INTO BLOOD VESSELS; PROSTHESES; DEVICES PROVIDING PATENCY TO, OR PREVENTING COLLAPSING OF, TUBULAR STRUCTURES OF THE BODY, e.g. STENTS; ORTHOPAEDIC, NURSING OR CONTRACEPTIVE DEVICES; FOMENTATION; TREATMENT OR PROTECTION OF EYES OR EARS; BANDAGES, DRESSINGS OR ABSORBENT PADS; FIRST-AID KITS
    • A61F4/00 Methods or devices enabling patients or disabled persons to operate an apparatus or a device not forming part of the body
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/017 Gesture based interaction, e.g. based on a set of recognized hand gestures

Definitions

  • a system for human machine interaction comprising: a mouthguard; a computing device in communication with the mouthguard; wherein the mouthguard comprises: an array of occlusal contact sensors, wherein each occlusal contact sensor emits light of a designated color responsive to occlusal contact; an optical waveguide for propagating light emitted by each occlusal contact sensor to a color sensor, wherein the color sensor generates electrical signals responsive to color of the received light; an antenna for wirelessly transmitting the electrical signals generated by the color sensor; wherein the computing device is configured to process received electrical signals to determine an occlusal pattern signal.
  • a mouthguard for human-machine interaction comprising: an array of occlusal contact sensors, wherein each occlusal contact sensor emits light of a designated color responsive to occlusal contact; an optical waveguide for propagating light emitted by each occlusal contact sensor to a color sensor, wherein the color sensor generates electrical signals responsive to color of the received light, wherein the electrical signals encode part of an occlusal pattern signal for human-machine interaction; an antenna for wirelessly transmitting the electrical signals generated by the color sensor.
  • a method of human-machine interaction comprising: generating light at an array of occlusal contact sensors in response to occlusal contact, wherein each occlusal contact sensor emits light of a designated color responsive to occlusal contact; generating electrical signals by a color sensor responsive to the light generated by the array of occlusal contact sensors; processing the generated electrical signals by an electronic circuit to generate an occlusal pattern signal; performing human-machine interaction based on the occlusal pattern signal.
  • the method further comprises transmitting the occlusal pattern signal to a computing device for human-machine interaction.
  • Figure 1 illustrates a schematic diagram of a mouthguard, its associated components, schematic diagrams of bite patterns and associated graphs;
  • Figure 2 illustrates schematic diagrams of sensor arrays and associated characterization graphs
  • Figure 3 illustrates a schematic diagram of a mouthguard and associated sensor arrays
  • Figure 4 illustrates schematic diagrams of the integration of the mouthguard for human machine interaction applications
  • Figure 5 illustrates characterization of mechanoluminescent phosphors used in the occlusal sensors of the mouthguard
  • Figure 6 illustrates a mechanism of mechanoluminescence implemented by the occlusal sensors
  • Figure 7 illustrates bite/occlusal contact pattern analysis using the mouthguard
  • Figure 8 illustrates optical fiber designs for a waveguide of a mouthguard
  • Figure 9 illustrates optical data analysis using sensors of a mouthguard
  • Figure 12 illustrates images of a part of the fabrication process of the sensor array
  • Figure 13 illustrates optical and electron images of optical fiber and sensors of the mouthguard
  • Figure 14 illustrates an experimental setup for optical fiber characterization
  • Figure 15 illustrates graphs of optical characterization of optical fibers of the mouthguard
  • Figure 16 illustrates electron microscope images of sensors before and after stretch cycles
  • Figure 19 illustrates a block diagram comprising some components of a mouthguard
  • Figure 20 illustrates a circuit schematic and images of the circuitry of a mouthguard
  • Figure 21 illustrates various occlusal patterns and corresponding characterization
  • Figure 22 illustrates a schematic of machine learning models and associated output
  • Figure 23 illustrates keyboard functions associated with a mouthguard and a graph of conversion of signals to keyboard functions
  • the mouthguards include mechanoluminescence-powered mouthguards.
  • the mouthguards are designed to assist individuals with dexterity impairments or neurodegenerative conditions to wirelessly communicate with electronic devices.
  • the electronic devices may include computers, smart electronics or wheelchairs.
  • the mouthguards sense occlusion of teeth (biting actions with respect to the mouthguard) and generate electronic signals based on the sensed occlusion.
  • the electronic signals may be transmitted to a computing device that processes the electronic signals to determine occlusal pattern signals.
  • the occlusal pattern signals may be mapped to specific instructions or may be assigned a specific meaning in relation to HMI operations.
  • the occlusal pattern signals may be identified using machine learning.
  • the mouthguards could be integrated with miniaturized force-feedback actuators, high-precision bionic prosthetics, and human-robot interactive teleoperation etc.
  • the mouthguard comprises mechanoluminescence-powered distributed optical fiber (mp-DOF).
  • Provided in the mp-DOF are sensors (pressure sensors) composed of an elastic optical waveguide embedded with pressure-responsive luminescent phosphors.
  • the mp-DOF enables accurate detection of distributed pressure without the need for external light sources.
  • the mp-DOF may be elastic, meaning it is suitable for or capable of repeated stretching followed by contracting under resilient bias, without plastic deformation.
  • luminescence-based pressure sensors of the disclosed mouthguards are less susceptible to electromagnetic interference.
  • Mechanoresponsive phosphors are embedded in a transparent elastomer to form mp-DOF sensors. Integration of these sensors into a flexible mouthguard creates a bite-controlled optoelectronic system that can be used to communicate with electronic devices. Deformation of the elastomeric mouthguard at specific segments upon biting can be determined by measuring the intensity ratios of different color emissions in the fiber system. By utilizing unique patterns of occlusal contacts (bite patterns) various forms of mechanical deformation can be easily distinguished by ratiometric luminescence measurements. Processing the electronic signals generated by the mouthguard using machine learning makes it possible to translate complex bite patterns into specific data inputs with 98% accuracy.
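As a rough illustration of the ratiometric readout described above, the following Python sketch integrates a recorded spectrum over two emission bands and takes their ratio. The band limits, spectrum format, and synthetic data are illustrative assumptions rather than values taken from this disclosure.

```python
import numpy as np

# Hypothetical integration bands (nm) around the green (ZnS:Cu2+, ~525 nm)
# and red (ZnS:Cu2+/Mn2+, ~590 nm) emissions; the exact limits are assumed.
GREEN_BAND = (500, 550)
RED_BAND = (565, 615)

def band_intensity(wavelengths, counts, band):
    """Integrate spectrometer counts over one wavelength band."""
    lo, hi = band
    mask = (wavelengths >= lo) & (wavelengths <= hi)
    return np.trapz(counts[mask], wavelengths[mask])

def green_to_red_ratio(wavelengths, counts):
    """Ratiometric readout: the G/R ratio is insensitive to overall intensity drift."""
    g = band_intensity(wavelengths, counts, GREEN_BAND)
    r = band_intensity(wavelengths, counts, RED_BAND)
    return g / r if r > 0 else float("inf")

# Synthetic example spectrum: two Gaussian emission peaks.
wl = np.linspace(400, 700, 601)
spectrum = (800 * np.exp(-((wl - 525) / 15) ** 2)
            + 400 * np.exp(-((wl - 590) / 15) ** 2))
print(f"G/R ratio: {green_to_red_ratio(wl, spectrum):.2f}")
```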
  • the mouthguard provides an effective assistive technology for individuals with dexterity or movement impairments.
  • Human teeth are among the most sensitive and robust organs.
  • Compared with previous interactive technologies, including voice recognition, eye-blink tracking, head-movement tracking, and brain-computer interfaces, use of occlusal patterns as the control source of HMI by the embodiments provides a more flexible and robust solution for people with disabilities.
  • the mechanoluminescence-powered interactive mouthguard has low cost and low application environment requirements. It has high sensitivity and high precision for people with disabilities compared with other modern methods.

Mouthguard Design
  • Figure 1 illustrates a design of a mouthguard 100 comprising an optical fiber/waveguide 110.
  • the optical fiber 110 comprises mp-DOF sensors 112 or sensor regions 112 or mechanoluminescent pads 112.
  • the sensors 112 are collectively referred to as an array of occlusal contact sensors.
  • Figure 1A is a schematic of a mp-DOF comprising an elastomeric waveguide and embedded mechanoluminescent pads (ZnS:Cu2+/Mn2+, ZnS:Cu2+, and ZnS:Cu+ phosphors).
  • Figure 1B is a schematic of the mp-DOF integrated into a mouthguard.
  • Each mechanoluminescent pad generates light of a specific color/wavelength when subjected to pressure.
  • Four of the patterns (patterns 1, 2, 3 and 4) are displayed in Figure 1B. Dashed lines represent the typical contour of the upper teeth of an adult.
  • the four patterns are also represented in the plot 150 mapped to specific wavelengths associated with the respective sensors. For each pattern, fibers 1 and 2 generate unique optical spectra.
  • the 21 complex occlusal patterns can be correctly identified according to the input CIE tristimulus values (X1, Y1, Z1, X2, Y2, Z2) of the two fibers.
  • the occlusal contact sensors 112 need not be symmetrically or uniformly disposed in the sensor array. In some embodiments, positions of the occlusal contact sensors within the sensor array may be adapted to suit a particular mouth of an intended user of the mouthguard. The array may therefore involve regular spacing between sensors, irregular spacing, random spacing or another spacing scheme. In other embodiments, a different number or a different orientation/position of sensors may be provided in the mouthguard 100. With a different number/orientation of sensors, a larger or smaller number of occlusal patterns may be identified to suit the requirements of the users of the interactive mouthguard or any devices that may be integrated with the mouthguard. The occlusal contact sensors can detect distributed pressure accurately without the need for external light sources.
  • the pressure on the occlusal contact sensor includes pressure from impact of one or more than one tooth on a surface of the occlusal contact sensor or a substrate covering the occlusal contact sensor.
  • force applied by a tooth in one of the upper and lower jaws will be resisted by a tooth in the other jaw. Therefore, the occlusal contact sensor is placed under bite pressure between opposing teeth - i.e. one or more teeth in opposing jaws.
  • pressure from contact between the occlusal contact sensor and one or more teeth from an upper jaw may be sufficient for producing a signal to enable human machine interaction.
  • Pressure from contact between the occlusal contact sensor and one or more tooth from a lower jaw may be sufficient for producing a signal to enable human machine interaction.
  • Some embodiments may be suitable for use by individuals without teeth.
  • occlusal contact may refer to contact between regions of the gums of an individual with the mp-DOF, or pressure applied by opposing gums.
  • the mouthguard 100 also comprises a printed circuit board 140, and a mechanically flexible polyethylene terephthalate substrate 130 as illustrated in Figure 1B.
  • Tristimulus values include values indicative of a specific color detected by the detectors 120.
  • the detectors 120 receive light generated by the sensors 112 and transform the detected light into representative electrical signals.
  • the representative electrical signals are processed by the circuit 140 that facilitates transmission of the electrical signals wirelessly to other electronic devices.
  • the colours depend on the nature of the sensors 112 and may include the primary colours red, green and blue.
  • Chart 150 illustrates an example of output detected by the detectors 120. Chart 150 is a three-dimensional graph, with the three dimensions being color/wavelength, time and fiber 1/2.
  • the output may also include bite intensity information indicative of how strong a bite was. The bite intensity information may be generated based on the intensity of the light generated by the sensors 112.
  • Some embodiments include an optical fiber with various transition-metal-doped ZnS phosphors embedded at several predefined locations (Fig. 1A). Red, green, and blue mechanoluminescent emissions could be modulated by doping Cu2+/Mn2+ (590 nm), Cu2+ (525 nm), and Cu+ (475 nm), respectively, into the ZnS host (Figures 5 and 6). Beyond the primary colors, secondary colors could be produced by varying the composition of the three types of phosphors. When distributed compression was applied externally to the fiber, the mechanoresponsive phosphors emitted light at different wavelengths, which was propagated along the fiber by total internal reflection. This design principle was employed to fabricate the mouthguard that comprised an integrated mp-DOF array, a flexible printed circuit board, and a flexible polyethylene terephthalate substrate (Fig. 1B).
  • Figure 2 illustrates mp-DOF-based sensors in various configurations and their associated characterizations.
  • Figure 2A is an exemplary schematic of the mp-DOF sensor in three different configurations: a single-layer mp-DOF 210, a 4 x 4 single-layer mp-DOF array to sense deformation 220, and a double-layer mp-DOF to distinguish multiple deformations 230.
  • Figure 2B illustrates exemplary doping concentrations of mechanoluminescent materials versus the luminous intensity of the optical fiber.
  • Figure 2C illustrates force sensitivity of the single-layer mp-DOF according to some embodiments.
  • Figure 2D illustrates results of a dynamic test of the mp-DOF at frequencies of 1, 2, and 3 Hz.
  • Figures 2E and 2F are optical photographs of a single-layer 4 x 4 mp-DOF array and the interface compression, recorded under 40 N compression through one-dimensional projection (indicated by a dashed line in Figure 2E).
  • Figure 2G illustrates optical power output (left) and corresponding spectra (right) of the double-layer mp-DOF upon compression, stretching, and bending.
  • the insets are schematics of the double-layer mp-DOF in different deformation modes.
  • Figure 6 illustrates the mechanism and chromaticity of ZnS:M mechanoluminescence according to some embodiments.
  • Figure 6A illustrates a mechanistic diagram of ZnS:M mechanoluminescence according to some embodiments.
  • CB: conduction band
  • Figure 6B illustrates mechanoluminescence spectra of the phosphors under study: red emission (590 nm, ZnS:Cu2+/Mn2+@Al2O3), green emission (525 nm, ZnS:Cu2+@Al2O3), and blue emission (475 nm, ZnS:Cu+@Al2O3) according to some embodiments.
  • Figure 6C illustrates a mechanoluminescence waveguide setup according to some embodiments. ZnS:M@Al2O3 microparticles are dispersed into a soft silica gel film, which is covered with a transparent elastomer to transmit the luminescence.
  • the mouthguard can detect a variety of occlusal patterns.
  • Different occlusal trajectories give rise to mechanoluminescence with specific color schemes, and color sensors record the CIE tristimulus values.
  • a trained two-layer, feed-forward artificial neural network differentiates the 21 complex occlusal patterns in terms of tristimulus values (X1, Y1, Z1, X2, Y2, Z2). Each of the distinct occlusal patterns may correspond to a specific occlusal pattern signal for human machine interaction.
  • Figure 7 illustrates dynamic pattern analysis of occlusal contacts using mechanoluminescent pads.
  • Figure 7A illustrates a schematic of upper and lower tooth contours of an adult with normal occlusion.
  • Figure 7B illustrates different occlusal patterns (top) and corresponding spectra (bottom) according to some embodiments.
  • the X, Y, Z tristimulus values of the color sensor were converted into CIE xyz color space.
  • the real-time chromaticity response x/y was normalized in five comfortable bite patterns that could be distinguished with each fiber (Fig. 7B).
  • the plotted chromaticity response data of the five occlusal patterns in the force range of 5-50N demonstrated that pattern identification can be achieved by comparing variation across chromaticity space (Fig. 7C).
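The conversion from the color sensor's XYZ tristimulus values to CIE xy chromaticity used in this analysis is the standard CIE normalization, x = X/(X+Y+Z) and y = Y/(X+Y+Z). A minimal sketch, where the sensor readings themselves are placeholder values:

```python
def xyz_to_xy(X, Y, Z):
    """Convert CIE XYZ tristimulus values to xy chromaticity coordinates.

    z follows as 1 - x - y, so only x and y are returned.
    """
    total = X + Y + Z
    if total == 0:
        return 0.0, 0.0  # no light detected
    return X / total, Y / total

# Illustrative tristimulus readings from the two fibers (placeholder values).
fiber1 = xyz_to_xy(1200, 950, 400)
fiber2 = xyz_to_xy(300, 800, 250)
print(f"fiber 1 chromaticity: x={fiber1[0]:.3f}, y={fiber1[1]:.3f}")
print(f"fiber 2 chromaticity: x={fiber2[0]:.3f}, y={fiber2[1]:.3f}")
```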
  • 14 occlusal patterns for interactive bite-controlled operation could be employed as illustrated in Fig. 21.
  • Machine learning was applied to recognize the occlusion pattern based on the output spectra of 14 occlusal patterns (Fig. 3D).
  • an artificial neural network is used to process complex patterns owing to its advantages of precision, accuracy and robustness.
  • the trained feed-forward artificial neural network recognized 14 patterns with an accuracy greater than 98% as illustrated in Fig. 22.
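A minimal sketch of the kind of two-layer, feed-forward classifier described above, built with scikit-learn. The hidden-layer size, training data, and labels are placeholders rather than the network or dataset reported here; with random placeholder data the printed accuracy will be near chance, whereas recorded bite-pattern measurements would be used in practice.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Each sample holds the six tristimulus values from the two fibers:
# (X1, Y1, Z1, X2, Y2, Z2). Random placeholder data stand in for
# recorded bite-pattern measurements.
rng = np.random.default_rng(0)
n_patterns = 14
X = rng.uniform(0, 1000, size=(1400, 6))
y = rng.integers(0, n_patterns, size=1400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

# A small feed-forward network with one hidden layer (two weight layers);
# the hidden size is an assumption.
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2%}")

# Classify a new bite: six tristimulus values -> occlusal pattern index.
new_reading = np.array([[820, 640, 210, 130, 450, 380]])
print("predicted occlusal pattern:", int(clf.predict(new_reading)[0]))
```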
  • Figure 3E illustrates the relationship between the recognition accuracy and the bite position offset d.
  • the classification accuracy reached 100% when biting the center of the middle pad and decreased as the biting position moved away from the center.
  • Figure 21 illustrates different occlusal patterns and corresponding tristimulus values. According to permutation and combination theory, there are 20 possible combinations using two mp-DOFs.
  • Figure 22 illustrates a comparison of results processed by threshold evaluation, decision tree (DT), support vector machine (SVM), and artificial neural network (ANN).
  • Figure 22A is a schematic of DT, SVM, and ANN algorithms.
  • Figure 22B illustrates classification results processed using threshold evaluation, where the rectangles represent the boundary lines. The classification accuracy was 89.0%.
  • Figure 22C illustrates classification results processed by the ANN, SVM and DT methods, including confusion matrices (left) and scatter plots (right). The classification accuracy was 98.4, 98.0 and 95.1%, respectively.
  • a participant was instructed to launch a specific user interface by bite-controlled cursor movements, such as opening a web browser or music streaming application, and the control accuracy reached 98.3%. Cursor movement distance could be accurately controlled by adjusting the amplitude of the bite force as illustrated in Fig. 3H.
  • the mapping between function specifications and bite patterns is illustrated in Fig. 23, in which the magnitude of the bite force determined the movement steps.
  • the participant controlled the bite interface to accurately type letter keys "LOVE" and number keys "3.14" using the virtual keyboard in less than 32 s.
  • the average typing speed is 22 characters per minute, which is comparable to the typing speed (13-30 characters per minute) of standard brain-computer interface technologies.
  • a 2 x 3 single-layer mp-DOF sensor array was integrated with six compression points into a three-dimensional printed soft mouthguard (Fig. 3A).
  • the right lateral, medial, and left lateral compression points of the mp-DOF were covered, respectively, with ZnS:Cu2+/Mn2+ (585 nm emission), ZnS:Cu2+/Mn2+ - ZnS:Cu2+ (585 and 525 nm emissions of equal intensity), and ZnS:Cu2+ (525 nm emission) particles.
  • Two high-sensitivity color sensor chips (detection limit: 0.5 μW/cm2) were placed at the end of the mp-DOF and connected to a flexible processing circuit. With a thickness of 80 μm, the circuit could be bent effectively, fitting the mouthguard well.
  • the total circuit of the interactive mouthguard system included color sensor chips, a Bluetooth 5.0 SoC chip, and power management chips (Figs. 19 and 20). The weight of the entire system was less than 3 g.
  • the circuit size was 9 x 12 mm and the total power consumption was 9.09 mW.
  • the mp-DOFs were fabricated by embedding mechanoluminescent phosphors (ZnS:M) into an elastic matrix to form a waveguide.
  • the light collection efficiency depended critically on the shape and size of the fibers.
  • Total internal reflection in optical fibers results from the difference in the refractive index between core and cladding materials.
  • total internal reflection occurs when the incident angle of the incident light to the core boundary is larger than the critical angle, which is expressed as θc = arcsin(n1/n2), where θc is the critical angle, and n2 and n1 are the refractive indices of the core and cladding materials, respectively. Only those emissions at an incident angle larger than θc can be transmitted through the fiber by total internal reflection.
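For a concrete sense of scale, the sketch below evaluates θc = arcsin(n1/n2) for a pair of illustrative refractive indices; the disclosure does not list index values for the core and cladding elastomers, so the numbers here are assumptions only.

```python
import math

def critical_angle_deg(n_core, n_cladding):
    """Critical angle for total internal reflection: theta_c = arcsin(n_cladding / n_core).

    Only rays striking the core boundary at an angle larger than theta_c
    are guided along the fiber by total internal reflection.
    """
    if n_cladding >= n_core:
        raise ValueError("total internal reflection requires n_core > n_cladding")
    return math.degrees(math.asin(n_cladding / n_core))

# Illustrative values only; not taken from the disclosure.
n_core, n_cladding = 1.43, 1.41
print(f"critical angle ~ {critical_angle_deg(n_core, n_cladding):.1f} degrees")
```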
  • the embodiments include optimized designs of the mp-DOF sensors in various configurations, where mechanoluminescent materials are integrated into transparent optical waveguides.
  • the structure, key dimensions, and composition of materials for mp-DOFs and the associated waveguide were studied.
  • Figs. 8 to 11 illustrate the data and schematics of the various studies. Considering light collection efficiency and ease of processing, each mechanoluminescent pad was optimized with dimensions of 5 x 3 x 0.5 mm, aligned parallel to one another along the fiber axis. The dimensions of the fiber were 36 x 5 x 2.5 mm. Replica molding was employed to fabricate the mp-DOF sensors.
  • Figs. 12 and 13 illustrate parts of the process of replica moulding.
  • the luminescence behavior of the mechanoluminescent pads with ZnS:Cu2+/Mn2+ particles of different concentrations was investigated (Fig. 2B).
  • the force sensitivity of the mp-DOF was tested by plotting the integral intensity versus the external force, and the intensity increased linearly in the range of 5-60 N (Fig. 2C and Fig. 14).
  • the force sensitivity defined by the curve slope was 20 counts/N (integration time: 0.25 s; Fig. 2C), which was sufficient to identify bite patterns.
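Given the stated slope of 20 counts/N over the 5-60 N linear range, a measured integrated intensity can be mapped back to an approximate bite force. The baseline offset in this sketch is a hypothetical parameter:

```python
SENSITIVITY = 20.0           # counts per newton (slope of the linear fit, 0.25 s integration)
LINEAR_RANGE = (5.0, 60.0)   # newtons

def estimate_force(integrated_counts, baseline_counts=0.0):
    """Invert the linear intensity-force relation to estimate bite force.

    baseline_counts (dark/ambient offset) is a hypothetical parameter; the
    estimate is only meaningful inside the calibrated linear range.
    """
    force = (integrated_counts - baseline_counts) / SENSITIVITY
    in_range = LINEAR_RANGE[0] <= force <= LINEAR_RANGE[1]
    return force, in_range

force, ok = estimate_force(640.0)
print(f"estimated force: {force:.1f} N (within linear range: {ok})")
```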
  • the dynamic characteristics of the mp-DOF were investigated by compressing the optical fiber at various frequencies (1, 2, and 3 Hz), demonstrating highly stable and reproducible signal output (Fig. 2D).
  • the robustness of the mp-DOF was tested by applying compression over 2,000 cycles, and the intensity difference was within 5.1% (Figs. 15 and 16).
  • the temperature characteristics of the mp-DOF were also examined, and its luminescence fluctuation remained within 3.7% from 20 °C to 80 °C.
  • the performance of both single-layer and double-layer mp-DOF sensors was characterized.
  • the single-layer sensor measured distributed mechanical force.
  • the double-layer sensor distinguished different force modes, such as stretching, compression and bending. Compression maps were obtained using a 4 x 4 single-layer mp-DOF array under different force patterns (Figs. 2E and 2F, and Fig. 17). Stretching, compression and bending tests were performed to evaluate the stress distribution and light propagation of the double-layer mp-DOF sensor. As the mechanoluminescent pads embedded in the upper and lower fibers emitted light at different wavelengths, the mode and magnitude of the applied force could be distinguished spectroscopically.
  • the detected power P of fiber 1 was larger than, approximately equal to, and smaller than that of fiber 2 in the compression, stretching, and bending modes, respectively. Therefore, combining these two fibers in a double-layer configuration yielded substantial changes in spectral output in the three deformation modes as illustrated in Fig. 2G.
  • Embodiments comprising the double-layer configuration provide additional capability to distinguish stretching, bending and compression actions on the mouthguard through the distinct spectral outputs associated with those actions.
  • the capability to distinguish stretching, bending and compression actions may serve as distinct occlusal patterns that can be used as specific signals to communicate with an electronic device using the mouthguard.
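A minimal decision rule for the discrimination described above, based only on the relative optical powers detected at the two fibers of the double-layer mp-DOF; the tolerance used to declare the two powers approximately equal is an assumed threshold.

```python
def deformation_mode(p_fiber1, p_fiber2, rel_tol=0.15):
    """Classify the deformation mode of a double-layer mp-DOF.

    Fiber 1 power is larger than, roughly equal to, or smaller than fiber 2
    power under compression, stretching, and bending, respectively.
    rel_tol is an assumed relative tolerance.
    """
    mean_p = (p_fiber1 + p_fiber2) / 2
    if mean_p == 0:
        return "no deformation"
    if abs(p_fiber1 - p_fiber2) <= rel_tol * mean_p:
        return "stretching"
    return "compression" if p_fiber1 > p_fiber2 else "bending"

print(deformation_mode(3.2, 1.1))   # -> compression
print(deformation_mode(2.0, 2.1))   # -> stretching
print(deformation_mode(0.9, 2.6))   # -> bending
```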
  • Figure 8 illustrates a comparison of optical fiber designs with different geometries.
  • Figure 8B illustrates three-dimensional structures of a positively tapered fiber (top, Si < Se), a square fiber (middle), and a negatively tapered fiber (bottom, Si > Se), all with embedded mechanoluminescent pads according to some embodiments.
  • Figure 8C illustrates an output light intensity versus cone angle, indicating that the square optical fiber has an advantage in light collection efficiency. Results were obtained by COMSOL Ray optics simulation. A sensor with an area of 1 x 1 mm was placed in the center of the fiber end. The transmittance of the luminescent pads was 5%.
  • Figure 9 illustrates an imaging analysis of a mechanoluminescence-powered distributed optical fiber (mp-DOF).
  • Figure 9A illustrates an exemplary equivalent model for imaging.
  • (I) presents a 2D geometrically equivalent model of waveguide imaging.
  • the reflected ray BC is originally emitted from point Ar of the real light source, and ray BC is equivalent to the virtual ray A1C. Points A1 and Ar are symmetrical about the top plane of the optical fiber.
  • ray CD is equivalent to emission from the virtual light source A2. Multiple reflections generate multiple virtual sources.
  • (II) presents the mapping of angular coordinates onto the virtual source plane (plane I).
  • Figure 9B illustrates the virtual source plane, with each source labelled by its indices (i, j) (e.g., the real source is (0, 0)).
  • the power of point P is actually the integral of the power density of the light source in the blue circle with a radius of lB·tan(θc) centered on point P, which is approximately the area of all small rectangles within the blue circle.
  • Len, W1, and H1 are the length, width, and thickness of the luminescent pads, respectively.
  • Ld is the interval between the pads.
  • W2 and H2 are the differences in the center coordinates between the luminescent pad and waveguide in the lateral and horizontal projections, respectively.
  • the mp-DOF of some embodiments incorporates a rectangular waveguide (width W x height H x length L) embedded with a rectangular mechanoluminescent pad (W1 x H1 x Len).
  • the location of the luminescent pad in the waveguide is denoted as (W2, H2), where W2 and H2 are differences in the center coordinates between the luminescent pad and waveguide in the width and thickness directions, respectively.
  • the spacing between the luminescence cores is denoted as Ld.
  • the disclosure next covers a theoretical analysis of light collection efficiency, which depends on the size of the fibers.
  • in the two-dimensional equivalent model, the pad length Len is taken to be 0.
  • any ray that emanates from the real source and is reflected by a surface is geometrically equivalent to an undeviated ray from a virtual source.
  • a virtual source is identical to the real source, and they are symmetrical about the reflected surface. Multiple reflections off all four fiber walls generate a two- dimensional array of virtual sources in the input plane.
  • the determination of illuminance at a given position P in the output plane of the fiber is equivalent to the process of tracing the ray back to the virtual light source array in the input plane.
  • the power (Pp) at point P is actually the integral of the power density of the virtual light sources in a circle with a radius of lB·tan(θc) centered at point P; that is, Pp = ∬ S(r, φ) r dr dφ, integrated over the circle r ≤ lB·tan(θc) (Eq. 2), where lB is the distance from the virtual light source plane to the output plane, (r, φ) are polar coordinates in the input plane, and S(r, φ) is the power density function of the virtual light source array in the circle.
  • a two-dimensional virtual light source array is composed of many large rectangles with a size of W x H x L whose sides are adjacent. Inside each large rectangle is a small rectangle with a size of W1 x H1 x Len.
  • Eq. (2) is approximately the area of all the small rectangles within the circle.
  • Each virtual source in the array is indicated by a pair of indices (m, n), and Eq. (2) can be written as a sum of the corresponding integrals over the individual virtual sources (m, n) lying within the circle (Eq. 3), where (x, y) are the coordinates of the virtual light source plane, (x0, y0) are the coordinates of the output plane, and θ is the incident angle of each ray. Considering that the length of each luminescent pad is Len, the power Pp from the three-dimensional luminescent pad is obtained by further integrating over the length coordinate z, from 0 to Len, where z is the coordinate in the length direction. Eq. (3) is applicable to every luminescent pad in the mp-DOFs.
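The virtual-source picture behind Eqs. (2) and (3) can be sketched numerically: tile the input plane with mirror images of the real luminescent pad, then sum the pad area that falls inside the acceptance circle of radius lB·tan(θc) around an output point. The sketch below is a simplified illustration under assumed values of θc, lB, and the pad offset, not the ray-optics simulation used in this disclosure.

```python
import math

# Geometry in millimetres; pad and waveguide sizes follow this disclosure,
# while the pad offset, theta_c, and l_B are illustrative assumptions.
W, H = 5.0, 2.5        # waveguide width and thickness
W1, H1 = 3.0, 0.5      # luminescent pad width and thickness
W2, H2 = 0.0, -1.0     # pad offset from the waveguide centre (assumed)
THETA_C = math.radians(80)   # assumed critical angle
L_B = 18.0                   # assumed distance from source plane to output plane

def collected_power(x0, y0, n_mirrors=6, grid=40):
    """Approximate Eq. (2): integrate a unit power density over the circle of
    radius l_B*tan(theta_c) centred on the output point (x0, y0), where the
    source plane is tiled with mirror images of the real pad."""
    radius = L_B * math.tan(THETA_C)
    power = 0.0
    for i in range(-n_mirrors, n_mirrors + 1):       # mirrors across the width walls
        for j in range(-n_mirrors, n_mirrors + 1):   # mirrors across the thickness walls
            # Centre of the (i, j)-th virtual pad; mirroring flips the offset sign.
            cx = i * W + (W2 if i % 2 == 0 else -W2)
            cy = j * H + (H2 if j % 2 == 0 else -H2)
            # Sample the pad area on a grid and keep points inside the circle.
            for u in range(grid):
                for v in range(grid):
                    x = cx - W1 / 2 + (u + 0.5) * W1 / grid
                    y = cy - H1 / 2 + (v + 0.5) * H1 / grid
                    if (x - x0) ** 2 + (y - y0) ** 2 <= radius ** 2:
                        power += (W1 / grid) * (H1 / grid)
    return power

print(f"relative collected power at the fiber centre: {collected_power(0.0, 0.0):.1f}")
```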
  • the relationship between the size of the mp-DOFs and the light collection efficiency was analysed using COMSOL Multiphysics.
  • the multiphysics model consisted of ray optics and solid-state physics.
  • Si: cross-sectional area
  • the light collection efficiency is defined as the ratio of the power (Pe) at the exit end to the power (Ps) at the luminescent source, Pe/Ps, where φ is the collection angle.
  • Figure 10 illustrates simulations of light collection efficiency with fibers of different sizes.
  • Len, W1, and H1 denote the length, width, and thickness of the luminescent pads, respectively.
  • Ld represents the interval between the pads.
  • W2 and H2 denote the differences in the center coordinates between the luminescent pad and waveguide in the lateral and horizontal projections, respectively.
  • Figure 10A illustrates intensity under different Ld and Len.
  • Figure 10B illustrates intensity under different W2 and H2.
  • Figure 10C illustrates intensity versus width W1 under different W2.
  • Figure 10D illustrates intensity versus location H2. Young's modulus of the luminescent pad was 1 MPa and that of the optical fiber was 0.5-4 MPa.
  • Figure 11 illustrates Young's modulus of waveguide materials versus mechanoluminescence, i.e., the relationship between the absolute value of the first principal stress applied to the luminescent pad and Young's modulus of the waveguide material.
  • when Mguide < Mpad, i.e., within the range from the coordinate zero point to the inflection point of each curve in the figure, the first principal stress is positive.
  • the luminescent pad mainly experiences tensile principal stress.
  • when Mguide > Mpad, the first principal stress is negative, and the pad experiences compressive principal stress.
  • Figure 13 illustrates optical and electron images of mechanoluminescence-powered distributed optical fibers according to some embodiments.
  • Figure 13A illustrates an optical image of the waveguide comprising the mechanoluminescent pad.
  • Figure 13B illustrates a scanning electron microscopy image of the waveguide and luminescent pad, indicating that ZnS:M@Al2O3 microparticles were well dispersed in the silica gel matrix.
  • because the elastomeric materials used for the mp-DOFs must efficiently transmit force and light simultaneously, their mechanical and optical properties must be considered.
  • the criteria for selecting materials for waveguide production include a large difference in refractive index between the core and cladding and high transparency.
  • the mechanical properties of the elastomers used in the mechanoluminescent pad and waveguide must be considered simultaneously.
  • Young's modulus of the elastomeric materials used in the mechanoluminescent pad and waveguide are denoted as Mpad and Mguide, respectively.
  • the luminous intensity of the luminescent pad is proportional to the average first principal stress applied to it. Therefore, the average first principal stress on the luminescent pad when compression (500 kPa) was applied to the waveguide with different Young's moduli was simulated and the results were analysed. The results indicated that the greater the difference between Mpad and Mguide, the greater the average first principal stress applied to the luminescent pad. Since a greatly deformed waveguide with a small Mguide under mechanical force causes severe optical attenuation, an elastomeric material with a larger Mguide was selected to construct the optical fiber.
  • the mp-DOF was a 36-mm rectangular fiber with a cross section of 2.5 mm (height) by 5 mm (width).
  • ZnS:M phosphors were doped into silica gel (parts A and B were mixed at a 10:1 ratio) for fabrication of the mechanoluminescent pad, and the mass ratio of ZnS:M to silica gel was 8:2.
  • the optical fiber was fabricated by a simple molding process. First, the luminescent pad was produced by injecting evenly mixed ZnS:M phosphors and silica gel into a rectangular bar (5 x 3 x 0.5 mm) through a syringe and leaving it to thermally cure at 80 °C for approximately 30 min.
  • the mixed silicone LS-6946 (parts A and B were mixed at a 10:1 ratio) was applied over the luminescent pad to form half of the transparent waveguide. Then, demolding of the solidified fiber was performed and the other half of the transparent waveguide was cast using silicone LS-6946. The waveguide was thermally cured at 70 °C for approximately 30 min. Finally, two coating steps were employed.
  • the first was to cover the surface damage of the waveguide during demolding using silicone LS-6946, while the second was to fabricate the cladding of the optical fiber by coating polydimethylsiloxane (PDMS; Sylgard 184, parts A and B were mixed at a 10:1 ratio) on the fiber and curing it in a 70 °C oven for 30 min.
  • the thickness of the cladding was 200 μm.
  • Figure 3 illustrates a technical evaluation of mechanoluminescence-powered distributed optical fiber (mp-DOF)-integrated interactive mouthguard according to some embodiments.
  • Figure 3A illustrates an experimental setup for mechanoluminescence stimulation with a wireless, interactive mouthguard comprising multicolor mechanoluminescent pads and a flexible circuit module.
  • Figure 3B illustrates a normalized real-time chromaticity response in x/y coordinates, derived from five occlusal trajectories comfortable to users. The insets are corresponding photographs of the mp-DOF under five force patterns.
  • Figure 3C illustrates chromaticity response of five occlusal patterns under different forces (5-50 N).
  • Figure 3D illustrates classification results of 14 occlusal patterns detected by a 2 x 3 mp-DOF array using machine learning algorithms.
  • Figure 3E illustrates different bite positions versus classification accuracy upon biting the yellow-emitting mechanoluminescent pad of some embodiments. The inset is a schematic of the bite positions.
  • Figure 3F illustrates classification accuracy for different 2 x 3 mp- DOF arrays.
  • Figure 3G illustrates classification accuracy of bite patterns for random users.
  • Figure 3H illustrates linear relationship between the mouse cursor movement distance and biting force, recorded when controlling a mouse cursor.
  • Figure 12 illustrates a part of a fabrication procedure of mechanoluminescence- powered distributed optical fibers according to some embodiments.
  • ZnS:M phosphors were doped into silica gel (monomer/curing agent, 10:1) for fabrication of the mechanoluminescent pad, and the mass ratio of ZnS:M to silica gel was 8:2.
  • the fabrication process involves provision of the mechanoluminescent pads 1220 on a mould 1210.
  • a first half of a waveguide is applied to the pads at step 1230 followed by removal of the mould at step 1240 to obtain the pads provided on the half waveguide at step 1250.
  • the other half of the waveguide is applied to obtain the occlusal contact sensor array.
  • the sensor array is further subjected to steps 1270 (dip coating), 1272 (gravity spin coating) and 1274 (thermal curing). This is followed by steps 1276 (dip coating), 1278 (gravity spin coating) and 1279 (thermal curing).
  • a z-axis stage was used to repeatedly press a force gauge onto the mp-DOF with a force of 5-30 N in 5-N increments.
  • the real-time force was measured by the force gauge.
  • the light emission was collected with a spectrometer (Ocean Optics QEpro).
  • in a fiber optic spectrometer, the light intensity is obtained by integration over time and is proportional to the exposure time.
  • the force response was tested with an integration time of 128 ms.
  • Figure 14 illustrates an experimental setup for optical fiber characterization.
  • Figure 14A illustrates a force response test.
  • a spectrometer (Ocean Optics QEPro) equipped with an optical fiber was used to record the spectral distribution and mechanoluminescence intensity.
  • a force dynamometer (HP-100, Handpi Instruments Co, Ltd., China) was applied to record the force in real time.
  • Figure 14B illustrates a cyclic extension test. During the experiment, two motorized stages stretched the fiber repeatedly, and the spectrometer recorded the luminescence intensity.
  • Figure 15 illustrates an optical characterization of mechanoluminescence- powered distributed optical fibers according to some embodiments.
  • Figure 15A illustrates luminescence intensity versus applied force.
  • Figure 15B illustrates luminescence intensity at different temperatures.
  • Figure 15C illustrates luminescence response to applied pulse force.
  • Figure 15D illustrates magnified luminescence response for one selected peak indicated by the dotted box in C.
  • Figure 15E illustrates intensity ratio of green to red (G/R) emissions, plotted over 2,000 stretching cycles.
  • Figure 15F illustrates the intensity of green emission plotted over 2,000 stretching cycles.
  • Figure 16 illustrates scanning electron microscopy images of luminescent pad (A) before and (B) after stretching over 2,000 cycles.
  • Figure 17 illustrates compression maps measured with a 4x4 mechanoluminescence-powered distributed optical fiber array under different force patterns. Scale bars are 4 mm.
  • Figure 17A illustrates photographs of the 4x4 fiber array taken under various conditions (left to right: no pressing, pressing the green-emitting pad, and pressing the red-emitting pad).
  • Figures 17B and 17C illustrate recorded compression maps under different force levels through 1D and 2D projections, respectively.
  • Figure 18 illustrates mechanical and optical properties of a double-layer mechanoluminescence-powered distributed optical fiber of some embodiments under compression, stretching, and bending, respectively.
  • Figure 18A illustrates the first principal stress distribution mapping, and the average first principal stress E versus the compressive, tensile, and bending forces.
  • Figure 18B is a ray tracing diagram, and the light intensity detected by the sensor S versus the compressive, tensile, and bending forces.
  • the setup for the temperature response test was the same as that used for the force response test, except that the mp-DOF was placed on a heating plate with a digital display. A compression force of 15 N was applied to the green-emitting pad, and the temperature was increased from 20°C to 50°C in 10°C increments.
  • the response time of the mp-DOF was explored using a z-axis stage to repeatedly and quickly press the force gauge onto the fiber.
  • a fiber optic spectrometer and a force gauge were used to record the luminescence and force, respectively.
  • the integration time of the spectrometer was 50 ms.

Cyclic extension test
  • the mechanism of a multifunctional soft sensor composed of a double-layer mp- DOF was simulated using COMSOL Multiphysics, which indicated that the sensor was capable of distinguishing the deformation modes of compression, stretching and bending.
  • the simulated structure included two layers of mp-DOFs, separated by a layer of opaque medium with Young's modulus of 0.5 MPa.
  • the light transmittance of the luminescent pad was 5%, and Young's moduli of the luminescent pad and waveguide were 2 and 1.5 MPa, respectively.
  • the size of the luminescent pad was 0.5 x 3 x 6 mm, and the length of the optical fiber was 16 mm.
  • a double-layer mp-DOF was pressed using the same setup and configuration as in the force response measurements described in Section III. Two color sensors connected to the fibers measured the light intensity of each fiber. The spectrometer was placed in the middle of the double-layer mp-DOF end to measure the spectrum.

Stretching response test
  • a double-layer mp-DOF was stretched on a customized linear translation stage, similar to the setup used in the cyclic extension test described in Section III. One end of the fiber was fixed, while the other end was connected to a reciprocating linear translation stage.
  • the working voltage was 3.0 V.
  • the working current of the Bluetooth transceiver was 3 mA, and the working time was 10 ms/s.
  • the total average power consumption of the interactive mouthguard under the normal operating condition was calculated to be 9.09 mW.
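The stated power budget can be reproduced with simple arithmetic if one assumes (this breakdown is not stated in the disclosure) that the always-on portion of the circuit draws about 3 mA continuously while the Bluetooth transceiver is duty-cycled at 10 ms per second:

```python
V = 3.0            # working voltage (V)
I_BLE = 3e-3       # Bluetooth transceiver current (A)
DUTY_BLE = 10e-3   # transceiver active time per second (10 ms/s)

# Assumed continuous draw of the color sensors and power management chips
# (not stated explicitly in the disclosure).
I_ALWAYS_ON = 3e-3

p_ble = V * I_BLE * DUTY_BLE     # duty-cycled transceiver: ~0.09 mW average
p_always_on = V * I_ALWAYS_ON    # continuous draw: 9 mW
total_mw = (p_ble + p_always_on) * 1e3
print(f"average power: {total_mw:.2f} mW")   # ~9.09 mW, matching the stated figure
```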
  • TPU: thermoplastic polyurethane
  • a convex plate was provided at the upper position of the corresponding luminescent pad so that users could more flexibly and accurately determine the bite position.
  • the classification accuracy of threshold evaluation was 89.0%. Threshold evaluation is relatively simple; however, it has low accuracy and is difficult to apply to flexible interactions. Moreover, it is difficult to set the threshold directly and independently, making machine learning a necessity. With accuracy rates of 98.4% and 98%, respectively, the ANN and SVM had advantages, and the ANN was selected as the classification algorithm for prototype development.
  • FIG 19 illustrates a block diagram comprising some components of a mouthguard 1900.
  • Mouthguard 1900 comprises two mp-DOF sensors 1910, 1920.
  • Each sensor 1910 comprises an analog-to-digital converter (ADC) associated with each color of light emitted by the light-emitting pads of the mp-DOF.
  • ADC: analog-to-digital converter
  • Each ADC transforms the analogue light intensity data into a digital signal representative of the light intensity of a particular color emitted by the occlusal sensor array of the mouthguard.
  • the mouthguard also comprises a voltage regulator 1930 that regulates the operation of the sensors and the electronics provided on an integrated circuit 1940 that forms part of the mouthguard.
  • the IC 1940 comprises communication buses I2C 1 and I2C 2 to receive signals from the sensors 1910 and 1920.
  • the IC 1940 also comprises a general purpose input output component 1945 and an antenna 1948.
  • the antenna may be a Bluetooth antenna suitable for transmitting output of the IC 1940 to a computing device 1960.
  • the computing device 1960 may be part of an electronic device that is the intended target of the human machine interaction operations. Alternatively, the computing device 1960 may be an intermediary device that receives signals from the mouthguard 1900 and transmits the processed signals (occlusal pattern signals) to an electronic device that is the intended target of the human machine interaction.
  • the computing device 1960 comprises a machine learning model 1965 that processes the signals received from the IC 1940 to generate the occlusal pattern signals.
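A minimal sketch of the receiving side: unpack the six tristimulus values from one wireless packet and hand them to a trained classifier such as the one sketched earlier. The 12-byte little-endian packet layout is a hypothetical format, not one specified in the disclosure.

```python
import struct

def parse_tristimulus_packet(payload: bytes):
    """Unpack one notification from the mouthguard into (X1, Y1, Z1, X2, Y2, Z2).

    The layout (six little-endian unsigned 16-bit words) is hypothetical.
    """
    if len(payload) != 12:
        raise ValueError("expected 12 bytes: six 16-bit tristimulus values")
    return struct.unpack("<6H", payload)

def occlusal_pattern(payload: bytes, classifier):
    """Map one packet to an occlusal pattern signal using a trained model."""
    features = [list(parse_tristimulus_packet(payload))]
    return int(classifier.predict(features)[0])

# Example payload: X1=820, Y1=640, Z1=210, X2=130, Y2=450, Z2=380.
packet = struct.pack("<6H", 820, 640, 210, 130, 450, 380)
print(parse_tristimulus_packet(packet))
```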
  • Figure 20 illustrates circuit schematic and characteristics of some embodiments.
  • Figure 20A illustrates schematic of the circuit designed for the interactive mouthguard.
  • Figure 20B is an image of the flexible printed circuit board according to some embodiments. The circuit, with a thickness of 80 μm, can be bent effectively and fits the mouthguard well.
  • Figure 20C illustrates thermal images of the circuit board after operation for various times. Circuit-induced heating was less than 1°C after operation for 90 min.
  • Figure 20D illustrates electromagnetic radiation intensity of the circuit, which is equivalent to that of mobile phones.
  • Figure 4 illustrates application of the mouthguard for assistive technologies including operation of a smartphone, playing the piano and controlling a wheelchair.
  • Figure 4A illustrates a conceptual sketch of a mouthguard integrated with a wheelchair, computer, a smartphone etc.
  • Figure 4B illustrates a mapping of the signals generated by a mouthguard to specific actions associated with phone calls using a custom mobile application.
  • Figure 4C illustrates a mapping of signals generated by the mouthguard with specific piano playing operations.
  • Figure 4D illustrates mapping of signals generated by the mouthguard with wheelchair navigation operations.
  • Figure 4E illustrates monitoring wheelchair navigation around a standard 400 m running track (five repeated runs) by Global Positioning System (GPS) with a scale bar of 40 m.
  • GPS: Global Positioning System
  • the signals generated by the mouthguard may be differently mapped for different human machine interaction applications.
  • the specific tristimulus values generated by each fiber with its set of occlusal contact sensors may be mapped to a specific action in a human machine interaction application as exemplified in Figure 4.
  • the up, down, left, right, left-click and right-click functions of a mouse corresponded to bite patterns 3, 8, 6, 10, 1, and 11, respectively.
  • the navigation distance was defined according to the amplitude of the bite force and the luminous intensity.
  • the interactive mouthguard was used to open, use, and close a web browser.
  • the up, down, left, right, enter, and switch between alphabetic and numeric keyboard functions corresponded to bite patterns 3, 8, 6, 10, 1, and 5, respectively.
  • a word document was created, and then the word "LOVE" and the number "3.14" were repeatedly written to the word document. The duration to complete the input of all characters did not exceed 32 s (the duration was from the time the participant activated the keyboard until the last character was entered).
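The pattern-to-function assignments listed above can be captured in simple lookup tables; the dispatch helper and the force-to-steps gain below are illustrative additions rather than part of the disclosed implementation.

```python
# Bite-pattern indices mapped to mouse and virtual-keyboard functions,
# as listed above. The dispatch helper is illustrative only.
MOUSE_MAP = {3: "up", 8: "down", 6: "left", 10: "right",
             1: "left-click", 11: "right-click"}

KEYBOARD_MAP = {3: "up", 8: "down", 6: "left", 10: "right",
                1: "enter", 5: "switch alpha/numeric"}

def dispatch(pattern_id, bite_force_n, mode="mouse", steps_per_newton=2):
    """Translate a classified occlusal pattern into an interface action.

    steps_per_newton is a hypothetical gain reflecting the observation that
    movement distance scales with the amplitude of the bite force.
    """
    mapping = MOUSE_MAP if mode == "mouse" else KEYBOARD_MAP
    action = mapping.get(pattern_id)
    if action is None:
        return None
    return {"action": action, "steps": int(bite_force_n * steps_per_newton)}

print(dispatch(3, bite_force_n=12.0))                   # move up, force-scaled distance
print(dispatch(5, bite_force_n=8.0, mode="keyboard"))   # toggle keyboard layout
```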
  • Figure 23 demonstrates keyboard function according to some embodiments.
  • the top of the figure illustrates the correspondence between functions and classification numbers in Figure 21.
  • the middle panel displays the navigation trajectory when typing letter keys "LOVE" and number keys "3.14".
  • Biting occlusal pattern 5 switches between the numeric and alphabetic keyboards.
  • the bottom of the figure plots the corresponding output signal of the color sensors. Red, green and blue curves represent the tristimulus values X, Y and Z, respectively.

Designing piano keyboard
  • a piano keyboard with a total of 14 keys was designed to operate with the interactive mouthguard.
  • the song, "Happy Birthday” containing seven notes, could be played using the designed piano keyboard.
  • Bite patterns 5, 3, 1, 10, 8, 6, and 11 corresponded to musical notes E, F, G, A, B, C, and D, respectively.
  • the sensor When subjected to biting, the sensor captured the input signal and performed classification, and then directed the piano keyboard to play the corresponding note.
  • the wheelchair used was a standard electrical wheelchair (W5517, Inuovo Co, Ltd., Germany).
  • Bite patterns 3, 8, 10, and 6 corresponded to the forward, backward, turning left and turning right functions, respectively, and bite pattern 5 controlled switching between starting and braking.
  • Wheelchair control was achieved using the Bluetooth interface.
  • occlusal pattern data were first transmitted to the computer via Bluetooth chips. After the data were classified and processed, the corresponding action command was sent to the wheelchair via another Bluetooth chip.
  • the circular and figure-8 complex trajectories of the wheelchair controlled by biting demonstrated the flexibility of the system, and five tests on the same trajectory revealed the stability of the system.
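A skeleton of the relay loop described above, in which occlusal pattern data are classified on the computer and forwarded to the wheelchair over a second Bluetooth link; the three callables are hypothetical interfaces standing in for the Bluetooth transports and the trained classifier.

```python
# Bite-pattern indices mapped to wheelchair commands, as listed above;
# the receive/classify/send helpers are placeholders for the two Bluetooth
# links and the trained classifier.
WHEELCHAIR_MAP = {3: "forward", 8: "backward", 10: "turn_left",
                  6: "turn_right", 5: "toggle_start_brake"}

def relay_loop(receive_packet, classify, send_command):
    """Forward classified occlusal patterns to the wheelchair.

    receive_packet(): reads one tristimulus packet from the mouthguard link.
    classify(packet): returns an occlusal pattern index (e.g., the ANN above).
    send_command(cmd): writes a command string on the wheelchair link.
    """
    while True:
        packet = receive_packet()
        if packet is None:            # link closed
            break
        pattern = classify(packet)
        command = WHEELCHAIR_MAP.get(pattern)
        if command is not None:
            send_command(command)
```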
  • a smartphone application was designed to operate with the interactive mouthguard and provide a user-friendly interface for data display and collection.
  • the user should first put on the interactive mouthguard and open the App installed on the smartphone, and then establish a secure Bluetooth connection between the App and mouthguard.
  • the App can receive and display the data stream from the mouthguard in real time.
  • the App is capable of plotting a graph of these data streams versus time during the user's physical activity. Data and graphs can be stored on the device, uploaded to cloud servers online, and shared via social media. Additionally, with the App, users can make emergency calls through different bite patterns.
  • the current implementation was programmed in the Android environment, and similar application interfaces can be easily developed for other popular operating systems, such as iOS.
  • Figure 24 illustrates a mobile application developed for data display and collection.
  • Figure 24A is a homepage of the application after Bluetooth pairing.
  • Figure 24B is a real-time display of data from the interactive mouthguard.
  • Figure 24C illustrates real-time data progression of an individual sensor.

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Public Health (AREA)
  • Physics & Mathematics (AREA)
  • Biomedical Technology (AREA)
  • Veterinary Medicine (AREA)
  • General Physics & Mathematics (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Vascular Medicine (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Biophysics (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Dentistry (AREA)
  • Epidemiology (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Dental Tools And Instruments Or Auxiliary Dental Instruments (AREA)

Abstract

Mouthguards and systems for enabling human machine interaction based on occlusal contact. Embodiments include an occlusal contact sensor, and an optical waveguide. The occlusal contact sensor emits light of a designated color responsive to occlusal contact and the optical waveguide propagates light emitted by the occlusal contact sensor to a color sensor. The color sensor generates electrical signals responsive to color of the received light for human-machine interaction.

Description

Mouthguard System for Human Machine Interaction
Technical Field
The present disclosure relates, in general terms, to mouthguards, systems and methods for human machine interaction.
Background
The World Health Organization (WHO) estimates that there are about 1 billion people with disabilities in the world. About 3% of the population is suffering from severe disabilities. About 110 to 190 million people cannot live independently. Existing assistive technologies include voice recognition, eye-blink tracking, head-movement tracking, and brain-computer interfaces, etc. These technologies have supported the daily lives of many paralyzed people and improved their independence. However, the help provided is limited and the degree of empowerment provided by such technologies is still far less than the capabilities of able-bodied people. These assistive technologies are often discarded due to poor flexibility, low reliability, environmental barriers, difficult equipment maintenance, high prices, and changes in user functional capabilities.
Operation of computers, smart electronics and wheelchairs is essential for individuals with dexterity impairments or neurological conditions. Overcoming communication barriers is a significant step toward self-determination, increased independence, and active interaction with the environment. Existing assistive technologies, including voice recognition, eye blink tracking, head movement tracking, tongue drive interaction, and brain-computer interfaces, have increased the functional capabilities of individuals with special needs and improved their quality of life.
However, modern assistive technologies have problems in the use and maintenance of equipment. For example, voice recognition depends upon large operating memory, up-to-date hardware, and low-noise operation. The use of eye tracking requires a camera to be mounted in front of users, with prolonged calibration that is prone to fatigue. Furthermore, head movement tracking excludes users with neurological dysfunction. Although recent advances in brain-computer interfaces have significantly improved typing speed, this technology requires electrodes attached to the scalp for electroencephalography, implanted under the skull for electrocorticography, or placed in the brain for neural recording, resulting in cumbersome cabled instruments and a high degree of invasiveness.
It would be desirable to overcome or ameliorate at least one of the above-described problems associated with existing assistive technologies, or at least to provide a useful alternative.
Summary
Disclosed herein is a system for human machine interaction, the system comprising: a mouthguard; a computing device in communication with the mouthguard; wherein the mouthguard comprises: an array of occlusal contact sensors, wherein each occlusal contact sensor emits light of a designated color responsive to occlusal contact; an optical waveguide for propagating light emitted by each occlusal contact sensor to a color sensor, wherein the color sensor generates electrical signals responsive to color of the received light; an antenna for wirelessly transmitting the electrical signals generated by the color sensor; wherein the computing device is configured to process received electrical signals to determine an occlusal pattern signal.
Disclosed herein is a mouthguard for human-machine interaction, comprising: an array of occlusal contact sensors, wherein each occlusal contact sensor emits light of a designated color responsive to occlusal contact; an optical waveguide for propagating light emitted by each occlusal contact sensor to a color sensor, wherein the color sensor generates electrical signals responsive to color of the received light, wherein the electrical signals encode part of an occlusal pattern signal for human-machine interaction; an antenna for wirelessly transmitting the electrical signals generated by the color sensor.
Disclosed herein is a method of human-machine interaction, the method comprising: generating light at an array of occlusal contact sensors in response to occlusal contact, wherein each occlusal contact sensor emits light of a designated color responsive to occlusal contact; generating electrical signals by a color sensor responsive to the light generated by the array of occlusal contact sensors; processing the generated electrical signals by an electronic circuit to generate an occlusal pattern signal; performing human-machine interaction based on the occlusal pattern signal. In some embodiments, the method further comprises transmitting the occlusal pattern signal to a computing device for human-machine interaction.
Brief description of the drawings
Embodiments of the present disclosure will now be described, by way of non- limiting example, with reference to the drawings in which:
Figure 1 illustrates a schematic diagram of a mouthguard, its associated components, schematic diagrams of bite patterns and associated graphs;
Figure 2 illustrates schematic diagrams of sensor arrays and associated characterization graphs;
Figure 3 illustrates a schematic diagram of a mouthguard and associated sensor arrays;
Figure 4 illustrates schematic diagrams of the integration of the mouthguard for human machine interaction applications;
Figure 5 illustrates characterization of mechanoluminescent phosphors used in the occlusal sensors of the mouthguard;
Figure 6 illustrates a mechanism of mechanoluminescence implemented by the occlusal sensors;
Figure 7 illustrates bite/occlusal contact pattern analysis using the mouthguard;
Figure 8 illustrates optical fiber designs for a waveguide of a mouthguard;
Figure 9 illustrates optical data analysis using sensors of a mouthguard;
Figure 12 illustrates images of a part of the fabrication process of the sensor array;
Figure 13 illustrates optical and electron images of optical fiber and sensors of the mouthguard;
Figure 14 illustrates an experimental setup for optical fiber characterization;
Figure 15 illustrates graphs of optical characterization of optical fibers of the mouthguard;
Figure 16 illustrates electron microscope images of sensors before and after stretch cycles;
Figure 19 illustrates a block diagram comprising some components of a mouthguard;
Figure 20 illustrates a circuit schematic and images of the circuitry of a mouthguard;
Figure 21 illustrates various occlusal patterns and corresponding characterization;
Figure 22 illustrates a schematic of machine learning models and associated output;
Figure 23 illustrates keyboard functions associated with a mouthguard and a graph of conversion of signals to keyboard functions;
Detailed description
People with disabilities face difficulties in living independently because modern assistive technologies have several drawbacks. The disclosure provides mouthguards that enable communication or human machine interaction (HMI) by sensing occlusal contact. The mouthguards of the disclosure address the challenges faced by people with disabilities in using conventional assistive technologies, or at least provide an alternative.
The mouthguards include mechanoluminescence-powered mouthguards. The mouthguards are designed to assist individuals with dexterity impairments or neurodegenerative conditions to wirelessly communicate with electronic devices. The electronic devices may include computers, smart electronics or wheelchairs. The mouthguards sense occlusion of teeth (biting actions with respect to the mouthguard) and generate electronic signals based on the sensed occlusion. The electronic signals may be transmitted to a computing device that processes the electronic signals to determine occlusal pattern signals. The occlusal pattern signals may be mapped to specific instructions or may be assigned a specific meaning in relation to HMI operations. The occlusal pattern signals may be identified using machine learning. The mouthguards could be integrated with miniaturized force-feedback actuators, high-precision bionic prosthetics, and human-robot interactive teleoperation. The mouthguard comprises a mechanoluminescence-powered distributed optical fiber (mp-DOF). Provided in the mp-DOF are sensors (pressure sensors) composed of an elastic optical waveguide embedded with pressure-responsive luminescent phosphors. The mp-DOF enables accurate detection of distributed pressure without the need for external light sources. The mp-DOF may be elastic, meaning it is suitable for or capable of repeated stretching followed by contracting under resilient bias, without plastic deformation. Unlike conventional electronic pressure sensors, the luminescence-based pressure sensors of the disclosed mouthguards are less susceptible to electromagnetic interference.
Mechanoresponsive phosphors are embedded in a transparent elastomer to form mp-DOF sensors. Integration of these sensors into a flexible mouthguard creates a bite-controlled optoelectronic system that can be used to communicate with electronic devices. Deformation of the elastomeric mouthguard at specific segments upon biting can be determined by measuring the intensity ratios of different color emissions in the fiber system. By utilizing unique patterns of occlusal contacts (bite patterns) various forms of mechanical deformation can be easily distinguished by ratiometric luminescence measurements. Processing the electronic signals generated by the mouthguard using machine learning makes it possible to translate complex bite patterns into specific data inputs with 98% accuracy.
The mouthguard provides an effective assistive technology for individuals with dexterity or movement impairments. Human teeth are among the most sensitive and robust organs. Compared with existing interactive technologies, including voice recognition, eye-blink tracking, head-movement tracking, and brain-computer interfaces, use of occlusal patterns as the control source of HMI by the embodiments provides a more flexible and robust solution for people with disabilities. The mechanoluminescence-powered interactive mouthguard has low cost and low application environment requirements. It offers high sensitivity and high precision for people with disabilities compared with other modern methods.
Mouthguard Design
Figure 1 illustrates a design of a mouthguard 100 comprising an optical fiber/waveguide 110. The optical fiber 110 comprises mp-DOF sensors 112 or sensor regions 112 or mechanoluminescent pads 112. The sensors 112 are collectively referred to as an array of occlusal contact sensors.
Figure 1A is a schematic of a mp-DOF comprising an elastomeric waveguide and embedded mechanoluminescent pads (ZnS:Cu2+/Mn2+, ZnS:Cu2+, and ZnS:Cu+ phosphors). Figure 1B is a schematic of the mp-DOF integrated into a mouthguard. A 2 x 3 mp-DOF array comprising two sets of tricolor mechanoluminescent pads can theoretically generate 21 distinct patterns upon biting one or two luminescent pads (6C1 + 6C2 = 21). Each mechanoluminescent pad generates light of a specific color/wavelength when subjected to pressure. Four of the patterns (patterns 1, 2, 3 and 4) are displayed in Figure 1B. Dashed lines represent the typical contour of the upper teeth of an adult. The four patterns are also represented in the plot 150, mapped to specific wavelengths associated with the respective sensors. For each pattern, fibers 1 and 2 generate unique optical spectra.
Using machine learning, the 21 complex occlusal patterns can be correctly identified according to the input CIE tristimulus values (X1, Y1, Z1, X2, Y2, Z2) of the two fibers. For display purposes, the scatter plot 160 has been drawn with coordinates (ΔX = X1 - X2, ΔY = Y1 - Y2, ΔZ = Z1 - Z2).
The occlusal contact sensors 112 need not be symmetrically or uniformly disposed in the sensor array. In some embodiments, positions of the occlusal contact sensors within the sensor array may be adapted to suit a particular mouth of an intended user of the mouthguard. The array may therefore involve regular spacing between sensors, irregular spacing, random spacing or another spacing scheme. In other embodiments, a different number or a different orientation/position of sensors may be provided in the mouthguard 100. With a different number/orientation of sensors, a larger or smaller number of occlusal patterns may be identified to suit the requirements of the users of the interactive mouthguard or any devices that may be integrated with the mouthguard. The occlusal contact sensors can detect distributed pressure accurately without the need for external light sources. The pressure on the occlusal contact sensor includes pressure from impact of one or more than one tooth on a surface of the occlusal contact sensor or a substrate covering the occlusal contact sensor. In general, force applied by a tooth in one of the upper and lower jaws will be resisted by a tooth in the other jaw. Therefore, the occlusal contact sensor is placed under bite pressure between opposing teeth - i.e. one or more teeth in opposing jaws. In some embodiments, pressure from contact between the occlusal contact sensor and one or more teeth from an upper jaw may be sufficient for producing a signal to enable human machine interaction. Pressure from contact between the occlusal contact sensor and one or more teeth from a lower jaw may be sufficient for producing a signal to enable human machine interaction. Some embodiments may be suitable for use by individuals without teeth. In such embodiments, occlusal contact may refer to contact between regions of an individual's gums and the mp-DOF, or pressure applied by opposing gums.
Phosphors provided in the sensors 112 at different positions emit light of different wavelengths when subjected to forces. In some embodiments, ZnS:Cu2+/Mn2+ phosphors are used as the mechanoluminescent phosphors. Red, green and blue mechanoluminescent emissions can be modulated by doping Cu2+/Mn2+ (590 nm), Cu2+ (525 nm), and Cu+ (475 nm), respectively, into the ZnS host. When distributed pressure is applied externally to the fiber, these mechano-responsive phosphors emit light with different wavelengths, which is propagated along the fiber through total internal reflection, as described with reference to Figures 8A and 9. The mouthguard 100 also comprises a printed circuit board 140 and a mechanically flexible polyethylene terephthalate substrate 130, as illustrated in Figure 1B.
The mp-DOF may be in the form of a continuous sensor wherein a segment or region of the mp-DOF emits light of a particular color based on a position at which pressure is applied relative to the sensor. For example, a length of the mp-DOF may be mapped to a continuous frequency spectrum of light, and contact at a particular point or region in the mp-DOF may generate light of a frequency within the continuous frequency spectrum depending on the point or region of contact. Different occlusal trajectories give rise to mechanoluminescence with specific color cues, and color sensors/detectors 120 record CIE tristimulus values. The color cues may correspond to a particular frequency or wavelength of light emitted by the mp-DOF. Other array sizes or configurations may be provided. Sensors may be spaced at regular intervals or predetermined intervals with respect to a user's teeth, or may be randomly placed.
Tristimulus values include values indicative of a specific color detected by the detectors 120. The detectors 120 receive light generated by the sensors 112 and transform the detected light into representative electrical signals. The representative electrical signals are processed by the circuit 140 that facilitates transmission of the electrical signals wirelessly to other electronic devices. The colors depend on the nature of the sensors 112 and may include the primary colors red, green and blue. Chart 150 illustrates an example of output detected by the detectors 120. Chart 150 is a three-dimensional graph, with the three dimensions being color/wavelength, time and fiber 1/2. The output may also include bite intensity information indicative of how strong a bite was. The bite intensity information may be generated based on the intensity of the light generated by the sensors 112.
A machine learning model (ML model) may be employed to process the data generated by the detectors and detect occlusal patterns associated with the bites. The ML model may include a trained two-layer, feed-forward artificial neural network to differentiate the occlusal patterns in terms of tristimulus values. The interactive mouthguard can be used to interact with computers and phones, control wheelchairs, and play the piano. The results show that the accuracy of each command recognition can reach up to 98%.
Some embodiments include an optical fiber with various transition-metal-doped ZnS phosphors embedded at several predefined locations (Fig. 1A). Red, green, and blue mechanoluminescent emissions could be modulated by doping Cu2+/Mn2+ (590 nm), Cu2+ (525 nm), and Cu+ (475 nm), respectively, into the ZnS host (Figures 5 and 6). Beyond the primary colors, secondary colors could be produced by varying the composition of the three types of phosphors. When distributed compression was applied externally to the fiber, the mechanoresponsive phosphors emitted light at different wavelengths, which was propagated along the fiber by total internal reflection. This design principle was employed to fabricate the mouthguard that comprised an integrated mp-DOF array, a flexible printed circuit board, and a flexible polyethylene terephthalate substrate (Fig. 1B).
Figure 2 illustrates mp-DOF-based sensors in various configurations and their associated characterizations. Figure 2A is an exemplary schematic of the mp-DOF sensor in three different configurations: a single-layer mp-DOF 210, a 4 x 4 single-layer mp-DOF array to sense deformation 220, and a double-layer mp-DOF to distinguish multiple deformations 230. Figure 2B illustrates exemplary doping concentrations of mechanoluminescent materials versus the luminous intensity of the optical fiber. Figure 2C illustrates force sensitivity of the single-layer mp-DOF according to some embodiments.
Figure 2D illustrates results of a dynamic test of the mp-DOF at frequencies of 1, 2, and 3 Hz. Figures 2E and 2F are Optical photographs of a single-layer 4 x 4 mp-DOF array and the interface compression, recorded under 40 N compression through one-dimensional projection (indicated by a dashed line in Figure 2E). Figure 2G illustrates optical power output (left) and corresponding spectra (right) of the double-layer mp-DOF upon compression, stretching, and bending. The insets are schematics of the double-layer mp-DOF in different deformation modes.
Figure 5 illustrates a characterization of mechanoluminescent phosphors. Figures 5A, 5B and 5C illustrate scanning electron microscopy images of Al2O3-coated ZnS:M (M = Cu2+ or Mn2+) microparticles (10-30 μm). Figure 5D illustrates energy dispersive X-ray spectroscopy of ZnS@Al2O3 samples. Figure 5E illustrates powder X-ray diffraction patterns of ZnS:M microparticles. The scale bars for Figures 5A-5D are 20 μm.
Figure 6 illustrates a mechanism and chromaticity of ZnS:M mechanoluminescence according to some embodiments. Figure 6A illustrates a mechanistic diagram of ZnS:M mechanoluminescence according to some embodiments. Under compression, piezoelectric polarization charges are induced within ZnS, and the potential produced by the charges effectively distorts the bands, thus facilitating electron detrapping to the conduction band (CB) of ZnS:M. Nonradiative recombination between detrapped electrons and holes then occurs, which leads to mechanoluminescence. Figure 6B illustrates mechanoluminescence spectra of the phosphors under study: red emission (590 nm, ZnS:Cu2+/Mn2+@Al2O3), green emission (525 nm, ZnS:Cu2+@Al2O3), and blue emission (475 nm, ZnS:Cu+@Al2O3) according to some embodiments. Figure 6C illustrates a mechanoluminescence waveguide setup according to some embodiments. ZnS:M@Al2O3 microparticles are dispersed into a soft silica gel film, which is covered with a transparent elastomer to transmit the luminescence.
Distinguishing Occlusal Patterns
The mouthguard can detect a variety of occlusal patterns. For example, a 2 x 3 mp-DOF array comprising two sets of tricolor mechanoluminescent pads can theoretically generate 21 distinct patterns by biting one or two luminescent pads (6C1 + 6C2 = 21) (Fig. 7). If more than two pads are bitten at a time, more patterns will be generated; for example, biting up to three pads gives 6C1 + 6C2 + 6C3 = 41. Different occlusal trajectories give rise to mechanoluminescence with specific color schemes, and color sensors record the CIE tristimulus values. A trained two-layer, feed-forward artificial neural network differentiates the 21 complex occlusal patterns in terms of tristimulus values (X1, Y1, Z1, X2, Y2, Z2). Each of the distinct occlusal patterns may correspond to a specific occlusal pattern signal for human machine interaction.
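As a quick check of the pattern counts stated above, the binomial coefficients can be evaluated directly; the short Python sketch below is purely illustrative and is not part of the disclosed system.

from math import comb

PADS = 6  # 2 x 3 mp-DOF array of mechanoluminescent pads

# Biting one or two pads at a time: 6C1 + 6C2 = 21 distinct patterns
one_or_two = comb(PADS, 1) + comb(PADS, 2)

# Allowing up to three pads per bite adds 6C3, giving 41 patterns
up_to_three = one_or_two + comb(PADS, 3)

print(one_or_two, up_to_three)  # 21 41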
Figure 7 illustrates dynamic pattern analysis of occlusal contacts using mechanoluminescent pads. Figure 7A illustrates a schematic of upper and lower tooth contours of an adult with normal occlusion. Figure 7B illustrates different occlusal patterns (top) and corresponding spectra (bottom) according to some embodiments.
To distinguish occlusal patterns, the X, Y, Z tristimulus values of the color sensor were converted into CIE xyz color space. The real-time chromaticity response x/y was normalized in five comfortable bite patterns that could be distinguished with each fiber (Fig. 3B). The plotted chromaticity response data of the five occlusal patterns in the force range of 5-50 N demonstrated that pattern identification can be achieved by comparing variation across chromaticity space (Fig. 3C). In view of the fact that five patterns can be accurately distinguished using one fiber, it is possible to differentiate 20 patterns using two fibers. Considering the adaptive comfort of users, 14 occlusal patterns for interactive bite-controlled operation could be employed, as illustrated in Fig. 21. Machine learning was applied to recognize the occlusion pattern based on the output spectra of the 14 occlusal patterns (Fig. 3D).
Compared with machine learning algorithms such as decision trees and support vector machines, an artificial neural network is used to process complex patterns owing to its advantages of precision, accuracy and robustness. The trained feed-forward artificial neural network recognized 14 patterns with an accuracy greater than 98% as illustrated in Fig. 22. Figure 3E illustrates the relationship between the recognition accuracy and the bite position offset d. The classification accuracy reached 100% when biting the center of the middle pad and decreased as the biting position moved away from the center.
Within a distance of 8 mm, the accuracy remained greater than 75% as illustrated within the light blue shaded area in Fig. 3E. To further evaluate the accuracy of the system, eight pairs of mp-DOFs were tested, and all had a classification accuracy greater than 97% (Fig. 3F). The interactive system according to the embodiments was tested by eight participants, and the recognition accuracy of each interactive command remained higher than 96.5% (Fig. 3G).
Figure 21 illustrates different occlusal patterns and corresponding tristimulus values. According to permutation and combination theory, there are 20 possible combinations using two mp-DOFs. Figure 22 illustrates a comparison of results processed by threshold evaluation, decision tree (DT), support vector machine (SVM), and artificial neural network (ANN). Figure 22A is a schematic of the DT, SVM, and ANN algorithms. Figure 22B illustrates classification results processed using threshold evaluation, where the rectangles represent the boundary lines. The classification accuracy was 89.0%. Figure 22C illustrates classification results processed by the ANN, SVM and DT methods, including confusion matrices (left) and scatter plots (right). The classification accuracy was 98.4, 98.0 and 95.1%, respectively.
Mechanoluminescence intensity
The detectors (color sensors) of some mouthguards provide data relating to the intensity of a bite in addition to the chromaticity. While the chromaticity is a function of the location of a bite on the sensor array, the intensity of light generated is a function of the intensity of the bite. The luminance and emission intensity detected by the detector provide an additional dimension for performing human machine interaction using the mouthguard. Although the chromaticity of the mechanoluminescent pads remained unchanged under different forces, the mechanoluminescence intensity is proportional to the magnitude of the force. This feedback endowed the mp-DOF sensors with an additional input variable beyond chromaticity, as demonstrated by controlling the mouse cursor movement and typing with an on-screen keyboard in Figure 23. In experiments, a participant was instructed to launch a specific user interface by bite-controlled cursor movements, such as opening a web browser or music streaming application, and the control accuracy reached 98.3%. Cursor movement distance could be accurately controlled by adjusting the amplitude of the bite force, as illustrated in Fig. 3H. For interaction with the keyboard, the mapping between keyboard functions and bite patterns is illustrated in Fig. 23, in which the magnitude of the bite force determined the movement steps. The participant controlled the bite interface to accurately type letter keys "LOVE" and number keys "3.14" using the virtual keyboard in less than 32 s. The average typing speed is 22 characters per minute, which is comparable to the typing speed (13-30 characters per minute) of standard brain-computer interface technologies.
Mouthguard Fabrication
To construct a mechanoluminescence-assisted interactive mouthguard, a 2 x 3 single-layer mp-DOF sensor array was integrated with six compression points into a three-dimensional printed soft mouthguard (Fig. 3A). The right lateral, medial, and left lateral compression points of the mp-DOF were covered, respectively, with ZnS:Cu2+/Mn2+ (585 nm emission), ZnS:Cu2+/Mn2+ - ZnS:Cu2+ (585 and 525 nm emissions of equal intensity), and ZnS:Cu2+ (525 nm emission) particles. Two high-sensitivity color sensor chips (detection limit: 0.5 pW/cm2) were placed at the end of the mp-DOF and connected to a flexible processing circuit. With a thickness of 80 μm, the circuit could be bent effectively, fitting the mouthguard well. The total circuit of the interactive mouthguard system included color sensor chips, a Bluetooth 5.0 SoC chip, and power management chips (Figs. 19 and 20). The weight of the entire system was less than 3 g. The circuit size was 9 x 12 mm and the total power consumption was 9.09 mW.
The mp-DOFs were fabricated by embedding mechanoluminescent phosphors (ZnS:M) into an elastic matrix to form a waveguide. The light collection efficiency depended critically on the shape and size of the fibers.
Total internal reflection in optical fibers results from the difference in refractive index between the core and cladding materials. When light travels from an optically dense medium to a less dense medium, total internal reflection occurs when the incident angle of the incident light at the core boundary is larger than the critical angle, which is expressed as follows:
sin φc = n1 / n2
where φc is the critical angle, and n2 and n1 are the refractive indices of the core and cladding materials, respectively. Only those emissions at an incident angle larger than φc can be transmitted through the fiber by total internal reflection.
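For illustration, the critical-angle condition can be evaluated numerically. The refractive indices below are hypothetical placeholders for an elastomeric core and PDMS cladding, not values taken from the disclosure.

import math

n_core = 1.45      # assumed refractive index of the elastomeric core (hypothetical)
n_cladding = 1.41  # assumed refractive index of the PDMS cladding (hypothetical)

# Total internal reflection condition: sin(phi_c) = n_cladding / n_core
phi_c = math.degrees(math.asin(n_cladding / n_core))
print(f"critical angle = {phi_c:.1f} deg")  # ~76.5 deg; rays incident above this angle are guided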
Light collection efficiency of optical waveguide
The embodiments include optimized designs of the mp-DOF sensors in various configurations, where mechanoluminescent materials are integrated into transparent optical waveguides. To ensure that the detectors 120 collect the maximum optical power at one end of the fibers under low force, the structure, key dimensions, and composition of materials for mp-DOFs and the associated waveguide were studied. Figs. 8 to 11 illustrate the data and schematics of the various studies. Considering light collection efficiency and ease of processing, each mechanoluminescent pad was optimized with dimensions of 5 x 3 x 0.5 mm, aligned parallel to one another along the fiber axis. The dimensions of the fiber were 36 x 5 x 2.5 mm. Replica molding was employed to fabricate the mp-DOF sensors. Figs. 12 and 13 illustrate parts of the process of replica moulding.
The luminescence behavior of the mechanoluminescent pads with ZnS:Cu2+/Mn2+ particles of different concentrations was investigated (Fig. 2B). The force sensitivity of the mp-DOF was tested by plotting the integral intensity versus the external force, and the intensity increased linearly in the range of 5-60 N (Fig. 2C and Fig. 14). The force sensitivity defined by the curve slope was 20 counts/N (integration time: 0.25 s; Fig. 2C), which was sufficient to identify bite patterns. Moreover, the dynamic characteristics of the mp-DOF were investigated by compressing the optical fiber at various frequencies (1, 2, and 3 Hz), demonstrating highly stable and reproducible signal output (Fig. 2D). The robustness of the mp-DOF was tested by applying compression over 2,000 cycles, and the intensity difference was within 5.1% (Figs. 15 and 16). The temperature characteristics of the mp-DOF were also examined, and its luminescence fluctuation remained within 3.7% from 20 °C to 80 °C.
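Given the reported calibration (a slope of about 20 counts/N over the 5-60 N linear range at 0.25 s integration), the bite force can be estimated from the integrated luminescence counts. The helper below is a hypothetical sketch of that conversion, not code from the disclosure.

SLOPE_COUNTS_PER_N = 20.0      # reported force sensitivity (0.25 s integration time)
LINEAR_RANGE_N = (5.0, 60.0)   # range over which the response was reported to be linear

def estimate_force(integrated_counts: float, baseline_counts: float = 0.0) -> float:
    """Estimate bite force (N) from background-subtracted luminescence counts."""
    force = (integrated_counts - baseline_counts) / SLOPE_COUNTS_PER_N
    lo, hi = LINEAR_RANGE_N
    if not (lo <= force <= hi):
        pass  # outside the calibrated linear range; treat the value as indicative only
    return force

print(estimate_force(600))  # ~30 N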
The performance of both single-layer and double-layer mp-DOF sensors was characterized. The single-layer sensor measured distributed mechanical force. The double-layer sensor distinguished different force modes, such as stretching, compression and bending. Compression maps were obtained using a 4 x 4 single-layer mp-DOF array under different force patterns (Figs. 2E and 2F, and Fig. 17). Stretching, compression and bending tests were performed to evaluate the stress distribution and light propagation of the double-layer mp-DOF sensor. As the mechanoluminescent pads embedded in the upper and lower fibers emitted light at different wavelengths, the mode and magnitude of the applied force could be distinguished spectroscopically. The optical power P at the end of each fiber is given by the formula, P = aE2S, where a is the proportionality factor, E is the first principal stress value of the mechanoluminescent pad, and S is the light transmission efficiency of the optical waveguide (Fig. 18). The detected power P of fiber 1 was larger than, approximately equal to, and smaller than that of fiber 2 in the compression, stretching, and bending modes, respectively. Therefore, combining these two fibers in a double-layer configuration yielded substantial changes in spectral output in the three deformation modes as illustrated in Fig. 2G. Embodiments comprising the double-layer configuration provide additional capability to distinguish stretching, bending and compression actions on the mouthguard through the distinct spectral outputs associated with those actions. The capability to distinguish stretching, bending and compression actions may serve as distinct occlusal patterns that can be used as specific signals to communicate with an electronic device using the mouthguard.
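The double-layer readout logic summarized above (P1 > P2 for compression, P1 approximately equal to P2 for stretching, P1 < P2 for bending) reduces to a simple comparison; the relative tolerance below is an assumed value for illustration only.

def classify_deformation(p1: float, p2: float, rel_tol: float = 0.1) -> str:
    """Classify the deformation mode of a double-layer mp-DOF from the optical
    powers P1 (upper fiber) and P2 (lower fiber) measured at the fiber ends."""
    if abs(p1 - p2) <= rel_tol * max(p1, p2, 1e-12):
        return "stretching"                      # P1 approximately equal to P2
    return "compression" if p1 > p2 else "bending"

print(classify_deformation(1.8, 1.0))   # compression
print(classify_deformation(1.0, 1.05))  # stretching
print(classify_deformation(0.6, 1.2))   # bending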
Figure 8 illustrates a comparison of optical fiber designs with different geometries. Figure 8A is a two-dimensional schematic representation of a tapered optical fiber 810 (top, Si ≠ Se) and a square optical fiber 820 (bottom, Si = Se), where α is the cone angle, φ is the incident angle of each reflection, β is the collection angle, and φc is the critical angle. Figure 8B illustrates three-dimensional structures of a positively tapered fiber (top, Si < Se), square fiber (middle), and negatively tapered fiber (bottom, Si > Se), all with embedded mechanoluminescent pads according to some embodiments. Figure 8C illustrates output light intensity versus cone angle, indicating that the square optical fiber has an advantage in light collection efficiency. Results were obtained by COMSOL Ray Optics simulation. A sensor with an area of 1 x 1 mm was placed in the center of the fiber end. The transmittance of the luminescent pads was 5%.
Figure 9 illustrates an imaging analysis of a mechanoluminescence-powered distributed optical fiber (mp-DOF). Figure 9A illustrates an exemplary equivalent model for imaging. (I) presents a 2D geometrically equivalent model of waveguide imaging. The reflected ray BC is originally emitted from point Ar of the real light source, and ray BC is equivalent to the virtual ray A1C. Points A1 and Ar are symmetrical about the top plane of the optical fiber. Similarly, ray CD is equivalent to emission from the virtual light source A2. Multiple reflections generate multiple virtual sources. (II) presents the mapping of angular coordinates onto the virtual source plane (plane I). Determining the luminance at a given position P in the output plane is equivalent to the process of tracing the ray back to the virtual light source array in the input plane. The traced solution is displayed in Figure 9B. Figure 9B illustrates the virtual source plane, with each source labelled by its indices (i, j) (e.g., the real source is (0, 0)). The power at point P is actually the integral of the power density of the light sources in the blue circle with a radius of lB·tan(φc) centered on point P, which is approximately the area of all small rectangles within the blue circle. Len, W1, and H1 are the length, width, and thickness of the luminescent pads, respectively. Ld is the interval between the pads. The total width of the optical fiber is W = W1 + 2W2, and the total thickness of the fiber is H = H1 + 2H2. W2 and H2 are the differences in the center coordinates between the luminescent pad and waveguide in the lateral and horizontal projections, respectively.
Mechanoluminescent phosphors of ZnS:M (M = Mn2+ or Cu2+)@Al2O3 particles may be obtained from LONCO Co., Ltd. (Hong Kong). The phosphor size (~28 μm on average) and morphology were characterized using a scanning electron microscope (Tecnai G2 F20 S-TWIN, FEI Nano Ports, USA). The sample composition was determined by energy dispersive X-ray spectroscopy operated with a Bruker model A300 spectrometer. X-ray powder diffraction was performed using an X-ray powder diffractometer (D/MAX-3C, Rigaku Co., Japan).
The mp-DOF of some embodiments incorporates a rectangular waveguide (width W x height H x length L) embedded with a rectangular mechanoluminescent pad (W1 x H1 x Len). The location of the luminescent pad in the waveguide is denoted as (W2, H2), where W2 and H2 are differences in the center coordinates between the luminescent pad and waveguide in the width and thickness directions, respectively. The spacing between the luminescence cores is denoted as Ld. The disclosure next covers a theoretical analysis of light collection efficiency, which depends on the size of the fibers.
First, the luminescent pad is considered to be two-dimensional; that is, Len = 0. According to the imaging theory used for fiber optics, for a square optical fiber, any ray that emanates from the real source and is reflected by a surface is geometrically equivalent to an undeviated ray from a virtual source. A virtual source is identical to the real source, and they are symmetrical about the reflecting surface. Multiple reflections off all four fiber walls generate a two-dimensional array of virtual sources in the input plane. The determination of illuminance at a given position P in the output plane of the fiber is equivalent to the process of tracing the ray back to the virtual light source array in the input plane. Therefore, the power (Pp) at point P is actually the integral of the power density of the virtual light sources in a circle with a radius of lB tan φc centered at point P; that is,
Pp = ∫∫ S(r, θ) r dr dθ, integrated over the circle r ≤ lB tan φc (2)
where lB is the distance from the virtual light source plane to the output plane, (r, θ) are polar coordinates in the input plane, and S(r, θ) is the power density function of the virtual light source array in the circle. In the mp-DOF, a two-dimensional virtual light source array is composed of many large rectangles with a size of W x H x L whose sides are adjacent. Inside each large rectangle is a small rectangle with a size of W1 x H1 x Len. Because only the power S in the small rectangle representing each luminescent pad is equal to S0, and S = 0 for the area around the small rectangles, the integral in Eq. (2) is approximately the area of all the small rectangles within the circle. Each virtual source in the array is indicated by a pair of indices (m, n), and Eq. (2) can be written as follows:
(3)
where (x, y) are the coordinates of the virtual light source plane, (xo, yo) are the coordinates of the output plane, and φ is the incident angle of each ray. Considering that the length of each luminescent pad is Len, the power Pp from the three-dimensional luminescent pad is obtained by further integrating along the length direction, where Z is the coordinate in the length direction. Eq. (3) is applicable to every luminescent pad in the mp-DOFs.
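The approximation described above, in which the collected power at an output point is proportional to the total area of the virtual-source pad rectangles lying inside the acceptance circle of radius lB·tan(φc), can be estimated numerically. The Monte Carlo sketch below assumes pads centered in each virtual-source cell and uses arbitrary dimensions; it is illustrative only and is not the analysis used in the disclosure.

import numpy as np

def power_fraction(xo, yo, radius, pad_w, pad_h, cell_w, cell_h, samples=20000, seed=0):
    """Estimate the fraction of the acceptance circle centered at (xo, yo) that is
    covered by the luminescent-pad rectangles of the periodic virtual-source array."""
    rng = np.random.default_rng(seed)
    r = radius * np.sqrt(rng.random(samples))        # uniform sampling over the circle
    t = 2 * np.pi * rng.random(samples)
    x, y = xo + r * np.cos(t), yo + r * np.sin(t)
    # Offset of each sample from the center of its virtual-source cell
    dx = np.abs(((x + cell_w / 2) % cell_w) - cell_w / 2)
    dy = np.abs(((y + cell_h / 2) % cell_h) - cell_h / 2)
    inside_pad = (dx <= pad_w / 2) & (dy <= pad_h / 2)
    return inside_pad.mean()                         # proportional to the collected power Pp

# Example with arbitrary dimensions (mm): 5 x 2.5 mm cells containing 3 x 0.5 mm pads
print(power_fraction(0.0, 0.0, radius=6.0, pad_w=3.0, pad_h=0.5, cell_w=5.0, cell_h=2.5))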
The relationship between the size of the mp-DOFs and the light collection efficiency was analysed using COMSOL Multiphysics. The multiphysics model consisted of ray optics and solid-state physics. For a square optical fiber whose cross-sectional area (Si) at the incident end is equal to that (Se) at the exit end, the ratio of the power (Pr) at the exit end to the power (Ps) at the luminescent source is as follows:
(4)
where β is the collection angle.
When light propagates in a tapered optical fiber (Si ≠ Se), the incident angle of each reflection is as follows:
φm = 90° − β + (2m − 1)α (5)
where m is the number of reflections, and α is the cone angle.
For a positively tapered optical fiber (Si < Se), α > 0. From Eq. (5), it is known that the incident angle φ is increased by 2α for each reflection, allowing light with a first reflection angle larger than φc to be transmitted (i.e., 90° − β + α > φc); thus,
(6)
For a negatively tapered optical fiber (Si > Se), α < 0. The incident angle is reduced by |2α| for each reflection, allowing light with a last reflection angle larger than φc to be transmitted (i.e., 90° − β + (2m − 1)α > φc); thus,
(7)
The light transmission efficiency of the three types of optical fibers under mechanical deformation was simulated using commercial finite element analysis software (COMSOL Multiphysics).
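The reflection-angle progression described above (a first reflection at 90° − β + α, changing by 2α at each subsequent reflection) can be used to check whether a ray launched at collection angle β stays above the critical angle. The values below, including the assumed critical angle, are illustrative only.

def ray_guided(beta_deg: float, alpha_deg: float, phi_c_deg: float, n_reflections: int = 10) -> bool:
    """Return True if every reflection angle stays at or above the critical angle.
    alpha > 0 is a positive taper, alpha < 0 a negative taper, alpha = 0 a square fiber."""
    for m in range(1, n_reflections + 1):
        phi_m = 90.0 - beta_deg + (2 * m - 1) * alpha_deg  # incidence angle at the m-th reflection
        if phi_m < phi_c_deg:
            return False
    return True

PHI_C = 80.0  # assumed critical angle (deg), for illustration only
print(ray_guided(beta_deg=8.0,  alpha_deg=0.0,  phi_c_deg=PHI_C))  # True: guided in the square fiber
print(ray_guided(beta_deg=11.0, alpha_deg=0.0,  phi_c_deg=PHI_C))  # False: too steep for the square fiber
print(ray_guided(beta_deg=11.0, alpha_deg=1.0,  phi_c_deg=PHI_C))  # True: positive taper admits this ray
print(ray_guided(beta_deg=8.0,  alpha_deg=-1.0, phi_c_deg=PHI_C))  # False: negative taper loses it after a few reflections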
Figure 10 illustrates simulations of light collection efficiency with fibers of different sizes. Len, W1, and H1 denote the length, width, and thickness of the luminescent pads, respectively. Ld represents the interval between the pads. W2 and H2 denote the differences in the center coordinates between the luminescent pad and waveguide in the lateral and horizontal projections, respectively. Figure 10A illustrates intensity under different Ld and Len. Figure 10B illustrates intensity under different W2 and H2. Figure 10C illustrates intensity versus width W1 under different W2. Figure 10D illustrates intensity versus location H2. Young's modulus of the luminescent pad was 1 MPa and that of the optical fiber was 0.5-4 MPa.
Figure 11 illustrates Young's modulus of waveguide materials versus mechanoluminescence: the relationship between the absolute value of the first principal stress (|E|) of the luminescent pad and Mguide, when Mpad is 1-3 MPa. The "Abs" in the y-axis denotes absolute value. Young's moduli of the elastomeric materials used in the mechanoluminescent pad and waveguide are denoted as Mpad and Mguide, respectively. When Mguide < Mpad (i.e., within the range from the coordinate zero point to the inflection point of each curve in the figure), the first principal stress is positive. Thus, the luminescent pad mainly experiences tensile principal stress. In contrast, when Mguide > Mpad, the first principal stress is negative, and the pad experiences compressive principal stress.
Figure 13 illustrates optical and electron images of mechanoluminescence-powered distributed optical fibers according to some embodiments. Figure 13A illustrates an optical image of the waveguide comprising the mechanoluminescent pad. Figure 13B illustrates a scanning electron microscopy image of the waveguide and luminescent pad, indicating that ZnS:M@Al2O3 microparticles were well dispersed in the silica gel matrix.
Selection of elastomeric material for mp-DOFs
Because the elastomeric materials used for the mp-DOFs must efficiently transmit force and light simultaneously, their mechanical and optical properties must be considered. The criteria for selecting materials for waveguide production include a large difference in refractive index between the core and cladding and high transparency. The mechanical properties of the elastomers used in the mechanoluminescent pad and waveguide must be considered simultaneously. For ease of description, Young's modulus of the elastomeric materials used in the mechanoluminescent pad and waveguide are denoted as Mpad and Mguide, respectively.
The luminous intensity of the luminescent pad is proportional to the average first principal stress applied to it. Therefore, the average first principal stress on the luminescent pad when compression (500 kPa) was applied to the waveguide with different Young's moduli was simulated and the results were analysed. The results indicated that the greater the difference between Mpad and Mguide, the greater the average first principal stress applied to the luminescent pad. Since a greatly deformed waveguide with a small Mguide under mechanical force causes severe optical attenuation, an elastomeric material with a larger Mguide was selected to construct the optical fiber.
Table 1 Elastomer materials in the literature and commercially available for stretchable fibers
Fabrication of mp-DOFs
In some embodiments, the mp-DOF was a 36-mm rectangular fiber with a cross section of 2.5 mm (height) by 5 mm (width). ZnS:M phosphors were doped into silica gel (parts A and B were mixed at a 10:1 ratio) for fabrication of the mechanoluminescent pad, and the mass ratio of ZnS:M to silica gel was 8:2. The optical fiber was fabricated by a simple molding process. First, the luminescent pad was produced by injecting evenly mixed ZnS:M phosphors and silica gel into a rectangular bar (5 x 3 x 0.5 mm) through a syringe and leaving it to thermally cure at 80 °C for approximately 30 min. Thereafter, the mixed silicone LS-6946 (parts A and B were mixed at a 10:1 ratio) was applied over the luminescent pad to form half of the transparent waveguide. Then, demolding of the solidified fiber was performed and the other half of the transparent waveguide was cast using silicone LS-6946. The waveguide was thermally cured at 70 °C for approximately 30 min. Finally, two coating steps were employed. The first was to cover the surface damage of the waveguide during demolding using silicone LS-6946, while the second was to fabricate the cladding of the optical fiber by coating polydimethylsiloxane (PDMS; Sylgard 184, parts A and B were mixed at a 10:1 ratio) on the fiber and curing it in a 70 °C oven for 30 min. The thickness of the cladding was 200 μm.
Figure 3 illustrates a technical evaluation of mechanoluminescence-powered distributed optical fiber (mp-DOF)-integrated interactive mouthguard according to some embodiments. Figure 3A illustrates an experimental setup for mechanoluminescence stimulation with a wireless, interactive mouthguard comprising multicolor mechanoluminescent pads and a flexible circuit module. Figure 3B illustrates a normalized real-time chromaticity response in x/y coordinates, derived from five occlusal trajectories comfortable to users. The insets are corresponding photographs of the mp-DOF under five force patterns. Figure 3C illustrates chromaticity response of five occlusal patterns under different forces (5-50 N). Figure 3D illustrates classification results of 14 occlusal patterns detected by a 2 x 3 mp-DOF array using machine learning algorithms. Δx = x1 - x2 and Δy = y1 - y2. Figure 3E illustrates different bite positions versus classification accuracy upon biting the yellow-emitting mechanoluminescent pad of some embodiments. The inset is a schematic of the bite positions. Figure 3F illustrates classification accuracy for different 2 x 3 mp-DOF arrays. Figure 3G illustrates classification accuracy of bite patterns for random users. Figure 3H illustrates the linear relationship between the mouse cursor movement distance and biting force, recorded when controlling a mouse cursor.
Figure 12 illustrates a part of a fabrication procedure of mechanoluminescence-powered distributed optical fibers according to some embodiments. ZnS:M phosphors were doped into silica gel (monomer/curing agent, 10:1) for fabrication of the mechanoluminescent pad, and the mass ratio of ZnS:M to silica gel was 8:2. The fabrication process involves provision of the mechanoluminescent pads 1220 on a mould 1210. A first half of a waveguide is applied to the pads at step 1230, followed by removal of the mould at step 1240 to obtain the pads provided on the half waveguide at step 1250. At step 1260 the other half of the waveguide is applied to obtain the occlusal contact sensor array. The sensor array is further subjected to steps 1270 (dip coating), 1272 (gravity spin coating) and 1274 (thermal curing). This is followed by steps 1276 (dip coating), 1278 (gravity spin coating) and 1279 (thermal curing).
Force response test
To measure the force response, a z-axis stage was used to repeatedly press a force gauge onto the mp-DOF with a force of 5-30 N in 5-N increments. The real-time force was measured by the force gauge. The light emission was collected with a spectrometer (Ocean Optics QEpro). Using a fiber optic spectrometer, the light intensity is obtained by integration over time, which is proportional to the exposure time. The force response was tested with an integration time of 128 ms.
Figure 14 illustrates an experimental setup for optical fiber characterization. Figure 14A illustrates a force response test. A spectrometer (Ocean Optics QEPro) equipped with an optical fiber was used to record the spectral distribution and mechanoluminescence intensity. In addition, a force dynamometer (HP-100, Handpi Instruments Co, Ltd., China) was applied to record the force in real time. Figure 14B illustrates a cyclic extension test. During the experiment, two motorized stages stretched the fiber repeatedly, and the spectrometer recorded the luminescence intensity.
Figure 15 illustrates an optical characterization of mechanoluminescence-powered distributed optical fibers according to some embodiments. Figure 15A illustrates luminescence intensity versus applied force. Figure 15B illustrates luminescence intensity at different temperatures. Figure 15C illustrates luminescence response to applied pulse force. Figure 15D illustrates magnified luminescence response for one selected peak indicated by the dotted box in C. Figure 15E illustrates intensity ratio of green to red (G/R) emissions, plotted over 2,000 stretching cycles. Figure 15F illustrates the intensity of green emission, plotted over 2,000 stretching cycles.
Figure 16 illustrates scanning electron microscopy images of luminescent pad (A) before and (B) after stretching over 2,000 cycles.
Figure 17 illustrates compression maps measured with a 4x4 mechanoluminescence-powered distributed optical fiber array under different force patterns. Scale bars are 4 mm. Figure 17A illustrates photographs of the 4x4 fiber array taken under various conditions (left to right: no pressing, pressing the green-emitting pad, and pressing the red-emitting pad). Figures 17B and 17C illustrate recorded compression maps under different force levels through 1D and 2D projections, respectively.
Figure 18 illustrates mechanical and optical properties of a double-layer mechanoluminescence-powered distributed optical fiber of some embodiments under compression, stretching, and bending, respectively. Figure 18A illustrates the first principal stress distribution mapping, and the average first principal stress E versus the compressive, tensile, and bending forces. Figure 18B is a ray tracing diagram, and the light intensity detected by the sensor S versus the compressive, tensile, and bending forces.
Temperature response test
The setup for the temperature response test was the same as that used for the force response test, except that the mp-DOF was placed on a heating plate with a digital display. A compression force of 15 N was applied to the green-emitting pad, and the temperature was increased from 20°C to 50°C in 10°C increments.
Dynamic test
The response time of the mp-DOF was explored using a z-axis stage to repeatedly and quickly press the force gauge onto the fiber. A fiber optic spectrometer and a force gauge were used to record the luminescence and force, respectively. The integration time of the spectrometer was 50 ms.
Cyclic extension test
A cyclic extension experiment was conducted on a customized mechanical motion platform that could cyclically stretch the fiber. The two frequency-controlled electric translation stages repeatedly moved toward and away from each other to achieve repeated stretching of the optical fiber. The light emission was collected from the side of the fiber through the spectrometer. The mechanoluminescent pad mixed with ZnS:Cu2+ and ZnS:Mn2+/Cu2+ was measured for 2,000 cycles.
The mechanism of a multifunctional soft sensor composed of a double-layer mp-DOF was simulated using COMSOL Multiphysics, which indicated that the sensor was capable of distinguishing the deformation modes of compression, stretching and bending. The simulated structure included two layers of mp-DOFs, separated by a layer of opaque medium with Young's modulus of 0.5 MPa. The light transmittance of the luminescent pad was 5%, and Young's moduli of the luminescent pad and waveguide were 2 and 1.5 MPa, respectively. The size of the luminescent pad was 0.5 x 3 x 6 mm, and the length of the optical fiber was 16 mm. For ease of description, the upper fiber and the embedded luminescent pad are referred to as fiber 1 and pad 1, respectively, and the lower fiber and the embedded luminescent pad are referred to as fiber 2 and pad 2, respectively. Upon pressing, the average principal stress (E1) on pad 1 was much larger than that (E2) on pad 2, and the optical transmission rate (S1) of fiber 1 was also larger than that (S2) of fiber 2. Under tensile force, E1 = E2 and S1 = S2. When the fiber was bent upward, E1 < E2 and S1 < S2.
Compression response test
A double-layer mp-DOF was pressed using the same setup and configuration as in the force response measurements described in Section III. Two color sensors connected to the fibers measured the light intensity of each fiber. The spectrometer was placed in the middle of the double-layer mp-DOF end to measure the spectrum.
Stretching response test
A double-layer mp-DOF was stretched on a customized linear translation stage, similar to the setup used in the cyclic extension test described in Section III. One end of the fiber was fixed, while the other end was connected to a reciprocating linear translation stage.
Bending response test
A double-layer mp-DOF was placed between two parallel stages. One stage was fixed, while the other moved forward in controlled increments, causing bending of the luminescent pad in the fiber. The tensile force in the forward direction of the stage was measured with a force gauge.
Since the mp-DOFs are mechanoluminescence-powered with zero power consumption, the main sources of system power consumption are the color sensor (AS73211), power supply chip (TLV62569), and Bluetooth chip (DA14585). AS73211 is an integrated color sensor with low power consumption (3.0 V, 1.5 mA) and low noise. It achieves an accuracy of up to 24-bit signal resolution with an irradiance responsivity of 0.5 pW/cm2. To achieve stable, high-precision measurements, the digital and analog grounds of the circuit were divided, and the power supply was equipped with a large capacitor to reduce circuit noise as much as possible. The sensor was connected to the DA14585 via I2C. The DA14585, integrating an ARM Cortex-M0 microcontroller, is a Bluetooth 5.0 chip with low power consumption, -20 dB transmission sensitivity, and -93 dB receiver sensitivity.
Four AS73211 sensors were connected to a Bluetooth chip by sharing the I2C interface address. The working voltage was 3.0 V. The working current of the Bluetooth transceiver was 3 mA, and the working time was 10 ms/s. Thus, the total average power consumption of the interactive mouthguard under the normal operating condition was calculated to be 9.09 mW.
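For reference, the quoted 9.09 mW figure can be reproduced with a simple power budget, assuming two continuously operating color sensor chips (as in the mouthguard described earlier) and the stated Bluetooth duty cycle. The sensor count and duty-cycle handling are assumptions made for this illustrative estimate.

V = 3.0  # working voltage (V)

# Two color sensor chips drawing 1.5 mA each, operating continuously (assumption)
sensor_power_mw = 2 * 1.5 * V              # 9.00 mW

# Bluetooth transceiver: 3 mA while active, active for 10 ms per second
ble_power_mw = 3.0 * V * (10e-3 / 1.0)     # 0.09 mW

total_mw = sensor_power_mw + ble_power_mw
print(f"total average power = {total_mw:.2f} mW")  # 9.09 mW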
To reduce the size of the device, a small lithium battery (10 x 10 x 1.6 mm) with a capacity of 60 mAh was selected as the power supply. When the mouthguard operates with real-time data processing and Bluetooth communication, the entire system can continue to work for approximately 24 h without recharging the battery.
All circuit components were arranged on a flexible printed circuit board with a thickness of 80 μm. The size of the main circuit was 9 x 12 x 1.6 mm, and the circuit was completely embedded in the mouthguard. The largest circuit component is the Bluetooth chip (5 x 5 mm). The weight of two mp-DOFs was 1.2 g, and the weight of the flexible circuit board was 0.2 g. The weight of the battery was 1.2 g, and the total weight of the interactive mouthguard was 2.6 g. Thermal images of the circuit board revealed that circuit heating did not exceed 1 °C after 90 min of operation. Moreover, the electromagnetic radiation intensity of the circuit was equivalent to that of mobile phones.
After the processed mp-DOFs and color sensors were interconnected, they were connected to the Bluetooth chip on the circuit board by a thin flexible wire, and the Bluetooth chip transmitted data to mobile phones or PCs. Then the entire system was integrated into a mouthguard made of thermoplastic polyurethane (TPU) elastomer. TPU has excellent comprehensive properties such as high strength, high toughness, abrasion resistance, and oil resistance, and is widely used in medical, food, and other industries. To facilitate biting, a convex plate was provided at the upper position of the corresponding luminescent pad so that users could more flexibly and accurately determine the bite position.
The raw data included six CIE spectral tristimulus values from two color sensors: X1, X2, Y1, Y2, Z1, and Z2. To increase the classification accuracy, the X, Y, Z tristimulus values were converted into the CIE xyY color space:
x = X / (X + Y + Z), y = Y / (X + Y + Z) (8)
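A direct implementation of this preprocessing step might look like the following sketch; the small epsilon guarding against division by zero is an added safeguard, not part of the disclosure.

def xyz_to_xy(X: float, Y: float, Z: float, eps: float = 1e-9):
    """Convert CIE XYZ tristimulus values to CIE xy chromaticity coordinates."""
    s = X + Y + Z
    if s < eps:
        return 0.0, 0.0        # no detected light; return a neutral placeholder
    return X / s, Y / s

def make_features(X1, Y1, Z1, X2, Y2, Z2):
    """Build the four-value feature vector (x1, x2, y1, y2) used for classification."""
    x1, y1 = xyz_to_xy(X1, Y1, Z1)
    x2, y2 = xyz_to_xy(X2, Y2, Z2)
    return [x1, x2, y1, y2]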
Fourteen patterns that were comfortable for biting were selected to develop the applications. Therefore, the goal of the processing algorithm was to determine which of the 14 patterns the current bite pattern belonged to based on four input values (x1, x2, y1, and y2). The results processed using threshold evaluation, decision tree (DT), support vector machine (SVM), and an artificial neural network (ANN) algorithm were compared. The threshold evaluation algorithm simply preset the threshold value of each input signal. The DT, SVM, and ANN algorithms were in the built-in application toolboxes of MATLAB: Classification Learner and Neural Pattern Recognition. For the ANN, a two-layer feed-forward neural network comprising sigmoid hidden and SoftMax output neurons was used. This neural network had an input layer of four nodes. The output layer consisted of 14 nodes, and the SoftMax function was used to predict one estimated state among the 14 states. There was a hidden layer between the input and output layers with 10 nodes. The activation function used was the sigmoid function. The SVM algorithm was a quadratic SVM model with a quadratic polynomial kernel function, and the DT algorithm was a fine tree model.
The participants wore the smart mouthguard integrated with a 2 x 3 mp-DOF array and bit 14 patterns to obtain a data set. There were more than 300 data points for each pattern. After being preprocessed using Eq. (8), 75% of the data were fed into each algorithm for training. The trained models were used to identify the remaining 25% of the data. The classification accuracy of threshold evaluation was 89.0%. Threshold evaluation is relatively simple; however, it has low accuracy and is difficult to apply to flexible interactions. Moreover, it is difficult to set the threshold directly and independently, making machine learning a necessity. With accuracy rates of 98.4% and 98%, respectively, the ANN and SVM had advantages, and the ANN was selected as the classification algorithm for prototype development.
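As a concrete sketch of the classifier configuration described above (four chromaticity inputs, one hidden layer of 10 sigmoid units, 14 softmax outputs, and a 75/25 train/test split), a scikit-learn equivalent could look like the following. The random placeholder data and the use of scikit-learn in place of the MATLAB toolboxes named above are assumptions for illustration.

import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Placeholder data: rows of [x1, x2, y1, y2] features with pattern labels 0..13
# (in practice, more than 300 recorded bites per pattern, preprocessed with Eq. (8))
rng = np.random.default_rng(0)
X = rng.random((14 * 300, 4))
y = np.repeat(np.arange(14), 300)

X_train, X_test, y_train, y_test = train_test_split(X, y, train_size=0.75, stratify=y, random_state=0)

# Two-layer feed-forward network: 10 sigmoid hidden units, softmax multiclass output
clf = MLPClassifier(hidden_layer_sizes=(10,), activation="logistic", max_iter=2000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.3f}")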
Control System Design
Figure 19 illustrates a block diagram comprising some components of a mouthguard 1900. Mouthguard 1900 comprises two mp-DOF sensors 1910, 1920. Each sensor 1910, 1920 comprises an analog-to-digital converter (ADC) associated with each color of light emitted by the light-emitting pads of the mp-DOF. Each ADC transforms the analog light intensity data into a digital signal representative of the light intensity of a particular color emitted by the occlusal sensor array of the mouthguard. The mouthguard also comprises a voltage regulator 1930 that regulates the operation of the sensors and the electronics provided on an integrated circuit (IC) 1940 of the mouthguard. The IC 1940 comprises communication buses I2C 1 and I2C 2 to receive signals from the sensors 1910 and 1920. The IC 1940 also comprises a general-purpose input/output component 1945 and an antenna 1948. The antenna may be a Bluetooth antenna suitable for transmitting output of the IC 1940 to a computing device 1960. The computing device 1960 may be part of an electronic device that is the intended target of the human machine interaction operations. Alternatively, the computing device 1960 may be an intermediary device that receives signals from the mouthguard 1900 and transmits the processed signals (occlusal pattern signals) to an electronic device that is the intended target of the human machine interaction. The computing device 1960 comprises a machine learning model 1965 that processes the signals received from the IC 1940 to generate the occlusal pattern signals.
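A hypothetical sketch of the data path in Figure 19 follows: the circuit packs the six tristimulus readings into a small payload, and the computing device 1960 unpacks them before classification. The packet layout (six little-endian unsigned 16-bit values) is an assumed format for illustration, not one specified in the disclosure.

import struct

PACKET_FMT = "<6H"  # hypothetical payload: X1, Y1, Z1, X2, Y2, Z2 as unsigned 16-bit integers

def pack_readings(x1, y1, z1, x2, y2, z2) -> bytes:
    """Firmware side: pack the two color sensors' tristimulus values for transmission."""
    return struct.pack(PACKET_FMT, x1, y1, z1, x2, y2, z2)

def unpack_readings(payload: bytes):
    """Computing-device side: recover the six tristimulus values from a received notification."""
    return struct.unpack(PACKET_FMT, payload)

payload = pack_readings(1200, 980, 310, 40, 35, 20)
print(unpack_readings(payload))  # (1200, 980, 310, 40, 35, 20)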
Figure 20 illustrates circuit schematic and characteristics of some embodiments. Figure 20A illustrates a schematic of the circuit designed for the interactive mouthguard. Figure 20B is an image of the flexible printed circuit board according to some embodiments. The circuit, with a thickness of 80 μm, can be bent effectively and well fit to the mouthguard. Figure 20C illustrates thermal images of the circuit board after operation for various times. Circuit-induced heating was less than 1°C after operation for 90 min. Figure 20D illustrates electromagnetic radiation intensity of the circuit, which is equivalent to that of mobile phones.
Figure 4 illustrates applications of the mouthguard for assistive technologies, including operation of a smartphone, playing piano and controlling a wheelchair. Figure 4A illustrates a conceptual sketch of a mouthguard integrated with a wheelchair, a computer, a smartphone, etc. Figure 4B illustrates a mapping of the signals generated by a mouthguard to specific actions associated with phone calls using a custom mobile application. Figure 4C illustrates a mapping of signals generated by the mouthguard to specific piano playing operations. Figure 4D illustrates a mapping of signals generated by the mouthguard to wheelchair navigation operations. Figure 4E illustrates monitoring wheelchair navigation around a standard 400 m running track (five repeated runs) by Global Positioning System (GPS), with a scale bar of 40 m. The signals generated by the mouthguard may be differently mapped for different human machine interaction applications. The specific tristimulus values generated by each fiber with its set of occlusal contact sensors may be mapped to a specific action in a human machine interaction application, as exemplified in Figure 4.
By pairing the interactive mouthguard with a Bluetooth module, real-time bite patterns (occlusal contact patterns) of users can be classified and transferred to a custom mobile application for data display and collection as illustrated in the screenshots of Fig. 24. In addition, users can connect the phone application to make emergency calls by biting the mouthguard with specific occlusal trajectories as exemplified in Fig. 4B. Each occlusal trajectory may correspond to a specific occlusal pattern signal or a part of an occlusal pattern signal.
Design of mouse and keyboard functions
For the mouse function, the up, down, left, right, left-click and right-click functions of a mouse corresponded to bite patterns 3, 8, 6, 10, 1, and 11, respectively. In addition, the navigation distance was defined according to the amplitude of the bite force and the luminous intensity. The interactive mouthguard was used to open, use, and close a web browser. For the keyboard function, the up, down, left, right, enter, and switch between alphabetic and numeric keyboard functions corresponded to bite patterns 3, 8, 6, 10, 1, and 5, respectively. To test the accuracy and efficiency of the keyboard function, a word document was created, and then the word "LOVE" and the number "3.14" were repeatedly written to the word document. The duration to complete the input of all characters did not exceed 32 s (the duration was from the time the participant activated the keyboard until the last character was entered).
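The mapping between bite patterns and pointer actions described above can be captured in a small lookup table, with the cursor step scaled by the measured luminescence intensity; the scaling constant below is an arbitrary illustrative value.

# Bite pattern -> mouse function, as described for the mouse mode
MOUSE_ACTIONS = {3: "up", 8: "down", 6: "left", 10: "right", 1: "left-click", 11: "right-click"}

STEP_PER_COUNT = 0.05  # cursor pixels per luminescence count (illustrative value)

def mouse_command(pattern_id: int, luminescence_counts: float):
    """Translate a classified bite pattern and its intensity into a cursor command."""
    action = MOUSE_ACTIONS.get(pattern_id)
    if action is None:
        return None
    if action in ("up", "down", "left", "right"):
        # Navigation distance scales with bite force via the luminous intensity
        return action, luminescence_counts * STEP_PER_COUNT
    return action, None

print(mouse_command(3, 600))  # ('up', 30.0)
print(mouse_command(1, 600))  # ('left-click', None)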
Figure 23 demonstrates the keyboard function according to some embodiments. The top of the figure illustrates the correspondence between functions and the classification numbers in Figure 21. The middle panel displays the navigation trajectory when typing the letter keys "LOVE" and the number keys "3.14". Biting occlusal pattern 5 switches between the numeric and alphabetic keyboards. The bottom of the figure plots the corresponding output signals of the color sensors. Red, green and blue curves represent the tristimulus values X, Y and Z, respectively.

Designing piano keyboard
A piano keyboard with a total of 14 keys was designed to operate with the interactive mouthguard. The song "Happy Birthday", which contains seven notes, could be played using the designed piano keyboard. Bite patterns 5, 3, 1, 10, 8, 6, and 11 corresponded to musical notes E, F, G, A, B, C, and D, respectively. When subjected to biting, the sensor captured the input signal and performed classification, and then directed the piano keyboard to play the corresponding note.
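The note mapping can likewise be expressed as a small lookup, as in the following Python sketch; the play_note() stub stands in for whichever synthesiser or MIDI backend a real keyboard would use, and is an assumption rather than the disclosed implementation.

# Sketch of the piano mapping: each bite pattern class triggers one of the
# seven notes used for "Happy Birthday".
NOTE_FOR_PATTERN = {5: "E", 3: "F", 1: "G", 10: "A", 8: "B", 6: "C", 11: "D"}


def play_note(note: str):
    print(f"playing note {note}")     # placeholder for a real audio backend


def on_bite(pattern_class: int):
    note = NOTE_FOR_PATTERN.get(pattern_class)
    if note is not None:
        play_note(note)


for cls in (1, 1, 10, 1, 6, 8):       # opening phrase (G G A G C B)
    on_bite(cls)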
Designing wheelchair navigation
The wheelchair used was a standard electrical wheelchair (W5517, Inuovo Co, Ltd., Germany). Bite patterns 3, 8, 10, and 6 corresponded to the forward, backward, turn-left and turn-right functions, respectively, and bite pattern 5 controlled switching between starting and braking. Wheelchair control was achieved using the Bluetooth interface. During operation, occlusal pattern data were first transmitted to the computer via Bluetooth chips. After the data were classified and processed, the corresponding action command was sent to the wheelchair via another Bluetooth chip. The complex circular and figure-8 trajectories of the wheelchair controlled by biting demonstrated the flexibility of the system, and five tests on the same trajectory revealed the stability of the system.
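The relay loop described above can be sketched in Python as follows, assuming an illustrative set of command codes and a placeholder send() transport in place of the second Bluetooth chip; the actual wheelchair protocol is not specified here.

# Sketch of the computer-side relay: classified occlusal patterns are mapped
# to drive commands and forwarded to the wheelchair over a second link.
WHEELCHAIR_COMMANDS = {
    3: b"FWD",
    8: b"REV",
    10: b"LEFT",
    6: b"RIGHT",
    5: b"TOGGLE",   # switch between start and brake
}


def send(command: bytes):
    print(f"-> wheelchair: {command!r}")   # placeholder for the Bluetooth chip


def relay(pattern_stream):
    driving = False
    for pattern_class in pattern_stream:
        command = WHEELCHAIR_COMMANDS.get(pattern_class)
        if command is None:
            continue
        if command == b"TOGGLE":
            driving = not driving
            send(b"START" if driving else b"BRAKE")
        elif driving:
            send(command)


relay([5, 3, 3, 10, 8, 5])   # start, forward x2, turn left, backward, brake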
Figure 25 illustrates a flexibility and accuracy test for wheelchair navigation over 1.6 km around a standard 400 m running track. Circular and figure-8-shaped wheelchair motion trajectories were recorded using Global Positioning System (GPS) over five tests. Scale bar: 40 m.
Customized mobile phone application design
A smartphone application (App) was designed to operate with the interactive mouthguard and provide a user-friendly interface for data display and collection. To use this application, the user first puts on the interactive mouthguard, opens the App installed on the smartphone, and then establishes a secure Bluetooth connection between the App and the mouthguard. Subsequently, the App can receive and display the data stream from the mouthguard in real time. The App is capable of plotting a graph of these data streams versus time during the user's physical activity. Data and graphs can be stored on the device, uploaded to online cloud servers, and shared via social media. Additionally, with the App, users can make emergency calls through different bite patterns. The current implementation was programmed in the Android environment, and similar application interfaces can easily be developed for other popular operating systems, such as iOS.
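The application itself was programmed for Android; the following platform-agnostic Python sketch captures only the app-side logic described above: buffering the incoming data stream with timestamps for plotting, persisting it to a file, and watching for an emergency bite pattern. The file name and the emergency pattern class are illustrative assumptions.

# Sketch of the app-side data handling: timestamped samples are buffered,
# saved to CSV, and an emergency pattern triggers a call action.
import csv
import time

EMERGENCY_CLASS = 11                      # hypothetical "call for help" pattern


class SessionLog:
    def __init__(self, path="mouthguard_session.csv"):
        self.samples = []                 # (timestamp, X, Y, Z, class)
        self.path = path

    def add(self, xyz, pattern_class):
        self.samples.append((time.time(), *xyz, pattern_class))
        if pattern_class == EMERGENCY_CLASS:
            print("emergency pattern detected: dialling preset contact")

    def save(self):
        with open(self.path, "w", newline="") as f:
            csv.writer(f).writerows([("t", "X", "Y", "Z", "class"), *self.samples])


log = SessionLog()
log.add((0.41, 0.22, 0.09), pattern_class=3)
log.add((0.12, 0.55, 0.31), pattern_class=11)
log.save()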
Figure 24 illustrates a mobile application developed for data display and collection. Figure 24A shows the homepage of the application after Bluetooth pairing. Figure 24B is a real-time display of data from the interactive mouthguard. Figure 24C illustrates the real-time data progression of an individual sensor.
It will be appreciated that many further modifications and permutations of various aspects of the described embodiments are possible. Accordingly, the described aspects are intended to embrace all such alterations, modifications, and variations that fall within the spirit and scope of the appended claims or statements.
Throughout this specification and the claims or statements that follow, unless the context requires otherwise, the word "comprise", and variations such as "comprises" and "comprising", will be understood to imply the inclusion of a stated integer or step or group of integers or steps but not the exclusion of any other integer or step or group of integers or steps.
The reference in this specification to any prior publication (or information derived from it), or to any matter which is known, is not, and should not be taken as an acknowledgment or admission or any form of suggestion that that prior publication (or information derived from it) or known matter forms part of the common general knowledge in the field of endeavour to which this specification relates.

Claims
1. A system for human machine interaction, the system comprising: a mouthguard; a computing device in communication with the mouthguard; wherein the mouthguard comprises: an array of occlusal contact sensors, wherein each occlusal contact sensor emits light of a designated color in response to occlusal contact; an optical waveguide for propagating light emitted by each occlusal contact sensor to a color sensor, wherein the color sensor generates electrical signals based on color of the received light; an antenna for wirelessly transmitting the electrical signals generated by the color sensor to the computing device; wherein the computing device is configured to process received electrical signals to determine an occlusal pattern signal.
2. The system of claim 1, wherein the computing device comprises a machine-learning model to process the received electrical signals to determine the occlusal pattern signal.
3. The system of claim 2, wherein the machine-learning model comprises a trained feed-forward neural network.
4. The system of any one of claims 1 to 3, wherein the computing device is configured to enable human-machine interaction with an electronic device based on the determined occlusal pattern signal.
5. The system of any one of claims 1 to 4, wherein the array of occlusal contact sensors is combined with the optical waveguide and optical color sensor.
6. The system of any one of claims 1 to 5, wherein the array of occlusal contact sensors is powered by mechanical impact of the occlusal contact.
7. The system of any one of claims 1 to 6, wherein each occlusal contact sensor comprises a mechanoluminescence optical waveguide.
8. The system of any one of claims 1 to 7, wherein the occlusal pattern signal encodes information of a bite performed on the occlusal contact sensor.
9. The system of any one of claims 1 to 8, wherein at least one occlusal contact sensor emits light of an intensity based on an intensity of the occlusal contact; and the color sensor generates electrical signals based on the intensity of light generated by the occlusal contact sensor.
10. Mouthguard for human-machine interaction, comprising: an array of occlusal contact sensors, wherein each occlusal contact sensor emits light of a designated color responsive to occlusal contact; an optical waveguide for propagating light emitted by each occlusal contact sensor to a color sensor, wherein the color sensor generates electrical signals responsive to color of the received light, wherein the electrical signals encode part of an occlusal pattern signal for human-machine interaction; an antenna for wirelessly transmitting the electrical signals generated by the color sensor.
11. The mouthguard of claim 10, wherein the array of occlusal contact sensors is integrated into the optical waveguide.
12. The mouthguard of claim 10 or claim 11, wherein the array of occlusal contact sensors is powered by mechanical impact of the occlusal contact.
13. The mouthguard of any one of claims 10 to 12, wherein each occlusal contact sensor comprises a mechanoluminescence sensor.
14. The mouthguard of claim 13, wherein the mechanoluminescence optical waveguide comprises phosphor.
15. The mouthguard of any one of claims 10 to 14, wherein the optical waveguide is an elastic waveguide.
16. The mouthguard of any one of claims 10 to 15, wherein the array of occlusal contact sensors comprises six occlusal contact sensors.
17. The mouthguard of claim 16, wherein the six occlusal contact sensors are disposed in a 2x3 array.
18. The mouthguard of any one of claims 10 to 17, wherein each occlusal contact sensor emits light of a designated color that is distinct from light emitted by an adjacent occlusal contact sensor.
19. A method of human-machine interaction, the method comprising: generating light at an array of occlusal contact sensors in response to occlusal contact, wherein each occlusal contact sensor emits light of a designated color responsive to occlusal contact; generating electrical signals by a color sensor responsive to the light generated by the array of occlusal contact sensors; processing the generated electrical signals by an electronic circuit to generate an occlusal pattern signal; performing human-machine interaction based on the occlusal pattern signal.
20. The method of claim 19, wherein the method further comprises transmitting the occlusal pattern signal to a computing device for human-machine interaction.