
CN117426758B - Intelligent clothing system and method based on multi-sensing information fusion - Google Patents


Info

Publication number
CN117426758B
CN117426758B (application CN202311764036.2A; published as CN117426758A)
Authority
CN
China
Prior art keywords: module, user, data, sensor, follows
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311764036.2A
Other languages
Chinese (zh)
Other versions
CN117426758A (en)
Inventor
余锋
余涵臣
姜明华
刘佳杰
刘莉
周昌龙
宋坤芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Textile University
Original Assignee
Wuhan Textile University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Textile University filed Critical Wuhan Textile University
Priority to CN202311764036.2A priority Critical patent/CN117426758B/en
Publication of CN117426758A publication Critical patent/CN117426758A/en
Application granted granted Critical
Publication of CN117426758B publication Critical patent/CN117426758B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • H04W4/029 Location-based management or tracking services
    • H04W4/90 Services for handling of emergency or hazardous situations, e.g. earthquake and tsunami warning systems [ETWS]
    • A61B5/0022 Monitoring a patient using a global network, e.g. telephone networks, internet
    • A61B5/0205 Simultaneously evaluating both cardiovascular conditions and different types of body conditions, e.g. heart and respiratory condition
    • A61B5/02055 Simultaneously evaluating both cardiovascular condition and temperature
    • A61B5/11 Measuring movement of the entire body or parts thereof, e.g. head or hand tremor, mobility of a limb
    • A61B5/1112 Global tracking of patients, e.g. by using GPS
    • A61B5/14551 Measuring characteristics of blood in vivo using optical sensors, e.g. spectral photometrical oximeters, for measuring blood gases
    • A61B5/6804 Sensor mounted on worn items; Garments; Clothes
    • A61B5/741 Notification to user or patient using synthesised speech
    • G01C21/16 Navigation by integrating acceleration or speed, i.e. inertial navigation
    • G01J5/0025 Radiation pyrometry for sensing the radiation of moving living bodies
    • G01S19/33 Multimode operation in different systems which transmit time stamped messages, e.g. GPS/GLONASS
    • G06F18/25 Pattern recognition; Fusion techniques

Landscapes

  • Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Public Health (AREA)
  • Pathology (AREA)
  • Surgery (AREA)
  • Veterinary Medicine (AREA)
  • Medical Informatics (AREA)
  • Molecular Biology (AREA)
  • Heart & Thoracic Surgery (AREA)
  • Animal Behavior & Ethology (AREA)
  • General Health & Medical Sciences (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physiology (AREA)
  • Cardiology (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Pulmonology (AREA)
  • Dentistry (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Business, Economics & Management (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Optics & Photonics (AREA)
  • Automation & Control Theory (AREA)
  • Emergency Management (AREA)
  • Artificial Intelligence (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • General Engineering & Computer Science (AREA)

Abstract

The invention discloses a smart clothing system and method based on multi-sensor information fusion. The smart clothing system comprises a flexible garment, a smart monitoring and feedback module, a micro data processing and communication module, and a digital twin module. Physiological data and positioning data of a user are collected through various high-precision sensors on the smart garment; the collected data are preprocessed and transmitted to a cloud server, where a deep learning module analyzes the data and performs behavior recognition. The physiological health condition of the user is monitored in real time: voice feedback is provided when the user's physiological data are abnormal, and the output temperature of an intelligent heating module is regulated according to the ambient temperature to keep the user's body temperature within a safe range. The system provides personalized health diagnosis and advice, combining intelligent wearable technology and artificial intelligence into a complete smart clothing system that offers comprehensive health protection for the user.

Description

Intelligent clothing system and method based on multi-sensing information fusion
Technical Field
The invention relates to the field of intelligent clothing, in particular to an intelligent clothing system and method based on multi-sensor information fusion.
Background
With rapid social progress and technological development, modern health care has advanced remarkably. From the earliest simple thermometers, sphygmomanometers and electrocardiographs to today's intelligent wearable garments, health care has become increasingly intelligent, portable and comprehensive. Smart clothing is no longer limited to specific groups: whether for infants or the middle-aged and elderly, and whether for daily health monitoring or postoperative rehabilitation, the demand for smart clothing is increasingly urgent. However, existing smart clothing can only monitor basic physiological information; users cannot accurately judge their health condition from such basic information alone, which makes accurate health management difficult.
In the prior art, Chinese patent publication No. CN219845100U discloses a wearable garment for detecting physiological signs, in which a detecting instrument is fixed to the garment; after putting on the garment, a user can monitor physiological indexes such as body temperature, heart rate and blood oxygen saturation in real time. However, that invention only collects basic data, cannot provide accurate diagnosis results, offers low visibility of physiological information, and lacks effective monitoring feedback.
Therefore, there is a need to design a smart garment system and method that solves the above-mentioned problems of the prior art.
Disclosure of Invention
To address the technical problems of existing smart clothing, the invention provides a smart clothing system and method based on multi-sensor information fusion, which acquires and monitors a user's physical index data in real time and achieves relatively accurate, continuous long-term health monitoring and danger early warning.
In order to achieve the above purpose, the invention adopts the following technical scheme:
a smart clothing system based on multi-sensing information fusion, which comprises a flexible clothing, a smart monitoring and feedback module, a micro data processing and communication module and a digital twin module;
a flexible circuit is arranged in the flexible garment and is used for connecting the intelligent monitoring and feedback module with the micro data processing and communication module; the flexible garment further comprises a power supply module, wherein the power supply module is arranged at the waist of the flexible garment and is used for supplying power for the intelligent garment system;
the intelligent monitoring and feedback module comprises an optical heart rate pulse and blood oxygen saturation monitoring sensor, an infrared body temperature measuring sensor, an inertial navigation attitude sensor, a dual-mode positioning module, a micro-speaker and an intelligent heating module; the optical heart rate pulse and blood oxygen saturation monitoring sensor, the infrared body temperature measuring sensor and the inertial navigation attitude sensor are used for acquiring physiological data original signals of a user in real time; the dual-mode positioning module is used for acquiring positioning data original signals of a user in real time; the intelligent heating module and the micro loudspeaker provide feedback according to the data of the digital twin module;
the micro data processing and communication module comprises a wireless communication module and a micro processor; the wireless communication module and the micro processor are arranged at the waist of the flexible garment, the micro processor is used for preprocessing data acquired by the intelligent monitoring and feedback module, and the wireless communication module is used for establishing real-time communication with the cloud server;
the digital twin module comprises a deep learning module and a digital model module, wherein the deep learning module and the digital model module are loaded in the cloud server, the deep learning module is used for diagnosing and storing user information, and the digital model module realizes interaction of a physical entity and the digital twin body.
Further, an optical heart rate pulse and blood oxygen saturation monitoring sensor and an infrared body temperature measuring sensor are arranged on the rear neck of the flexible garment, and an inertial navigation attitude sensor, a dual-mode positioning module and a micro-speaker are arranged on the waist of the flexible garment; the intelligent heating module is covered on the back, the abdomen and the waist of the flexible garment;
an inertial measurement unit is arranged in the inertial navigation attitude sensor, and comprises an accelerometer and a gyroscope, and is used for rapidly calculating the current motion attitude of a user and outputting acceleration, angular velocity and angle in real time;
the dual-mode positioning module adopts two independent positioning systems of Beidou positioning and GPS positioning and is used for acquiring longitude and latitude positioning information.
The invention also provides a method based on the intelligent clothing system, which comprises the following steps:
s1, acquiring physiological data original signals of a user by using an optical heart rate pulse and blood oxygen saturation monitoring sensor, an infrared body temperature measuring sensor and an inertial navigation attitude sensor, and acquiring positioning data original signals of the user by using a dual-mode positioning module;
s2, carrying out data preprocessing on the data acquired by the intelligent monitoring and feedback module through the micro-processor so as to acquire effective data;
s3, the preprocessed data is transmitted to the cloud server by utilizing the wireless communication module;
s4, using a deep learning module loaded in the cloud server to realize behavior recognition prediction;
s5, constructing a digital twin body model of the user in a digital space by utilizing the digital model module, mapping the real state of the user to the digital space through a user behavior recognition result, and providing voice feedback and temperature feedback according to the health state of the user.
Further, the step S1 specifically includes the following steps:
the optical heart rate pulse and blood oxygen saturation monitoring sensor collects photoelectric volume waveform signals of red light and infrared light reflected by blood vessels of a user at a specific sampling frequency;
the infrared body temperature measuring sensor collects infrared radiation energy emitted by the body surface of the user;
the inertial navigation attitude sensor acquires and outputs acceleration, angular velocity and angle of a user in real time;
and the dual-mode positioning module is used for collecting and outputting longitude and latitude coordinates of a user in real time.
Further, the step S2 specifically includes the following steps:
s2-1, noise reduction processing is carried out on physiological data original signals of a user, which are acquired by an optical heart rate pulse and blood oxygen saturation monitoring sensor and an infrared body temperature measuring sensor;
the formula for removing noise is as follows:
wherein,representation->Time filtering output,/, for>Indicating time->N represents the window size of the filter,/-for the measurement values of (a)>Representing the relative position of the measured values in the window, ranging from 0 to +.>
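A minimal sketch of this moving-average noise filter, assuming a causal window that simply shrinks at the start of the signal:

```python
def moving_average(signal, n):
    """Causal moving-average filter: y(t) = (1/N) * sum_{i=0..N-1} x(t-i).

    Implements the noise-removal formula of step S2-1. For the first few
    samples, fewer than n past values exist, so the average is taken over
    however many are available (an illustrative boundary choice).
    """
    out = []
    for t in range(len(signal)):
        window = signal[max(0, t - n + 1): t + 1]
        out.append(sum(window) / len(window))
    return out
```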
Further, the step S2 further includes:
s2-2, calculating heart rate, blood oxygen saturation and body temperature of the user by using the data subjected to the noise reduction treatment in the step S2-1;
calculating heart rate and blood oxygen saturation by utilizing the collected changes of the wave crests and the wave bottoms of the photoelectric volume waveform signals of red light and infrared light reflected by the blood vessels of the user, wherein the photoelectric volume waveform is composed of an alternating current component and a direct current component;
the calculation formula for heart rate HR is as follows:
wherein,for the alternating current component in red light +.>For the direct current component in red light, +.>For the alternating current component under infrared light +.>Is a direct current component under infrared light;
blood oxygen saturationThe calculation formula of (2) is as follows:
wherein HR is heart rate, and x, y and z are characteristic constants respectively;
the calculation formula of the body surface temperature BT of the user is as follows:
wherein,infrared radiation energy emitted by the body surface of the user and collected by the infrared body temperature measuring sensor,is still-Fender Boltzmann constant.
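The three derived quantities can be sketched as follows; note that the peak-interval form of HR and the numeric calibration constants x, y, z are illustrative assumptions (a commonly cited pulse-oximetry calibration), not values taken from the patent:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W / (m^2 K^4)

def heart_rate(peak_times_s):
    """HR in beats/min from timestamps (s) of successive PPG wave crests."""
    intervals = [b - a for a, b in zip(peak_times_s, peak_times_s[1:])]
    return 60.0 / (sum(intervals) / len(intervals))

def spo2(ac_red, dc_red, ac_ir, dc_ir, x=-45.06, y=30.354, z=94.845):
    """SpO2 (%) from the ratio of ratios R = (AC_red/DC_red)/(AC_ir/DC_ir).

    The default x, y, z are an illustrative empirical calibration; real
    devices fit these characteristic constants per sensor design.
    """
    r = (ac_red / dc_red) / (ac_ir / dc_ir)
    return x * r * r + y * r + z

def body_temperature_k(e_radiated):
    """Invert the Stefan-Boltzmann law E = sigma * T^4 (ideal emitter assumed)."""
    return (e_radiated / SIGMA) ** 0.25
```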
Further, the step S4 specifically includes the following steps:
s4-1, dividing behavior data, respectively extracting time features and space features through a bidirectional time feature processing module and a space feature processing module, and then fusing the time features and the space features to obtain integrated features;
s4-2, constructing a classifier model by using the integrated features to perform recognition training;
s4-3, deploying the trained model in a cloud server.
Further, the bidirectional time feature processing module in step S4-1 is composed of a forward time feature processing module and a reverse time feature processing module; the forward time feature processing module processes the input data in forward time order, the reverse time feature processing module processes the input data in reverse order, and the outputs of the two modules are concatenated into a complete sequence, yielding the extracted time features;
The specific implementation of the forward time feature processing module is as follows:
(1) Screening out useless information: the input at the current time point $x_t$ and the implicit output of the previous time point $h_{t-1}$ are concatenated into a vector, operated on with the screening parameter matrix $W_f$ and the screening bias parameter $b_f$, and the result is passed through the $\mathrm{sigmoid}$ activation function to obtain the retention ratio value $f_t$; the formula is as follows:
$f_t = \mathrm{sigmoid}(W_f \cdot [h_{t-1}, x_t] + b_f)$
Then the unit state at the previous time point $c_{t-1}$ is multiplied by the retention ratio value $f_t$ to obtain the retained value $R$ after screening out useless information; the formula is as follows:
$R = f_t \odot c_{t-1}$
(2) Determining the value that the current input needs to store: $x_t$ and $h_{t-1}$ are concatenated into a vector, operated on with the storage parameter matrix $W_i$ and the storage bias parameter $b_i$, and the result is passed through the $\mathrm{sigmoid}$ activation function to obtain the storage ratio value $i_t$; the formula is as follows:
$i_t = \mathrm{sigmoid}(W_i \cdot [h_{t-1}, x_t] + b_i)$
$x_t$ and $h_{t-1}$ are concatenated into a vector and operated on with the candidate parameter matrix $W_c$ and the candidate bias parameter $b_c$ to obtain the candidate input vector $\tilde{c}_t$:
$\tilde{c}_t = \tanh(W_c \cdot [h_{t-1}, x_t] + b_c)$
The storage ratio value $i_t$ is multiplied by the candidate input vector $\tilde{c}_t$ to obtain the current input storage value $S$:
$S = i_t \odot \tilde{c}_t$
(3) Updating the unit state: the retained value $R$ after screening out useless information and the current input storage value $S$ are added to obtain the updated unit state $c_t$; the formula is as follows:
$c_t = R + S = f_t \odot c_{t-1} + i_t \odot \tilde{c}_t$
(4) Controlling the output expression: $x_t$ and $h_{t-1}$ are concatenated into a vector, operated on with the output parameter matrix $W_o$ and the output bias parameter $b_o$, and the result is passed through the $\mathrm{sigmoid}$ activation function to obtain the output ratio value $o_t$ that controls what is explicitly expressed at the current time point:
$o_t = \mathrm{sigmoid}(W_o \cdot [h_{t-1}, x_t] + b_o)$
The updated unit state $c_t$ is passed through the nonlinear $\tanh$ activation function and multiplied by $o_t$ to obtain the implicit output of the current unit state $h_t$:
$h_t = o_t \odot \tanh(c_t)$
The unit state $c_t$ does not participate in explicit expression and is only used in the output calculation at the next time point.
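The four steps of the forward time feature processing module correspond to a standard LSTM cell; a scalar sketch, in which each parameter matrix is reduced to a single number and the concatenation [h, x] is simplified to a sum, is:

```python
import math

def _sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of the forward time feature processing module (LSTM cell).

    Scalar simplification for clarity: p maps the parameter names used in
    the text (W_f/b_f screening, W_i/b_i storage, W_c/b_c candidate,
    W_o/b_o output) to plain numbers, and the concatenated vector
    [h_{t-1}, x_t] is stood in for by their sum.
    """
    z = h_prev + x_t                              # stand-in for [h_{t-1}, x_t]
    f_t = _sigmoid(p["W_f"] * z + p["b_f"])       # retention ratio value
    i_t = _sigmoid(p["W_i"] * z + p["b_i"])       # storage ratio value
    c_tilde = math.tanh(p["W_c"] * z + p["b_c"])  # candidate input vector
    c_t = f_t * c_prev + i_t * c_tilde            # updated unit state (R + S)
    o_t = _sigmoid(p["W_o"] * z + p["b_o"])       # output ratio value
    h_t = o_t * math.tanh(c_t)                    # implicit output
    return h_t, c_t
```

Running the cell forward over a sequence, and a mirror copy over the reversed sequence, then concatenating the two output streams, yields the bidirectional time features described above.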
Further, the spatial feature processing module in the step S4-1 is formed by sequentially connecting a convolution layer, a dense layer block, an attention transition layer block and a dense layer block in series, and is used for extracting spatial features;
the spatial feature processing module comprises three dense layer blocks, and four computing layers are arranged in each dense layer block;
the attention transition layer block is formed by sequentially connecting a convolution layer, a self-adaptive average pooling layer, a batch standardization layer, a linear layer, an ELU activation function layer and a Sigmoid activation function layer in series;
weighted outputThe formula of (2) is as follows:
wherein,representing the entered data>Representing the weights calculated by the attention mechanism.
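The attention weighting can be sketched as follows; the single linear score per element is a simplifying stand-in for the block's convolution/pooling/batch-normalization/linear stack, which is not reproduced here:

```python
import math

def attention_weight(x, score_w, score_b):
    """Weighted output w ⊙ x of the attention transition layer block.

    Each element of x is mapped to a score (here one hypothetical linear
    map score_w * v + score_b in place of the full conv + pooling + BN +
    linear + ELU stack), squashed to (0, 1) by a sigmoid, and used to
    scale the corresponding input element.
    """
    weights = [1.0 / (1.0 + math.exp(-(score_w * v + score_b))) for v in x]
    return [w * v for w, v in zip(weights, x)]
```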
Further, the classification algorithm adopted for constructing the classifier in step S4-2 is as follows:
$P_k = \frac{e^{w_k \cdot x}}{\sum_{j=1}^{K} e^{w_j \cdot x}}$
where $K$ represents the number of behavioral activity types, $j$ is an index traversing all behavioral activity types, $k$ denotes the behavioral activity type currently being calculated, $w_j$ denotes the vector weights of the $j$-th behavioral activity type, $w_k$ denotes the vector weights of the currently calculated $k$-th behavioral activity type, and $P_k$ denotes the probability of the $k$-th behavioral activity type;
the loss function $L$ used in recognition training is as follows:
$L = -\frac{1}{m}\sum_{i=1}^{m}\left[y_i \log(p_i) + (1-y_i)\log(1-p_i)\right]$
where $m$ represents the number of samples, $y_i$ denotes the actual label value of the $i$-th sample, and $p_i$ denotes the probability output predicted by the model for the $i$-th sample.
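The classifier output and loss can be sketched directly from the two formulas; the max-subtraction inside the softmax is a standard numerical-stability step not stated in the text:

```python
import math

def softmax_probs(logits):
    """P_k = exp(s_k) / sum_j exp(s_j), where s_k = w_k . x is computed upstream."""
    m = max(logits)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in logits]
    total = sum(exps)
    return [e / total for e in exps]

def cross_entropy(y_true, p_pred):
    """L = -(1/m) * sum_i [y_i log p_i + (1 - y_i) log(1 - p_i)]."""
    m = len(y_true)
    eps = 1e-12                          # clip probabilities to avoid log(0)
    total = 0.0
    for y, p in zip(y_true, p_pred):
        p = min(max(p, eps), 1.0 - eps)
        total += y * math.log(p) + (1.0 - y) * math.log(1.0 - p)
    return -total / m
```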
Compared with the prior art, the invention has the beneficial effects that:
(1) The heart rate, blood oxygen saturation, body temperature, motion state and positioning information of a user are collected through various high-precision sensors on the smart garment; the signals are transmitted through a flexible circuit to a microprocessor at the side waist for data preprocessing, and the preprocessed data are transmitted to the cloud server through the wireless communication module. A deep learning algorithm deployed in the cloud server's deep learning module analyzes the physiological data and performs behavior recognition. Meanwhile, a three-dimensional digital twin model is loaded in the client page to monitor the physiological health condition of the user in real time. When the user's body data are abnormal, voice feedback is provided through the micro-speaker in the smart clothing system. By combining intelligent wearable technology and artificial intelligence into a complete smart clothing system, the system can analyze physiological data, recognize behaviors, provide personalized health diagnosis and advice, give early warning when necessary, and offer comprehensive health protection for the user.
(2) And the interaction between the physical entity and the digital entity is realized through the digital twin module. And loading the deep learning module in the cloud server. Through the module, the system can diagnose the user and store the user information in the cloud. Meanwhile, the digital model module is responsible for constructing a digital space and a digital twin body model of the user, realizing mapping of the real body of the user into the digital space and controlling the performance of the digital twin body model according to the real-time state of the user.
(3) The digital twin module performs three-dimensional model construction by software to map physical entities into digital space. The module refers to the layout of a real environment to place a digital model, generates a real and effective digital space, creates a digital twin body model, and realizes the mapping of a physical entity and a digital entity by taking the real body of a user as a basis. The heart rate, blood oxygen saturation, body temperature, positioning and motion information collected by the sensor will be displayed in the digital space to reflect the real-time status of the user. Thus, the user can intuitively know the health condition of the user through the digital twin body model.
(4) The digital twin module covers overall process control and information feedback. In this module, the digital entity receives the information transmitted by the physical entity and feeds digital-stream information back to the physical entity, forming a closed-loop control system. The system's voice feedback and temperature feedback functions further strengthen this digital-twin-based information feedback. Physiological information collected by the sensors is transmitted to the digital space in real time for display, then analyzed, and the analysis result is fed back directly to the user through voice reminders. With such overall process control and accurate information feedback, the user obtains a more comprehensive and accurate health care experience; these refinements improve the effectiveness of the smart clothing system and allow users to better understand and manage their own health.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, and it is obvious that the drawings in the following description are some embodiments of the present invention, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
FIG. 1 shows a flow chart of a method of an intelligent clothing system based on multi-sensor information fusion in accordance with an embodiment of the present invention;
FIG. 2 shows a feature extraction implementation diagram of a smart garment system based on multi-sensor information fusion in accordance with an embodiment of the present invention;
FIG. 3 is a schematic diagram of a spatial feature processing module of an intelligent clothing system based on multi-sensor information fusion according to an embodiment of the present invention;
fig. 4 shows a schematic diagram of an attention transition layer block structure of a smart garment system based on multi-sensor information fusion according to an embodiment of the invention.
Detailed Description
For a better understanding of the objects, technical solutions and advantages of the present invention, it is described in further detail below with reference to the drawings and embodiments. It should be understood that the specific embodiments are intended to illustrate the invention, not to limit its scope. In addition, the technical features of the embodiments of the present invention may be combined with each other as long as they do not conflict.
The invention provides a smart clothing system based on multi-sensor information fusion, which comprises flexible clothing, a smart monitoring and feedback module, a micro data processing and communication module and a digital twin module;
a flexible circuit is arranged in the flexible garment and is used for connecting the intelligent monitoring and feedback module with the micro data processing and communication module; the flexible garment further comprises a power supply module, wherein the power supply module is arranged at the waist of the flexible garment and is used for supplying power for the intelligent garment system;
the intelligent monitoring and feedback module comprises an optical heart rate pulse and blood oxygen saturation monitoring sensor, an infrared body temperature measuring sensor, an inertial navigation attitude sensor, a dual-mode positioning module, a micro-speaker and an intelligent heating module;
the optical heart rate pulse and blood oxygen saturation monitoring sensor, the infrared body temperature measuring sensor and the inertial navigation attitude sensor are used for acquiring physiological data original signals of a user in real time; the dual-mode positioning module is used for acquiring positioning data original signals of a user in real time; the intelligent heating module and the micro-speaker provide feedback according to the data of the digital twin module.
The optical heart rate pulse and blood oxygen saturation monitoring sensor and the infrared body temperature measuring sensor are arranged on the rear neck of the flexible garment, and the inertial navigation attitude sensor, the dual-mode positioning module and the micro-speaker are arranged on the waist of the flexible garment; the intelligent heating module covers the back, the abdomen and the waist of the flexible garment.
The inertial navigation attitude sensor is internally provided with an inertial measurement unit, and the inertial measurement unit comprises an accelerometer and a gyroscope and is used for rapidly calculating the current motion attitude of a user and outputting acceleration, angular velocity and angle in real time so as to provide basis for behavior recognition.
The dual-mode positioning module adopts two independent positioning systems of Beidou positioning and GPS positioning and is used for acquiring longitude and latitude positioning information so as to provide higher positioning precision and speed under the condition of complex terrain or blocked signals.
The micro data processing and communication module comprises a wireless communication module and a micro processor; the wireless communication module and the microprocessor are both arranged at the waist of the flexible garment, the microprocessor is used for preprocessing data acquired by the intelligent monitoring and feedback module, and the wireless communication module is used for establishing real-time communication with the cloud server.
The digital twin module comprises a deep learning module and a digital model module, wherein the deep learning module and the digital model module are loaded in the cloud server, the deep learning module is used for diagnosing and storing user information, and the digital model module realizes interaction of a physical entity and the digital twin body.
The invention also provides a method based on the intelligent clothing system, as shown in fig. 1, which specifically comprises the following steps:
s1, acquiring physiological data original signals of a user by using an optical heart rate pulse and blood oxygen saturation monitoring sensor, an infrared body temperature measuring sensor and an inertial navigation attitude sensor, and acquiring positioning data original signals of the user by using a dual-mode positioning module;
s2, carrying out data preprocessing on the data acquired by the intelligent monitoring and feedback module through the micro-processor so as to acquire effective data;
s3, the preprocessed data is transmitted to the cloud server by utilizing the wireless communication module;
s4, using a deep learning module loaded in the cloud server to realize behavior recognition prediction;
s5, constructing a digital twin body model of the user in a digital space by utilizing the digital model module, mapping the real state of the user to the digital space through a user behavior recognition result, and providing voice feedback and temperature feedback according to the health state of the user.
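The five steps above can be sketched end to end as a minimal pipeline. All function names, the window size, the behavior threshold, and the heating threshold below are illustrative assumptions, not values from the patent:

```python
# Minimal sketch of the S1-S5 pipeline: raw sensor samples -> preprocessing
# -> behavior recognition (stand-in for the deep model) -> feedback decisions.

def preprocess(raw):                      # S2: moving-average denoising, window n = 3
    n = 3
    return [sum(raw[max(0, i - n + 1):i + 1]) / len(raw[max(0, i - n + 1):i + 1])
            for i in range(len(raw))]

def recognize(features):                  # S4: placeholder for the deep learning module
    return "walking" if max(features) > 1.0 else "resting"

def feedback(behavior, body_temp):        # S5: voice / heating decisions
    actions = []
    if body_temp < 36.0:                  # illustrative heating threshold
        actions.append("enable heating module")
    actions.append(f"announce behavior: {behavior}")
    return actions

raw_accel = [0.1, 0.2, 1.6, 1.4, 0.3]     # S1 data assumed already collected
features = preprocess(raw_accel)
print(feedback(recognize(features), 35.5))
```

The real system performs S2 on the waist-mounted microprocessor and S4-S5 in the cloud; the sketch collapses them into one process purely to show the data flow.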
Step S1 uses an optical heart rate pulse and blood oxygen saturation monitoring sensor, an infrared body temperature measuring sensor and an inertial navigation attitude sensor to collect physiological data original signals of a user, and uses a dual-mode positioning module to collect positioning data original signals of the user, wherein the steps are as follows:
the optical heart rate pulse and blood oxygen saturation monitoring sensor collects photoelectric volume waveform signals of red light and infrared light reflected by a user blood vessel at a specific sampling frequency;
the infrared body temperature measuring sensor collects infrared radiation energy emitted by the body surface of the user;
the inertial navigation attitude sensor acquires and outputs acceleration, angular velocity and angle of a user in real time;
and the dual-mode positioning module is used for collecting and outputting longitude and latitude coordinates of a user in real time.
Step S2 is to preprocess the data collected by the intelligent monitoring and feedback module through the micro processor to obtain effective data, and the method is as follows:
s2-1, noise reduction processing is carried out on physiological data original signals of a user, which are acquired by an optical heart rate pulse and blood oxygen saturation monitoring sensor and an infrared body temperature measuring sensor;
the formula for removing noise is as follows:
y(t) = (1/N) · Σ_{i=0}^{N−1} x(t−i)

wherein y(t) denotes the filtered output at time t, x(t−i) denotes the measurement at time t−i, N denotes the window size of the filter, and i denotes the relative position of the measurement within the window, ranging from 0 to N−1.
The acquired physiological data original signals of the user are subjected to noise reduction treatment, so that clear data are obtained.
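The moving-average denoising step above can be sketched as follows; the window size N = 4 and the sample values are arbitrary illustrations:

```python
# Moving-average filter: each output sample is the mean of the last n raw
# measurements (fewer at the start of the sequence, before the window fills).

def moving_average(samples, n=4):
    out = []
    for t in range(len(samples)):
        window = samples[max(0, t - n + 1):t + 1]
        out.append(sum(window) / len(window))
    return out

noisy_temps = [36.5, 36.6, 39.0, 36.4, 36.5]   # one spike of sensor noise
print(moving_average(noisy_temps))
```

The spike at index 2 is spread across the window instead of passing through at full amplitude, which is the intended smoothing effect.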
S2-2, calculating heart rate, blood oxygen saturation and body temperature of the user by using the data subjected to the noise reduction treatment in the step S2-1;
calculating heart rate and blood oxygen saturation by utilizing the collected changes of the wave crests and the wave bottoms of the photoelectric volume waveform signals of red light and infrared light reflected by the blood vessels of the user, wherein the photoelectric volume waveform is composed of an alternating current component and a direct current component;
the fundamental frequency of the alternating current component is determined by the heart rate, which reflects the change in blood volume between systole and diastole, and the direct current component is used mainly for detecting reflected signals of body tissues, bones and muscles, and for measuring the average blood volume of arteries and veins in the blood.
The calculation formula for the heart rate ratio HR is as follows:

HR = (AC_red / DC_red) / (AC_ir / DC_ir)

wherein AC_red is the alternating current component under red light, DC_red is the direct current component under red light, AC_ir is the alternating current component under infrared light, and DC_ir is the direct current component under infrared light;

The calculation formula for blood oxygen saturation SpO2 is as follows:

SpO2 = x · HR² + y · HR + z

wherein HR is the ratio defined above, and x, y and z are characteristic constants;

The calculation formula for the body surface temperature BT of the user is as follows:

BT = (E / σ)^{1/4}

wherein E is the infrared radiation energy emitted by the body surface of the user and collected by the infrared body temperature measuring sensor, and σ is the Stefan–Boltzmann constant.
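The three computations above can be sketched numerically. The calibration constants x, y, z and the radiated-energy value are placeholders (real devices use constants fitted per sensor), and the formulas are the reconstructed versions given above:

```python
# Red/IR AC-DC ratio, a quadratic SpO2 calibration, and body temperature
# from the Stefan-Boltzmann law. All numeric constants are illustrative.

SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W * m^-2 * K^-4

def perfusion_ratio(ac_red, dc_red, ac_ir, dc_ir):
    return (ac_red / dc_red) / (ac_ir / dc_ir)

def spo2(ratio, x=-45.06, y=30.354, z=94.845):  # example calibration only
    return x * ratio ** 2 + y * ratio + z

def body_temperature_kelvin(radiated_energy):
    return (radiated_energy / SIGMA) ** 0.25

r = perfusion_ratio(0.02, 1.0, 0.04, 1.0)       # ratio = 0.5
print(round(spo2(r), 2), round(body_temperature_kelvin(478.9), 1))
```

A sanity check for the temperature branch: feeding the law its own output, body_temperature_kelvin(SIGMA * T**4) must return T.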
The step S4 uses a deep learning module loaded in the cloud server to implement behavior recognition prediction, which is specifically as follows:
when the deep learning module is used for realizing behavior prediction, a novel algorithm is adopted to extract features, as shown in fig. 2, the algorithm firstly segments behavior data, respectively extracts time features and space features through the bidirectional time feature processing module and the space feature processing module, then fuses the time features and the space features to obtain integrated features, builds a classifier model by using the integrated features to carry out recognition training, and finally deploys the trained model in a cloud server.
S4-1, dividing behavior data, respectively extracting time features and space features through a bidirectional time feature processing module and a space feature processing module, and then fusing the time features and the space features to obtain integrated features;
the bidirectional time feature processing module is composed of a forward time feature processing module and a reverse time feature processing module, wherein the forward time feature processing module processes input data according to a forward time sequence, the reverse time feature processing module processes the input data according to a reverse sequence, and finally, the output of the forward time feature processing module and the output of the reverse time feature processing module are connected into a complete sequence, namely, the time feature is extracted. The bidirectional time feature processing module collects forward information and reverse information carried by each data by using the time feature processing module in the forward direction and the reverse direction, and the representation of each time step of the data processed by the bidirectional time feature processing module contains context information from the forward direction and the reverse direction so as to realize the functions of memorizing and predicting.
The specific implementation method of the forward time feature processing module is as follows:
(1) Screening out useless information: the input at the current time point x_t and the implicit output of the last time point h_{t−1} are connected into a vector, operated on by the screening-out parameter matrix W_f and the screening-out bias parameter b_f, and the result is passed through the sigmoid activation function σ to obtain the retention ratio value f_t. The formula is as follows:

f_t = σ(W_f · [h_{t−1}, x_t] + b_f)

Then the unit state at the last time point C_{t−1} is multiplied by the retention ratio value f_t to obtain the retention value R after screening out useless information. The formula is as follows:

R = f_t ⊙ C_{t−1}

(2) Determining the value that the current input needs to store: x_t and h_{t−1} are connected into a vector, operated on by the storage parameter matrix W_i and the storage bias parameter b_i, and the result is passed through the sigmoid activation function to obtain the storage ratio value i_t.

The formula is as follows:

i_t = σ(W_i · [h_{t−1}, x_t] + b_i)

x_t and h_{t−1} are connected into a vector, operated on by the candidate parameter matrix W_c and the candidate bias parameter b_c, and the result is passed through the tanh activation function to obtain the candidate input vector C̃_t:

C̃_t = tanh(W_c · [h_{t−1}, x_t] + b_c)

The storage ratio value i_t is multiplied by the candidate input vector C̃_t to obtain the current input storage value S:

S = i_t ⊙ C̃_t

(3) Updating the unit state: the retention value R after screening out useless information is added to the current input storage value S to obtain the updated unit state C_t. The formula is as follows:

C_t = R + S

(4) Controlling the output expression: x_t and h_{t−1} are connected into a vector, operated on by the output parameter matrix W_o and the output bias parameter b_o, and the result is passed through the sigmoid activation function to obtain the output value o_t that needs to be explicitly expressed at the current time point. The formula is as follows:

o_t = σ(W_o · [h_{t−1}, x_t] + b_o)

The updated unit state C_t, after the nonlinear change of the tanh activation function, is multiplied by o_t to obtain the implicit output of the current unit state h_t:

h_t = o_t ⊙ tanh(C_t)

C_t does not participate in the explicit expression and is only used for the output calculation at the next moment.
The reverse time feature processing module is implemented in the same way as the forward time feature processing module; the only difference is the vector used: when computing the reverse time features, the implicit output of the next time point h_{t+1} is used in place of h_{t−1}.
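One forward-cell step can be sketched numerically. This is an LSTM-style unit matching the reconstructed gate equations; the shared scalar weight w and bias b are toy stand-ins for the learned matrices W_f, W_i, W_c, W_o:

```python
# One step of the forward time feature processing unit, reduced to scalars.
import math

def sigmoid(v):
    return 1.0 / (1.0 + math.exp(-v))

def cell_step(x_t, h_prev, c_prev, w=0.5, b=0.0):
    z = w * (h_prev + x_t) + b           # stand-in for W . [h_prev, x_t] + b
    f_t = sigmoid(z)                     # retention ratio (screening step 1)
    i_t = sigmoid(z)                     # storage ratio (step 2)
    c_cand = math.tanh(z)                # candidate input vector (step 2)
    c_t = f_t * c_prev + i_t * c_cand    # updated unit state (step 3)
    o_t = sigmoid(z)                     # output gate (step 4)
    h_t = o_t * math.tanh(c_t)           # implicit output (step 4)
    return h_t, c_t

h, c = 0.0, 0.0
for x in [1.0, -0.5, 0.25]:              # a short forward-time sequence
    h, c = cell_step(x, h, c)
print(round(h, 4), round(c, 4))
```

The reverse module would run the same loop over the sequence reversed, and the bidirectional module would concatenate the two hidden outputs at each step.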
As shown in fig. 3, the spatial feature processing module is composed of a convolution layer, a dense layer block, an attention transition layer block and a dense layer block which are sequentially connected in series and is used for extracting spatial features.
The spatial feature processing module comprises three dense layer blocks, each containing four computing layers. To distinguish the importance of each channel, two attention transition layer blocks are inserted between the dense layer blocks; they detect the preset weight of each channel and assign a higher weight proportion to important channels, thereby obtaining newly calculated weights. This attention mechanism increases the model's focus on important channels and improves the discriminability of the extracted features.
As shown in fig. 4, the attention transition layer block is formed by sequentially connecting a convolution layer, an adaptive average pooling layer, a batch standardization layer, a linear layer, an ELU activation function layer and a Sigmoid activation function layer in series;
The weighted output Y is calculated as follows:

Y = X ⊙ W

wherein X represents the input data and W represents the weights calculated by the attention mechanism.
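The channel weighting Y = X ⊙ W can be sketched as follows. The pooling-then-sigmoid branch mirrors the attention transition layer block's structure, but the intermediate batch-normalization, linear, and ELU layers are collapsed into the identity for brevity, which is an illustrative simplification:

```python
# Per-channel attention: pool each channel to one value, squash it to a
# weight in (0, 1) with a sigmoid, and scale the channel by that weight.
import math

def channel_attention(x):                 # x: list of channels, each a list
    weighted = []
    for channel in x:
        pooled = sum(channel) / len(channel)            # adaptive average pool
        w = 1.0 / (1.0 + math.exp(-pooled))             # sigmoid gate
        weighted.append([v * w for v in channel])       # Y = X (.) W
    return weighted

feature_maps = [[1.0, 2.0], [-1.0, -2.0]]
print(channel_attention(feature_maps))
```

Channels with larger average activation receive weights closer to 1 and pass through nearly unchanged; weakly activated channels are suppressed.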
S4-2, constructing a classifier model by using the integrated features to perform recognition training;
the classification algorithm formula used for constructing the classifier is as follows:
P_k = exp(w_k · x) / Σ_{j=1}^{K} exp(w_j · x)

wherein K represents the number of behavioral activity types, j is an index that traverses all behavioral activity types, k represents the behavioral activity type currently being calculated, w_j denotes the vector weights of the j-th behavioral activity type, w_k denotes the vector weights of the currently calculated k-th behavioral activity type, and P_k denotes the probability of the k-th behavioral activity type.
By the method, the deep learning module can classify the characteristics extracted by the system and obtain the prediction result of each behavior activity category by calculating the probability.
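The classification step can be sketched as a softmax over per-class scores w_k · x. The class labels, weight vectors, and feature values below are illustrative placeholders:

```python
# Softmax classifier: score each class with a dot product, normalize the
# exponentiated scores into probabilities, and pick the most likely class.
import math

def softmax_probs(scores):
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def classify(features, class_weights, labels):
    scores = [sum(w * f for w, f in zip(wk, features)) for wk in class_weights]
    probs = softmax_probs(scores)
    return labels[probs.index(max(probs))], probs

labels = ["walking", "running", "resting"]
weights = [[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]]   # one weight vector per class
print(classify([2.0, 0.5], weights, labels))
```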
The loss function L used in the recognition training is as follows:
L = −(1/m) · Σ_{i=1}^{m} y_i · log(p_i)

wherein m represents the number of samples, y_i denotes the actual tag value of the i-th sample, and p_i denotes the probability output of the i-th sample predicted by the model.
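The training loss can be sketched as mean cross-entropy between one-hot labels and predicted probabilities, matching the reconstructed formula; the sample values below are made up:

```python
# Cross-entropy: for each sample, take -log of the probability the model
# assigned to the true class, then average over the batch.
import math

def cross_entropy(labels_onehot, predictions):
    m = len(labels_onehot)
    total = 0.0
    for y_vec, p_vec in zip(labels_onehot, predictions):
        total += -sum(y * math.log(p) for y, p in zip(y_vec, p_vec) if y > 0)
    return total / m

y_true = [[1, 0, 0], [0, 1, 0]]
y_pred = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
print(round(cross_entropy(y_true, y_pred), 4))
```

The loss is zero only when the model puts probability 1.0 on every true class, and grows without bound as that probability approaches zero.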
S4-3, deploying the trained model in a cloud server.
S5, constructing a digital twin body model of the user in a digital space by utilizing a digital model module, identifying a prediction result according to the user behavior, mapping the real state of the user to the digital space, and providing voice feedback and temperature feedback according to the health state of the user.
The digital twin module adopts software to construct a three-dimensional model so as to realize mapping of the intelligent clothing system in a digital space. In the module, a digital model is placed in a digital space according to the layout of the real environment so as to generate a real and effective three-dimensional digital space, and a corresponding digital twin body model is created on the basis of the body of a user so as to realize mapping of the real world and the digital world. Heart rate, blood oxygen saturation, body temperature, positioning and behavior information collected by the sensor will be displayed in digital space to reflect the real-time status of the user.
Wherein the digital twinning module is capable of analyzing and processing the collected data. By utilizing a deep learning algorithm, the state of the body can be accurately predicted and diagnosed by combining multiple physiological data, and the analysis result is displayed to the user in a visual mode such as a chart, an image and the like. The user can check the physical state of the user through the mobile phone or other equipment to know whether the potential health problem exists. When obvious abnormality occurs in the physiological data of the user, the intelligent clothing system can early warn in time through the micro-speaker so as to prevent further injury. Meanwhile, when the body temperature is reduced, the system can automatically start the intelligent heating module to keep the body temperature.
The heart rate, blood oxygen saturation, body temperature, motion state and positioning information of a user are collected through various high-precision sensors on the intelligent garment; the signals are transmitted through the flexible circuit to the waist-mounted microprocessor for data preprocessing, and the preprocessed data are transmitted to the cloud server through the wireless communication module. A deep learning algorithm deployed in the cloud server's deep learning module analyzes the physiological data and realizes behavior recognition. Meanwhile, a three-dimensional digital twin body model is loaded in the client page to monitor the physiological health condition of the user in real time. When the user's body data is abnormal, voice feedback is provided through the micro-speaker in the smart garment system. Combining intelligent wearable technology and artificial intelligence into a complete smart clothing system, the system can analyze physiological data and identify behaviors, provide personalized health diagnosis and advice, give early warning when necessary, and offer comprehensive health protection for the user.
Through the intelligent clothing system based on multi-sensor information fusion, the digital entity can receive information of the physical entity and feed back digital data to the physical entity, so that a complete closed-loop control process is realized. The system utilizes the functions of voice reminding, temperature feedback and the like to realize information feedback based on digital twinning. The sensor collects physiological information of the user, and the system can not only visually display the information in a digital space, but also analyze and directly feed back the information to the user through voice broadcasting, so that timely and effective monitoring feedback is provided.
It will be apparent to those skilled in the art that various modifications and variations can be made in the present application without departing from the spirit or scope of the application. Thus, if such modifications and variations of the present application fall within the scope of the claims and the equivalents thereof, the present application is intended to cover such modifications and variations.

Claims (7)

1. A method of a smart clothing system based on multi-sensor information fusion, which is characterized in that the smart clothing system comprises a flexible clothing, a smart monitoring and feedback module, a micro data processing and communication module and a digital twin module;
a flexible circuit is arranged in the flexible garment and is used for connecting the intelligent monitoring and feedback module with the micro data processing and communication module; the flexible garment further comprises a power supply module, wherein the power supply module is arranged at the waist of the flexible garment and is used for supplying power for the intelligent garment system;
the intelligent monitoring and feedback module comprises an optical heart rate pulse and blood oxygen saturation monitoring sensor, an infrared body temperature measuring sensor, an inertial navigation attitude sensor, a dual-mode positioning module, a micro-speaker and an intelligent heating module; the optical heart rate pulse and blood oxygen saturation monitoring sensor, the infrared body temperature measuring sensor and the inertial navigation attitude sensor are used for acquiring physiological data original signals of a user in real time; the dual-mode positioning module is used for acquiring positioning data original signals of a user in real time; the intelligent heating module and the micro loudspeaker provide feedback according to the data of the digital twin module;
the micro data processing and communication module comprises a wireless communication module and a micro processor; the wireless communication module and the micro processor are arranged at the waist of the flexible garment, the micro processor is used for preprocessing data acquired by the intelligent monitoring and feedback module, and the wireless communication module is used for establishing real-time communication with the cloud server;
the digital twin module comprises a deep learning module and a digital model module, wherein the deep learning module and the digital model module are loaded in the cloud server, the deep learning module is used for diagnosing and storing user information, and the digital model module realizes interaction of a physical entity and the digital twin body;
the method for the intelligent clothing system based on the multi-sensor information fusion comprises the following steps:
s1, acquiring physiological data original signals of a user by using an optical heart rate pulse and blood oxygen saturation monitoring sensor, an infrared body temperature measuring sensor and an inertial navigation attitude sensor, and acquiring positioning data original signals of the user by using a dual-mode positioning module;
s2, carrying out data preprocessing on the data acquired by the intelligent monitoring and feedback module through the micro-processor so as to acquire effective data;
s3, the preprocessed data is transmitted to the cloud server by utilizing the wireless communication module;
s4, a deep learning module loaded in a cloud server is used for realizing behavior recognition prediction, and the step S4 is specifically as follows:
s4-1, dividing behavior data, respectively extracting time features and space features through a bidirectional time feature processing module and a space feature processing module, and then fusing the time features and the space features to obtain integrated features; the bidirectional time feature processing module in the step S4-1 is composed of a forward time feature processing module and a reverse time feature processing module, the forward time feature processing module processes the input data according to the forward time sequence, the reverse time feature processing module processes the input data according to the reverse sequence, and the output of the forward time feature processing module and the output of the reverse time feature processing module are connected into a complete sequence, namely, the time feature is extracted;
the specific implementation method of the forward time feature processing module is as follows:
(1) Screening out useless information: the input at the current time point x_t and the implicit output of the last time point h_{t−1} are connected into a vector, operated on by the screening-out parameter matrix W_f and the screening-out bias parameter b_f, and the result is passed through the sigmoid activation function σ to obtain the retention ratio value f_t. The formula is as follows:

f_t = σ(W_f · [h_{t−1}, x_t] + b_f)

Then the unit state at the last time point C_{t−1} is multiplied by the retention ratio value f_t to obtain the retention value R after screening out useless information. The formula is as follows:

R = f_t ⊙ C_{t−1}

(2) Determining the value that the current input needs to store: x_t and h_{t−1} are connected into a vector, operated on by the storage parameter matrix W_i and the storage bias parameter b_i, and the result is passed through the sigmoid activation function to obtain the storage ratio value i_t.

The formula is as follows:

i_t = σ(W_i · [h_{t−1}, x_t] + b_i)

x_t and h_{t−1} are connected into a vector, operated on by the candidate parameter matrix W_c and the candidate bias parameter b_c, and the result is passed through the tanh activation function to obtain the candidate input vector C̃_t:

C̃_t = tanh(W_c · [h_{t−1}, x_t] + b_c)

The storage ratio value i_t is multiplied by the candidate input vector C̃_t to obtain the current input storage value S:

S = i_t ⊙ C̃_t

(3) Updating the unit state: the retention value R after screening out useless information is added to the current input storage value S to obtain the updated unit state C_t. The formula is as follows:

C_t = R + S

(4) Controlling the output expression: x_t and h_{t−1} are connected into a vector, operated on by the output parameter matrix W_o and the output bias parameter b_o, and the result is passed through the sigmoid activation function to obtain the output value o_t that needs to be explicitly expressed at the current time point. The formula is as follows:

o_t = σ(W_o · [h_{t−1}, x_t] + b_o)

The updated unit state C_t, after the nonlinear change of the tanh activation function, is multiplied by o_t to obtain the implicit output of the current unit state h_t:

h_t = o_t ⊙ tanh(C_t)

C_t does not participate in the explicit expression and is only used for the output calculation at the next moment;
s4-2, constructing a classifier model by using the integrated features to perform recognition training;
s4-3, deploying the trained model in a cloud server;
s5, constructing a digital twin body model of the user in a digital space by utilizing the digital model module, mapping the real state of the user to the digital space through a user behavior recognition result, and providing voice feedback and temperature feedback according to the health state of the user.
2. The method of intelligent clothing system based on multi-sensor information fusion according to claim 1, wherein the optical heart rate pulse and blood oxygen saturation monitoring sensor and the infrared body temperature measuring sensor are mounted on the rear neck of the flexible clothing, and the inertial navigation attitude sensor, the dual-mode positioning module and the micro-speaker are all arranged on the waist of the flexible clothing; the intelligent heating module is covered on the back, the abdomen and the waist of the flexible garment;
an inertial measurement unit is arranged in the inertial navigation attitude sensor, and comprises an accelerometer and a gyroscope, and is used for rapidly calculating the current motion attitude of a user and outputting acceleration, angular velocity and angle in real time;
the dual-mode positioning module adopts two independent positioning systems of Beidou positioning and GPS positioning and is used for acquiring longitude and latitude positioning information.
3. The method of smart garment system based on multi-sensor information fusion according to claim 1 or 2, wherein said step S1 is specifically as follows:
the optical heart rate pulse and blood oxygen saturation monitoring sensor collects photoelectric volume waveform signals of red light and infrared light reflected by blood vessels of a user at a specific sampling frequency;
the infrared body temperature measuring sensor collects infrared radiation energy emitted by the body surface of the user;
the inertial navigation attitude sensor acquires and outputs acceleration, angular velocity and angle of a user in real time;
and the dual-mode positioning module is used for collecting and outputting longitude and latitude coordinates of a user in real time.
4. The method of intelligent clothing system based on multi-sensor information fusion according to claim 3, wherein said step S2 is specifically as follows:
s2-1, noise reduction processing is carried out on physiological data original signals of a user, which are acquired by an optical heart rate pulse and blood oxygen saturation monitoring sensor and an infrared body temperature measuring sensor;
the formula for removing noise is as follows:
y(t) = (1/N) · Σ_{i=0}^{N−1} x(t−i)

wherein y(t) denotes the filtered output at time t, x(t−i) denotes the measurement at time t−i, N denotes the window size of the filter, and i denotes the relative position of the measurement within the window, ranging from 0 to N−1.
5. The method of intelligent clothing system based on multi-sensor information fusion according to claim 4, wherein the step S2 further comprises:
s2-2, calculating heart rate, blood oxygen saturation and body temperature of the user by using the data subjected to the noise reduction treatment in the step S2-1;
calculating heart rate and blood oxygen saturation by utilizing the collected changes of the wave crests and the wave bottoms of the photoelectric volume waveform signals of red light and infrared light reflected by the blood vessels of the user, wherein the photoelectric volume waveform is composed of an alternating current component and a direct current component;
the calculation formula for the heart rate ratio HR is as follows:

HR = (AC_red / DC_red) / (AC_ir / DC_ir)

wherein AC_red is the alternating current component under red light, DC_red is the direct current component under red light, AC_ir is the alternating current component under infrared light, and DC_ir is the direct current component under infrared light;

the calculation formula for blood oxygen saturation SpO2 is as follows:

SpO2 = x · HR² + y · HR + z

wherein HR is the ratio defined above, and x, y and z are characteristic constants;

the calculation formula for the body surface temperature BT of the user is as follows:

BT = (E / σ)^{1/4}

wherein E is the infrared radiation energy emitted by the body surface of the user and collected by the infrared body temperature measuring sensor, and σ is the Stefan–Boltzmann constant.
6. The method of intelligent clothing system based on multi-sensor information fusion according to claim 5, wherein the spatial feature processing module in step S4-1 is composed of a convolution layer, a dense layer block, an attention transition layer block and a dense layer block which are sequentially connected in series for extracting spatial features;
the spatial feature processing module comprises three dense layer blocks, and four computing layers are arranged in each dense layer block;
the attention transition layer block is formed by sequentially connecting a convolution layer, a self-adaptive average pooling layer, a batch standardization layer, a linear layer, an ELU activation function layer and a Sigmoid activation function layer in series;
the weighted output Y is calculated as follows:

Y = X ⊙ W

wherein X represents the input data and W represents the weights calculated by the attention mechanism.
7. The method of intelligent clothing system based on multi-sensor information fusion according to claim 6, wherein the classification algorithm formula adopted in the classifier constructed in step S4-2 is as follows:
P_k = exp(w_k · x) / Σ_{j=1}^{K} exp(w_j · x)

wherein K represents the number of behavioral activity types, j is an index that traverses all behavioral activity types, k represents the behavioral activity type currently being calculated, w_j denotes the vector weights of the j-th behavioral activity type, w_k denotes the vector weights of the currently calculated k-th behavioral activity type, and P_k denotes the probability of the k-th behavioral activity type;
the loss function L used in the recognition training is as follows:
L = −(1/m) · Σ_{i=1}^{m} y_i · log(p_i)

wherein m represents the number of samples, y_i denotes the actual tag value of the i-th sample, and p_i denotes the probability output of the i-th sample predicted by the model.
CN202311764036.2A 2023-12-20 2023-12-20 Intelligent clothing system and method based on multi-sensing information fusion Active CN117426758B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311764036.2A CN117426758B (en) 2023-12-20 2023-12-20 Intelligent clothing system and method based on multi-sensing information fusion


Publications (2)

Publication Number Publication Date
CN117426758A CN117426758A (en) 2024-01-23
CN117426758B true CN117426758B (en) 2024-04-05

Family

ID=89556888

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311764036.2A Active CN117426758B (en) 2023-12-20 2023-12-20 Intelligent clothing system and method based on multi-sensing information fusion

Country Status (1)

Country Link
CN (1) CN117426758B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106264475A (en) * 2016-10-13 2017-01-04 西安交通大学 Single photoelectric sensor sleep-respiratory multi-physiological-parameter monitoring method and device
CN107432739A (en) * 2017-09-20 2017-12-05 武汉纺织大学 A kind of smart garment system for health monitoring
KR20180075368A (en) * 2016-12-26 2018-07-04 한국과학기술원 Dropout method for improving training speed and memory efficiency on artificial neural network and learning method based on the same
CN113895272A (en) * 2021-10-15 2022-01-07 青岛科技大学 Electric vehicle alternating current charging state monitoring and fault early warning method based on deep learning
CN114678097A (en) * 2022-05-25 2022-06-28 武汉纺织大学 Artificial intelligence and digital twinning system and method for intelligent clothes
CN116720620A (en) * 2023-06-10 2023-09-08 河南工业大学 Grain storage ventilation temperature prediction method based on IPSO algorithm optimization CNN-BiGRU-Attention network model
CN116849628A (en) * 2023-06-28 2023-10-10 贵州师范大学 Intelligent health monitoring system based on singlechip

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9892344B1 (en) * 2015-11-30 2018-02-13 A9.Com, Inc. Activation layers for deep learning networks


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Ling Lixin, et al. Chemical Engineering Unit Practical Training Operations. Chongqing University Press, 2020, p. 112. *
Application and Progress of Deep Learning in 2D Virtual Try-on Technology; Computer Engineering and Applications; 2023-02-15; Vol. 59, No. 11; pp. 37-45. *

Also Published As

Publication number Publication date
CN117426758A (en) 2024-01-23

Similar Documents

Publication Publication Date Title
He et al. A low power fall sensing technology based on FD-CNN
Lester et al. A practical approach to recognizing physical activities
Karthika et al. Raspberry Pi-enabled Wearable Sensors for Personal Health Tracking and Analysis
CN104269025B (en) Wearable single node feature and the position choosing method of monitoring is fallen down towards open air
CN110456320A (en) A kind of ULTRA-WIDEBAND RADAR personal identification method based on free space gait temporal aspect
Kang et al. Real-time elderly activity monitoring system based on a tri-axial accelerometer
CN113990500A (en) Vital sign parameter monitoring method and device and storage medium
Parmar et al. A comprehensive survey of various approaches on human fall detection for elderly people
CN211484541U (en) Old person who fuses multisensor falls down prediction device
Dinh et al. A fall and near-fall assessment and evaluation system
CN117426758B (en) Intelligent clothing system and method based on multi-sensing information fusion
CN109620269A (en) Fatigue detection method, device, equipment and readable storage medium storing program for executing
Kabir et al. Secure Your Steps: A Class-Based Ensemble Framework for Real-Time Fall Detection Using Deep Neural Networks
Pascoal et al. Activity recognition in outdoor sports environments: smart data for end-users involving mobile pervasive augmented reality systems
Vermander et al. Intelligent systems for sitting posture monitoring and anomaly detection: an overview
Imran et al. EdgeHARNet: An Edge-Friendly Shallow Convolutional Neural Network for Recognizing Human Activities Using Embedded Inertial Sensors of Smart-Wearables
Biswas et al. CORDIC framework for quaternion-based joint angle computation to classify arm movements
CN116115239A (en) Embarrassing working gesture recognition method for construction workers based on multi-mode data fusion
CN111861275B (en) Household work mode identification method and device
CN115393956A (en) CNN-BilSTM fall detection method for improving attention mechanism
CN116089279A (en) Intelligent cabin HMI evaluation method, system and storage medium
Peng et al. Experimental analysis of artificial neural networks performance for accessing physical activity recognition in daily life
KR20220144983A (en) Emotion recognition system using image and electrocardiogram
Wang et al. SwimSense: Monitoring swimming motion using body sensor networks
CN116026493B (en) Core body temperature detection method and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant