
CN117953591B - Intelligent limb rehabilitation assisting method and device - Google Patents

Intelligent limb rehabilitation assisting method and device

Info

Publication number
CN117953591B
CN117953591B
Authority
CN
China
Prior art keywords
data
limb
moving image
image
key point
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202410354728.8A
Other languages
Chinese (zh)
Other versions
CN117953591A (en)
Inventor
汪杰
李宏增
张华�
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fourth Military Medical University FMMU
Original Assignee
Fourth Military Medical University FMMU
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fourth Military Medical University FMMU
Priority to CN202410354728.8A
Publication of CN117953591A
Application granted
Publication of CN117953591B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/10 Image acquisition
    • G06V10/16 Image acquisition using multiple overlapping images; Image stitching
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G PHYSICS
    • G16 INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
    • G16H HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
    • G16H20/00 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance
    • G16H20/30 ICT specially adapted for therapies or health-improving plans, e.g. for handling prescriptions, for steering therapy or for monitoring patient compliance relating to physical therapies or activities, e.g. physiotherapy, acupressure or exercising

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Health & Medical Sciences (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Epidemiology (AREA)
  • Public Health (AREA)
  • Primary Health Care (AREA)
  • Medical Informatics (AREA)
  • Physical Education & Sports Medicine (AREA)
  • Biophysics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to the field of computer vision technology, and in particular to an intelligent limb rehabilitation assistance method and device. The method comprises the following steps: collecting limb movement data through an image sensor to obtain limb movement image data; performing posture estimation on the limb movement image data to obtain posture estimation data, and performing key point detection on the posture estimation data to obtain limb movement key point detection data; performing action recognition on the limb movement key point detection data to obtain limb movement action recognition data; and generating action parameters from the limb movement key point detection data according to the limb movement action recognition data to obtain limb action parameter data, so as to perform intelligent limb rehabilitation assistance. The present invention provides accurate and reliable parameters for intelligent limb rehabilitation assistance, reducing dependence on manual labor and lowering both labor and time costs.

Description

Intelligent limb rehabilitation assisting method and device

Technical Field

The present invention relates to the field of computer vision technology, and in particular to an intelligent limb rehabilitation assistance method and device.

Background Art

Conventional limb rehabilitation assistance methods typically use rehabilitation techniques or assistive equipment to carry out rehabilitation work. This process usually relies on manual assistance or simple equipment, and on the experience and skill of rehabilitation therapists; however, human resources are limited and cannot meet all needs. Computer vision is the science of making machines "see": using cameras and computers in place of human eyes to identify, track, and measure targets, and further processing the images so that they are better suited to human observation or to transmission to instruments for inspection. As a scientific discipline, computer vision studies the related theories and technologies and attempts to build artificial intelligence systems that can obtain "information" from images or multidimensional data. Information here is meant in Shannon's sense: information that can be used to help make a "decision". Because perception can be regarded as extracting information from sensory signals, computer vision can also be seen as the science of how to make artificial systems "perceive" from images or multidimensional data. How to combine limb rehabilitation assistance methods with computer vision technology has therefore become a problem to be solved.

发明内容Summary of the invention

In order to solve the above technical problems, the present invention proposes an intelligent limb rehabilitation assistance method and device, so as to solve at least one of the above technical problems.

The present application provides an intelligent limb rehabilitation assistance method, comprising the following steps:

Step S1: collecting limb movement data through an image sensor to obtain limb movement image data;

Step S2, comprising:

Step S21: performing posture estimation on the limb movement image data to obtain posture estimation data;

Step S22: performing local key point detection on the posture estimation data to obtain first limb movement key point detection data;

Step S23: performing global key point detection on the posture estimation data to obtain second limb movement key point detection data;

Step S3: performing action recognition on the limb movement key point detection data to obtain limb movement action recognition data, wherein the limb movement key point detection data includes the first limb movement key point detection data and the second limb movement key point detection data;

Step S4, comprising:

performing action classification mapping according to the limb movement action recognition data to obtain action classification mapping data;

performing action timing division according to the limb movement key point detection data and the action classification mapping data to obtain action timing division data;

extracting action features according to the action timing division data to obtain action feature data, wherein the action feature data includes action duration data, action speed data, action acceleration data, and action angle change data;

generating action parameters according to the action classification mapping data, the action timing division data, and the action feature data to obtain limb action parameter data for performing intelligent limb rehabilitation assistance.
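The four action features named in step S4 (duration, speed, acceleration, angle change) can be sketched concretely. The following is an illustrative sketch only: the per-frame keypoint layout, the joint names ("shoulder", "elbow", "wrist"), the frame rate, and the choice of the wrist/elbow joints are assumptions for the example and are not specified in the patent text.

```python
import math

def action_features(frames, fps=30.0):
    """Compute the four step-S4 action features from a keypoint trajectory.
    `frames` is a list of per-frame keypoint dicts mapping joint name -> (x, y);
    this data layout is an illustrative assumption."""
    n = len(frames)
    duration = (n - 1) / fps  # action duration in seconds

    def pos(i, joint):
        return frames[i][joint]

    # Per-frame speed of the wrist joint: Euclidean displacement times fps.
    speeds = []
    for i in range(1, n):
        (x0, y0), (x1, y1) = pos(i - 1, "wrist"), pos(i, "wrist")
        speeds.append(math.hypot(x1 - x0, y1 - y0) * fps)

    # Acceleration as the frame-to-frame change in speed.
    accels = [(s1 - s0) * fps for s0, s1 in zip(speeds, speeds[1:])]

    # Elbow angle (shoulder-elbow-wrist) at a given frame, in degrees.
    def elbow_angle(i):
        (sx, sy), (ex, ey), (wx, wy) = pos(i, "shoulder"), pos(i, "elbow"), pos(i, "wrist")
        a1 = math.atan2(sy - ey, sx - ex)
        a2 = math.atan2(wy - ey, wx - ex)
        return abs(math.degrees(a1 - a2)) % 360.0

    return {
        "duration_s": duration,
        "mean_speed": sum(speeds) / len(speeds),
        "mean_accel": sum(accels) / len(accels) if accels else 0.0,
        "angle_change_deg": elbow_angle(n - 1) - elbow_angle(0),
    }

# Synthetic 4-frame trajectory: the wrist drifts sideways at constant speed.
frames = [{"shoulder": (0.0, 0.0), "elbow": (0.0, 1.0), "wrist": (x, 2.0)}
          for x in (0.0, 0.1, 0.2, 0.3)]
feats = action_features(frames, fps=30.0)
```

A constant-velocity trajectory like this yields a near-zero mean acceleration and a small negative elbow-angle change, which is the kind of per-action summary the parameter-generation step can consume.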

In the present invention, using an image sensor to collect limb movement data avoids invasive detection or manipulation of the user, improving user comfort and acceptance. The data collected by the image sensor can be processed and analyzed in real time, so that the rehabilitation assistance can be adjusted and fed back promptly according to the user's actual condition. By performing posture estimation and key point detection on the limb movement image data, the user's movement state and key point information can be obtained accurately, enabling parameter generation and monitoring that better fit the user's actual situation. The method applies action recognition to the limb movement key point detection data, automatically identifying the user's movements without manual intervention, and automatically generates the corresponding action parameters from the recognition results, further simplifying the rehabilitation assistance workflow. Compared with traditional rehabilitation assistance methods, this method requires no expensive equipment or complex operations, has a lower cost, and is easy to implement and promote, helping to improve the coverage and availability of rehabilitation services and reducing dependence on manual labor.

Through the posture estimation of step S21, the overall posture of the limb can be obtained, which helps to better understand the limb's overall movement state and provides an important reference for subsequent key point detection. Step S22 performs local key point detection on the posture estimation data, precisely identifying local key points that represent important parts and movement characteristics of the limb and helping to analyze the details of limb movement more accurately. Step S23 performs global key point detection on the posture estimation data, capturing key point information of the limb as a whole; the global key points reflect the limb's overall structure and movement state and support a more comprehensive analysis of its overall movement characteristics. The present invention thus improves the accuracy and stability of key point detection, helps to identify and locate the limb's key points more accurately, and provides reliable data support for subsequent action recognition and parameter generation.

By performing action classification mapping on the limb movement action recognition data, the movement data can be mapped to specific action categories, enabling the classification and recognition of different actions; this helps the system better understand the user's movement behavior and provides effective data support for subsequent rehabilitation assistance. Dividing the action timing according to the limb movement key point detection data and the action classification mapping data decomposes the whole movement process into distinct time periods or phases, supporting a finer and more precise analysis of the movement and providing a more reliable basis for subsequent feature extraction and parameter generation. Extracting action features from the timing division data yields information about the movement from multiple aspects, such as duration, speed, acceleration, and angle change; these features reflect the various aspects of the movement and provide additional reference for subsequent parameter generation and rehabilitation assistance. Generating action parameters from the action classification mapping data, the action timing division data, and the action feature data takes the action's type, timing, and features into account together, yielding more accurate limb action parameter data.

Preferably, step S1 specifically comprises:

Step S11: collecting first limb movement data with the image sensor at a preset first threshold angle to obtain first limb movement image data;

Step S12: collecting second limb movement data with the image sensor at a preset second threshold angle to obtain second limb movement image data;

Step S13: performing image stitching on the first limb movement image data and the second limb movement image data to obtain the limb movement image data.

By using an image sensor, the user's limb movement data can be collected accurately and in real time. The preset first and second threshold angles can be adjusted to the user's specific situation to suit different users' motor abilities, making the collected data more usable and accurate. By setting different threshold angles, limb movement data can be collected from different angles, so that the user's movement posture can be observed and analyzed from multiple viewpoints and the state and characteristics of the limb movement can be evaluated more accurately. Stitching the first and second limb movement image data enriches the image information: the stitched image presents a more complete and comprehensive picture of the limb movement, which helps to improve the accuracy and stability of subsequent posture estimation and key point detection.

Preferably, step S13 specifically comprises:

Step S131: performing first limb angle detection on the first limb movement image data according to the preset first threshold angle data to obtain first limb angle detection data;

Step S132: performing second limb angle detection on the second limb movement image data according to the preset second threshold angle data to obtain second limb angle detection data;

Step S133: performing image correction on the first limb movement image data according to the first limb angle detection data to obtain first limb movement image correction data, and performing image correction on the second limb movement image data according to the second limb angle detection data to obtain second limb movement image correction data;

Step S134: extracting feature points from the first limb movement image correction data and the second limb movement image correction data to obtain first limb movement image feature point data and second limb movement image feature point data;

Step S135: performing feature matching on the first limb movement image feature point data and the second limb movement image feature point data to obtain limb movement image feature matching data;

Step S136: performing perspective transformation on the first limb movement image data and the second limb movement image data according to the limb movement image feature matching data to obtain limb movement image perspective transformation data;

Step S137: performing image fusion on the first limb movement image perspective transformation data and the second limb movement image perspective transformation data to obtain limb movement image fusion data;

Step S138: performing edge repair on the limb movement image fusion data to obtain the limb movement image data.
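The matching and fusion stages of the stitching pipeline (steps S135 to S137) can be sketched as follows. This is a deliberately simplified illustration: the feature representation (a point plus a numeric descriptor tuple), the greedy nearest-neighbour matcher, and especially the replacement of the full perspective transform of step S136 with a pure translation model are all assumptions made for brevity, not the patent's method.

```python
def match_features(feats_a, feats_b):
    """Step S135 sketch: greedy nearest-neighbour matching.
    Each feature is ((x, y), descriptor); descriptors are tuples of numbers."""
    matches = []
    for pa, da in feats_a:
        # Pick the feature in B with the smallest squared descriptor distance.
        best = min(feats_b, key=lambda fb: sum((u - v) ** 2 for u, v in zip(da, fb[1])))
        matches.append((pa, best[0]))
    return matches

def estimate_offset(matches):
    """Stand-in for the perspective transform of step S136: assume the two
    views differ only by a translation and average the matched displacements."""
    dxs = [xb - xa for (xa, ya), (xb, yb) in matches]
    dys = [yb - ya for (xa, ya), (xb, yb) in matches]
    return sum(dxs) / len(dxs), sum(dys) / len(dys)

def fuse_images(img_a, img_b, dx):
    """Step S137 sketch: place img_b `dx` columns to the right of img_a and
    average the overlap. Images are row-major lists of grayscale values;
    only an integer horizontal offset is handled here."""
    h, wa, wb = len(img_a), len(img_a[0]), len(img_b[0])
    width = max(wa, dx + wb)
    out = []
    for r in range(h):
        row = []
        for c in range(width):
            vals = []
            if c < wa:
                vals.append(img_a[r][c])
            if 0 <= c - dx < wb:
                vals.append(img_b[r][c - dx])
            row.append(sum(vals) / len(vals))  # average where the views overlap
        out.append(row)
    return out

# Two toy views whose matched features are shifted by (2, 1).
feats_a = [((0.0, 0.0), (1.0, 0.0)), ((5.0, 5.0), (0.0, 1.0))]
feats_b = [((2.0, 1.0), (1.0, 0.0)), ((7.0, 6.0), (0.0, 1.0))]
dx, dy = estimate_offset(match_features(feats_a, feats_b))
panorama = fuse_images([[10, 20, 30]], [[30, 40]], 2)
```

In a real implementation the translation model would be replaced by a homography estimated from the matches (as step S136 describes), but the overlap-averaging fusion step carries over unchanged.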

Steps S131 and S132 use angle detection to accurately obtain the angle information of the first and second limbs. Through image correction (step S133), the collected image data can be corrected, reducing errors introduced by angle deviation and improving the accuracy of subsequent processing. Steps S134 and S135 use feature point extraction and matching to extract key feature points from the corrected images and associate them through feature matching, which helps to capture the key information of the limb movement accurately and consistently. Steps S136 and S137 use perspective transformation and image fusion to combine the two corrected images into more comprehensive and complete limb movement image data, improving the information content and quality of the image and thus allowing the limb movement to be analyzed more accurately. Step S138 performs edge repair, fixing edge artifacts introduced by the correction, transformation, and other processing steps, making the image clearer and more accurate and improving the results of subsequent processing.

Preferably, step S134 specifically comprises:

Step S101: performing cluster calculation on the first limb movement image correction data and the second limb movement image correction data to obtain first limb movement image cluster data and second limb movement image cluster data;

Step S102: extracting cluster centers from the first limb movement image cluster data and the second limb movement image cluster data to obtain first image cluster center data and second image cluster center data;

Step S103: extracting cluster radii from the first limb movement image cluster data and the second limb movement image cluster data to obtain first cluster radius data and second cluster radius data;

Step S104: performing a weighted calculation on the first cluster radius data using the first limb movement image cluster data to obtain first cluster radius weighted data, and performing a weighted calculation on the second cluster radius data using the second limb movement image cluster data to obtain second cluster radius weighted data;

Step S105: performing neighborhood pixel selection on the first image cluster center data using the first cluster radius weighted data to obtain first limb movement image neighborhood pixel data, and performing neighborhood pixel selection on the second image cluster center data using the second cluster radius weighted data to obtain second limb movement image neighborhood pixel data;

Step S106: performing dynamic pixel grayscale judgment on the first limb movement image neighborhood pixel data and the second limb movement image neighborhood pixel data to obtain the first limb movement image feature point data and the second limb movement image feature point data.
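The clustering chain of steps S101 to S105 can be sketched for one image. The text does not name a clustering algorithm or a weighting formula, so the following makes explicit assumptions: plain k-means over pixel coordinates for step S101, the mean distance to the centre as the cluster radius for step S103, and scaling that radius by the cluster's share of all points for step S104.

```python
import math
import random

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def mean_point(pts):
    return (sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts))

def kmeans(points, k, iters=10, seed=0):
    """Step S101 sketch: plain k-means (an assumed choice of algorithm)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # initialise from the data points
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda j: dist(p, centers[j]))
            clusters[j].append(p)
        # Step S102: the cluster centre is the mean of its members.
        centers = [mean_point(c) if c else centers[j] for j, c in enumerate(clusters)]
    return centers, clusters

def weighted_radius(center, cluster, total_points):
    """Steps S103-S104 sketch: mean distance to the centre, weighted by the
    cluster's share of all points (the exact weighting is assumed)."""
    r = sum(dist(p, center) for p in cluster) / len(cluster)
    return r * (len(cluster) / total_points)

def select_feature_points(image, centers_radii, threshold):
    """Steps S105-S106 sketch: keep pixels inside each (weighted) radius whose
    grayscale reaches the threshold. `image` maps (x, y) -> grey value."""
    feats = []
    for center, radius in centers_radii:
        for xy, grey in image.items():
            if dist(xy, center) <= radius and grey >= threshold:
                feats.append(xy)
    return feats

# Two well-separated point clouds; k-means recovers their means.
points = [(0, 0), (1, 0), (0, 1), (10, 10), (11, 10), (10, 11)]
centers, clusters = kmeans(points, 2)
radii = [weighted_radius(c, cl, len(points)) for c, cl in zip(centers, clusters)]

image = {(0, 0): 200, (1, 0): 50, (5, 5): 220}
selected = select_feature_points(image, [((0.0, 0.0), 1.5)], threshold=100)
```

The dynamic part of step S106, where the threshold itself is derived from lighting conditions rather than fixed, is detailed in the next refinement.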

Step S134 processes and analyzes the image data in a refined manner, helping to capture the feature information in the image more accurately and improving the precision and reliability of subsequent processing. Steps S104 and S105 use weighted calculation and neighborhood pixel selection to weight the image data and select pixels according to the clustering data and cluster radius data, highlighting the key information in the image and improving the accuracy and stability of feature point extraction. Step S106 performs dynamic pixel grayscale judgment, screening feature points according to the grayscale information of the image, which reduces sensitivity to noise and interference and improves feature point recognition and extraction. By analyzing the image data at multiple levels and from multiple angles, the method improves the accuracy and stability of feature point extraction for limb movement images. Traditional feature point extraction often relies on a fixed threshold; the present invention adapts to the characteristics of the image itself, improving the extraction of feature points and the quality of the image data and overcoming the insufficient precision and unstable extraction of the prior art.

Preferably, step S106 specifically comprises:

obtaining illumination angle data and illumination intensity data through a light sensor;

performing angle calculation on the preset first threshold angle data and the preset second threshold angle data according to the illumination angle data to obtain first shooting illumination angle data and second shooting illumination angle data;

generating a grayscale threshold according to the first shooting illumination angle data and the illumination intensity data to obtain first grayscale threshold data, and generating a grayscale threshold according to the second shooting illumination angle data and the illumination intensity data to obtain second grayscale threshold data;

performing pixel grayscale judgment on the first limb movement image neighborhood pixel data using the first grayscale threshold data to obtain the first limb movement image feature point data, and performing pixel grayscale judgment on the second limb movement image neighborhood pixel data using the second grayscale threshold data to obtain the second limb movement image feature point data.
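The illumination-adaptive threshold can be sketched with a simple model. The patent only states that the threshold is derived from the shooting angle, the illumination angle, and the illumination intensity; the cosine relationship and the `base`/`gain` constants below are illustrative assumptions chosen so that brighter, more frontal light raises the threshold.

```python
import math

def grayscale_threshold(shoot_angle_deg, light_angle_deg, intensity,
                        base=80.0, gain=0.3):
    """Sketch of the threshold generation in step S106. The cosine model and
    the base/gain constants are assumptions, not the patent's formula."""
    rel = math.radians(light_angle_deg - shoot_angle_deg)
    # Frontal, intense light brightens the scene, so the threshold rises;
    # grazing light contributes nothing beyond the base threshold.
    return base + gain * intensity * max(0.0, math.cos(rel))

def judge_pixels(pixels, threshold):
    """Pixel grayscale judgment: keep coordinates whose grey value reaches
    the (dynamically generated) threshold."""
    return [xy for xy, grey in pixels.items() if grey >= threshold]

t_frontal = grayscale_threshold(0.0, 0.0, 100.0)   # light straight down the view axis
t_grazing = grayscale_threshold(0.0, 90.0, 100.0)  # light at 90 degrees to it
kept = judge_pixels({(0, 0): 120, (1, 1): 90}, t_frontal)
```

Each camera view (first and second threshold angle) gets its own threshold from the same function, which is what lets the two views be judged consistently under the same lighting.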

By using the illumination angle data obtained from the light sensor, the influence of the illumination direction on the image can be taken into account. Calculating the grayscale threshold from the illumination angle data allows the image to be adjusted adaptively to different lighting conditions, improving the stability and robustness of image processing. Step S106 dynamically generates the grayscale threshold from the illumination angle and illumination intensity data, adjusting the threshold to the lighting conditions in real time; this avoids the unstable processing or misjudgments caused by varying lighting and improves the accuracy of feature point extraction. Judging pixel grayscale against a threshold generated from the illumination angle data enables adaptive grayscale judgment, helping to extract image feature points accurately under different lighting conditions and overcoming the unstable feature extraction of the prior art under large lighting variations. In this way, step S106 improves the robustness of image processing: feature points can be extracted more accurately from images taken under different lighting conditions, improving the effectiveness and reliability of subsequent processing.

Preferably, step S22 specifically comprises:

performing branch network detection selection according to the posture estimation data to obtain branch network detection data;

performing key point detection for specific body parts on the posture estimation data according to the branch network detection data to obtain part-specific key point detection data;

performing feature fusion on the part-specific key point detection data corresponding to the different branch network detection data to obtain the first limb movement key point detection data.
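The branch-network structure of step S22 can be sketched as a dispatch-and-merge pattern: one detector per body part is selected from the pose estimate, run on that part, and the results are fused. In this sketch the "branch networks" are plain callables and the fusion is a dictionary union; the part names, the pose layout, and the fusion rule are illustrative assumptions, since the patent does not specify the network architecture.

```python
def detect_first_keypoints(pose, branches):
    """Step S22 sketch: select a branch detector per body part named in the
    pose estimate, run it, and merge the per-part key points."""
    fused = {}
    for part, part_pose in pose.items():
        detector = branches.get(part)
        if detector is None:
            continue  # no branch network registered for this part
        fused.update(detector(part_pose))  # simple union as the "feature fusion"
    return fused

# Toy branch "networks": each maps its part's pose to named key points.
branches = {
    "arm": lambda p: {"elbow": p["mid"], "wrist": p["end"]},
    "leg": lambda p: {"knee": p["mid"], "ankle": p["end"]},
}
pose = {
    "arm": {"mid": (3, 4), "end": (5, 6)},
    "leg": {"mid": (1, 2), "end": (0, 1)},
    "torso": {},  # no branch registered; silently skipped
}
kp = detect_first_keypoints(pose, branches)
```

Swapping the lambdas for real per-part models would keep the same dispatch-and-merge shape while letting each branch specialize on its body part.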

By introducing multiple branch networks, each responsible for detecting the key points of a specific body part, the movement information of different parts of the limb can be captured more precisely, improving the accuracy and comprehensiveness of detection. On top of the branch network selection, key point detection is performed for each part separately, making the detection results more detailed and specific and helping to analyze and evaluate the user's movement state more precisely. Fusing the part-specific key point data detected by the different branch networks takes the movement information of the different parts into account, yielding more accurate limb movement key point detection data and improving the rehabilitation assistance system's ability to understand and analyze the user's movement state.

Preferably, step S3 specifically comprises:

performing feature extraction on the limb movement key point detection data to obtain limb movement key point feature data;

performing feature selection on the limb movement key point feature data to obtain key point feature selection data;

performing action recognition on the key point feature selection data to obtain the limb movement action recognition data.
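The selection-then-recognition chain of step S3 can be sketched minimally. The patent names neither a selection criterion nor a classifier, so the following assumes variance ranking for feature selection and a nearest-centroid classifier for recognition; both are stand-ins chosen for clarity.

```python
import statistics

def select_features(samples, k):
    """Feature selection sketch: keep the k dimensions with the highest
    variance across samples (variance ranking is an assumed criterion).
    `samples` is a list of equal-length feature vectors."""
    dims = range(len(samples[0]))
    variances = [statistics.pvariance([s[d] for s in samples]) for d in dims]
    top = sorted(dims, key=lambda d: variances[d], reverse=True)[:k]
    return sorted(top)  # return the kept dimension indices in order

def recognise(vector, centroids, keep):
    """Action recognition sketch: nearest centroid over the selected
    dimensions only, so discarded dimensions cost nothing at runtime."""
    def d2(label):
        c = centroids[label]
        return sum((vector[i] - c[i]) ** 2 for i in keep)
    return min(centroids, key=d2)

# Dimension 1 is constant across samples, so selection drops it.
samples = [(1, 0, 5), (2, 0, 1), (3, 0, 9)]
keep = select_features(samples, k=2)

centroids = {"raise": (1, 0, 8), "bend": (3, 0, 1)}
action = recognise((1.2, 0, 7), centroids, keep)
```

Dropping the zero-variance dimension illustrates the efficiency argument in the text: the classifier compares fewer values per query without losing discriminative information.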

Through the feature extraction in step S3, key point feature data representing the characteristics of the limb movement can be extracted from the key point detection data, helping to convert complex limb movements into more concrete, more easily analyzed features. The feature selection in step S3 picks the most representative and discriminative features from the extracted key point features, reducing the feature dimensionality and improving recognition accuracy; this lowers the complexity of data processing while retaining the important information about the limb movement and improving action recognition. Data that has undergone feature extraction and selection better reflects the characteristics of the limb movement, making action recognition more accurate: by selecting representative features and combining them with a suitable recognition algorithm, different limb movements can be recognized more reliably, providing more precise data support for intelligent limb rehabilitation assistance. Feature selection also reduces the computation required by the recognition algorithm and improves its efficiency, since well-chosen features better reflect the essential characteristics of the limb movement, cutting unnecessary computation and speeding up action recognition. With fewer feature dimensions and less computation, the system can process limb movement data faster, improving real-time performance.

Preferably, the present application further provides an intelligent limb rehabilitation assisting device for executing the intelligent limb rehabilitation assisting method described above, the intelligent limb rehabilitation assisting device comprising:

a limb movement data acquisition module, configured to acquire limb movement data through an image sensor to obtain limb movement image data;

a limb movement key point detection module, configured to perform posture estimation on the limb movement image data to obtain posture estimation data, and to perform key point detection on the posture estimation data to obtain limb movement key point detection data;

an action recognition module, configured to perform action recognition on the limb movement key point detection data to obtain limb movement action recognition data;

an intelligent limb rehabilitation assistance operation module, configured to generate action parameters for the limb movement key point detection data according to the limb movement action recognition data, obtaining limb action parameter data for performing intelligent limb rehabilitation assistance operations.

The beneficial effects of the present invention are as follows. Using an image sensor to collect limb movement data avoids invasive detection of or operations on the user, improving user comfort and acceptance. The data collected by the image sensor can be processed and analyzed in real time, so that rehabilitation assistance operations can be adjusted, with feedback provided, in a timely manner according to the user's actual situation. By performing posture estimation and key point detection on the limb movement image data, the user's movement state and key point information can be obtained accurately, enabling parameter generation and monitoring that better fit the user's actual situation. The method uses action recognition technology to analyze the limb movement key point detection data and can automatically identify the user's movements without manual intervention; the corresponding action parameters are then generated automatically from the recognition results, further simplifying the workflow of rehabilitation assistance operations. Compared with traditional rehabilitation assistance methods, this method requires neither expensive equipment nor complex operations, has a lower cost, and is easy to implement and promote, helping to improve the coverage and popularity of rehabilitation assistance services.

BRIEF DESCRIPTION OF THE DRAWINGS

Other features, objects, and advantages of the present application will become more apparent upon reading the following detailed description of non-limiting embodiments, made with reference to the accompanying drawings:

FIG. 1 is a flowchart of the steps of an intelligent limb rehabilitation assisting method according to an embodiment;

FIG. 2 is a flowchart of the steps of a limb movement data acquisition method according to an embodiment;

FIG. 3 is a flowchart of the steps of a limb motion image stitching method according to an embodiment;

FIG. 4 is a flowchart of the steps of a limb motion image feature point extraction method according to an embodiment.

DETAILED DESCRIPTION

The technical method of the present invention is described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort fall within the scope of protection of the present invention.

In addition, the accompanying drawings are merely schematic illustrations of the present invention and are not necessarily drawn to scale. The same reference numerals in the figures denote the same or similar parts, and repeated description of them is omitted. Some of the block diagrams shown in the accompanying drawings are functional entities that do not necessarily correspond to physically or logically independent entities. These functional entities may be implemented in software, in one or more hardware modules or integrated circuits, or in different network and/or processor methods and/or microcontroller methods.

It should be understood that although the terms "first", "second", and so on may be used herein to describe various units, these units should not be limited by these terms. These terms are used only to distinguish one unit from another. For example, without departing from the scope of the exemplary embodiments, a first unit could be termed a second unit, and similarly a second unit could be termed a first unit. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.

Referring to FIG. 1 to FIG. 4, the present application provides an intelligent limb rehabilitation assisting method, comprising the following steps:

Step S1: collecting limb movement data through an image sensor to obtain limb movement image data;

Specifically, an RGB camera or a depth camera is used to photograph the user and obtain image data of the limb movement. For example, a camera can be mounted on a wall or the ceiling of a rehabilitation training room to capture the user's limb movements during rehabilitation training.

Step S2: performing posture estimation on the limb movement image data to obtain posture estimation data, and performing key point detection on the posture estimation data to obtain limb movement key point detection data;

Specifically, a deep learning model such as OpenPose or PoseNet is used to estimate the posture in the limb motion images, yielding the user's posture estimation data. Key points, such as the positions of joints including the elbows, wrists, and knees, are then detected from the posture estimation data to obtain the limb movement key point detection data.

Specifically, OpenPose is used to perform posture estimation on the limb motion images to obtain posture estimation data. The posture estimation data contains the coordinates and confidence of each joint point, with 18 key points in total. Key point information is extracted from the posture estimation data, including joint positions such as the shoulders, elbows, wrists, hips, knees, and ankles; in this example, the position information of these six types of key points is extracted.
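The key point extraction just described can be sketched in Python as follows. The COCO-style index mapping, the confidence threshold, and the sample coordinates are illustrative assumptions for this sketch, not OpenPose's exact output:

```python
# Minimal sketch: extracting selected joint positions from an
# OpenPose-style (COCO 18-keypoint) pose estimate. The index map and
# sample data below are illustrative assumptions.
COCO_INDEX = {
    "right_shoulder": 2, "right_elbow": 3, "right_wrist": 4,
    "right_hip": 8, "right_knee": 9, "right_ankle": 10,
}

def extract_joints(pose, conf_threshold=0.3):
    """pose: list of 18 (x, y, confidence) tuples for one person."""
    joints = {}
    for name, idx in COCO_INDEX.items():
        x, y, c = pose[idx]
        if c >= conf_threshold:          # keep only confident detections
            joints[name] = (x, y)
    return joints

# 18 keypoints; only the six joints of interest are filled in here.
pose = [(0.0, 0.0, 0.0)] * 18
pose[2] = (320.0, 210.0, 0.91)   # right shoulder
pose[3] = (350.0, 300.0, 0.88)   # right elbow
pose[4] = (365.0, 390.0, 0.85)   # right wrist
pose[8] = (330.0, 420.0, 0.80)   # right hip
pose[9] = (335.0, 540.0, 0.77)   # right knee
pose[10] = (340.0, 660.0, 0.20)  # right ankle, low confidence

joints = extract_joints(pose)
```

Low-confidence detections (here the ankle) are dropped, so downstream steps only see reliable joint positions.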

Step S3: performing action recognition on the limb movement key point detection data to obtain limb movement action recognition data;

Specifically, machine learning or deep learning algorithms are used to process and analyze the limb movement key point detection data to identify the specific actions performed by the user. For example, a recurrent neural network (RNN) or a convolutional neural network (CNN) can be used to classify the sequence data and identify the user's actions, such as raising a hand or bending a knee.

Specifically, for each image frame, the key point data is first converted into a feature vector, including the relative positions between joints, angle information, speed information, and so on, and several consecutive frames along the time axis are merged into one sequence to capture the dynamic characteristics of the action. A deep learning model, such as a recurrent neural network (RNN) or a convolutional neural network (CNN), is trained on the extracted features. The training data set consists of labeled limb movement key point detection data and the corresponding action labels, such as raising a hand or bending over. New limb movement key point detection data is fed into the trained model for prediction, yielding the action label for each frame; the resulting limb movement action recognition data contains the action performed in each frame together with the corresponding time information.
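The per-frame feature construction described above (relative joint offsets plus an angle feature) can be sketched as follows; the joint coordinates are made-up values and the feature layout is one plausible choice rather than a fixed format of the method:

```python
import math

# Sketch of turning one frame's key points into a feature vector:
# translation-invariant joint offsets plus the elbow angle.
def joint_angle(a, b, c):
    """Angle at b (degrees) formed by segments b->a and b->c."""
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    return math.degrees(math.acos(dot / (math.hypot(*v1) * math.hypot(*v2))))

def frame_features(shoulder, elbow, wrist):
    # Offsets between joints make the features translation-invariant.
    rel_elbow = (elbow[0] - shoulder[0], elbow[1] - shoulder[1])
    rel_wrist = (wrist[0] - elbow[0], wrist[1] - elbow[1])
    return [*rel_elbow, *rel_wrist, joint_angle(shoulder, elbow, wrist)]

feats = frame_features((300, 200), (300, 300), (400, 300))
```

Consecutive frames' vectors would then be stacked into a sequence before being fed to the RNN/CNN classifier.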

Step S4: generating action parameters for the limb movement key point detection data according to the limb movement action recognition data to obtain limb action parameter data for performing intelligent limb rehabilitation assistance operations.

Specifically, based on the action recognition data and the key point detection data, parameters of the user's limb movement, such as angle, speed, and acceleration, are calculated to reflect the user's movement state and the quality of the executed actions. For example, limb action parameter data can be generated by analyzing changes in joint angles and movement trajectories, and personalized rehabilitation guidance and feedback can be provided to the user based on these parameters.

Specifically, the user is photographed with an RGB camera or a depth camera to capture image data of the limb movement; assume the image resolution is 1920x1080 pixels. A deep learning model such as OpenPose is used for posture estimation and key point detection, and the image data is processed to obtain posture estimation data and key point detection data. Assume that 100 key points, with their coordinate positions and posture data, are detected in the image data of each limb movement; each key point's position is represented by (x, y) coordinates and its posture data by an angle value. Action recognition is performed on the key point detection data, with a recurrent neural network (RNN) classifying the key point sequence: for example, the position coordinates and posture data of each key point can be fed in as a sequence, and the RNN model outputs the action classification result. Assume there are 10 different action classes in total. Based on the action recognition data and the key point detection data, parameters of the limb movement are calculated, for example changes in joint angle, movement speed, and acceleration. Assume that for each joint the device calculates its angle change, speed, and acceleration. Taking the elbow joint as an example, assume its angle ranges from 0 to 180 degrees, its speed from 0 to 100 pixels/second, and its acceleration from -10 to 10 pixels/second².
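The finite-difference computation of angle change, speed, and acceleration can be sketched as follows; the 30 fps frame rate and the sample angle series are assumed values:

```python
# Sketch: joint-angle range, angular velocity, and acceleration from a
# per-frame angle series via finite differences. The 30 fps rate and
# the sample angles are illustrative assumptions.
FPS = 30.0
DT = 1.0 / FPS

def motion_parameters(angles):
    """angles: elbow angle per frame (degrees)."""
    velocity = [(b - a) / DT for a, b in zip(angles, angles[1:])]
    acceleration = [(b - a) / DT for a, b in zip(velocity, velocity[1:])]
    return {
        "range": max(angles) - min(angles),   # total angle change
        "velocity": velocity,                 # degrees per second
        "acceleration": acceleration,         # degrees per second^2
    }

params = motion_parameters([10.0, 12.0, 15.0, 19.0])
```

The same differencing applies to pixel coordinates when speed is measured in pixels/second rather than degrees/second.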

In the present invention, an image sensor is used to collect limb movement data, avoiding invasive detection of or operations on the user and improving user comfort and acceptance. The data collected by the image sensor can be processed and analyzed in real time, so that rehabilitation assistance operations can be adjusted, with feedback provided, in a timely manner according to the user's actual situation. By performing posture estimation and key point detection on the limb movement image data, the user's movement state and key point information can be obtained accurately, enabling parameter generation and monitoring that better fit the user's actual situation. The method uses action recognition technology to analyze the limb movement key point detection data and can automatically identify the user's movements without manual intervention; the corresponding action parameters are then generated automatically from the recognition results, further simplifying the workflow of rehabilitation assistance operations. Compared with traditional rehabilitation assistance methods, this method requires neither expensive equipment nor complex operations, has a lower cost, and is easy to implement and promote, helping to improve the coverage and popularity of rehabilitation services.

Preferably, step S1 specifically comprises:

Step S11: collecting first limb movement data through the image sensor at preset first threshold angle data to obtain first limb movement image data;

Specifically, an RGB camera or a depth camera is used to photograph the user, and a preset first threshold angle is set, for example the maximum extension angle of the arm. The arm movement is then monitored during acquisition, and when the arm reaches the preset angle, that frame of image data is recorded as the first limb movement image data.

Specifically, the user is photographed with an RGB camera, and the preset first threshold angle is set to 120 degrees, i.e., the maximum extension angle of the arm. During acquisition, the system monitors the arm movement in real time; when the arm's extension angle reaches the preset 120 degrees, that frame of image data is recorded as the first limb movement image data. While the user performs rehabilitation training, the system continuously captures images of the arm and computes the arm's extension angle in real time through an image processing algorithm. When the angle reaches or exceeds 120 degrees, the system automatically saves that frame of image data as the first limb movement image data.
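The threshold-triggered capture logic can be sketched as follows; the simulated angle stream stands in for the real-time angle computed by the image processing algorithm:

```python
# Sketch of threshold-triggered capture: frames stream in with a
# computed arm angle, and the first frame at or beyond the preset
# 120-degree extension threshold is kept. The frame stream is simulated.
FIRST_THRESHOLD_DEG = 120.0

def capture_at_threshold(frames, threshold=FIRST_THRESHOLD_DEG):
    """frames: iterable of (frame_id, arm_angle_degrees)."""
    for frame_id, angle in frames:
        if angle >= threshold:        # arm reached the preset extension
            return frame_id           # record this frame's image data
    return None                       # threshold never reached

stream = [(0, 95.0), (1, 110.5), (2, 121.3), (3, 135.0)]
captured = capture_at_threshold(stream)
```

The second capture (step S12) works identically with a 60-degree bending threshold.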

Specifically, in another case, the preset first threshold angle data is the angle threshold set when collecting limb movement data and is used to determine the shooting angle of the image sensor. For example, if the first threshold angle is set to 60 degrees, the image sensor starts collecting limb movement data when its shooting angle is 60 degrees.

Step S12: collecting second limb movement data through the image sensor at preset second threshold angle data to obtain second limb movement image data;

Specifically, an RGB camera or a depth camera is likewise used to photograph the user, and a preset second threshold angle is set, for example the maximum bending angle of the arm. The arm movement is then monitored during acquisition, and when the arm reaches the preset angle, that frame of image data is recorded as the second limb movement image data.

Specifically, the user is likewise photographed with an RGB camera, and the preset second threshold angle is set to 60 degrees, i.e., the maximum bending angle of the arm. During acquisition, the system monitors the arm movement in real time; when the arm's bending angle reaches the preset 60 degrees, that frame of image data is recorded as the second limb movement image data. While the user performs rehabilitation training, the system continuously captures images of the arm and computes the arm's bending angle in real time. When the angle reaches or exceeds 60 degrees, the system automatically saves that frame of image data as the second limb movement image data.

Specifically, in another case, the preset second threshold angle data is the angle threshold set when collecting limb movement data and is used to determine the shooting angle of the image sensor. For example, if the second threshold angle is set to 120 degrees, the image sensor starts collecting limb movement data when its shooting angle is 120 degrees.

Step S13: performing image stitching on the first limb movement image data and the second limb movement image data to obtain the limb movement image data.

Specifically, the first limb motion image data and the second limb motion image data are stitched together, which can be implemented with image processing software or by programming. For example, the two images can be superimposed or combined in a specific way. Based on the known shooting angles and camera parameters (the preset first threshold angle data and the preset second threshold angle data), a perspective transformation matrix can be computed to align the two images in space; the perspective transformation adjusts the geometry and spatial position of the images so that they are visually aligned, and the adjusted images are then cropped or padded to ensure they have the same size and borders. Alternatively, a feature point matching algorithm such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features) can be used to detect and match feature points in the two images, after which the images are repositioned according to the matching results. The first and second limb motion images are then fused (for example by mixture-model-based image fusion, pixel-level blending, or gradient blending) so that the two images transition smoothly and merge together, and gradient-based edge alignment or edge filling techniques are applied to adjust the positions of the image edges so that the two images present continuous edges at the seam, producing an image containing the complete limb movement information. The resulting image data is the final limb motion image data, used for processing and analysis in subsequent steps.
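The perspective-transformation alignment mentioned above amounts to applying a 3x3 homography to image coordinates. A minimal sketch follows, with a made-up translation-only homography standing in for one estimated from the camera parameters or matched feature points:

```python
import numpy as np

# Sketch of the perspective alignment step: a 3x3 homography H maps
# points of the second image into the first image's coordinate frame.
# H here is an assumed translation-only homography for illustration.
def apply_homography(H, points):
    pts = np.hstack([points, np.ones((len(points), 1))])  # homogeneous coords
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]                 # dehomogenize

H = np.array([[1.0, 0.0, 200.0],   # shift right by 200 px
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
corners = np.array([[0.0, 0.0], [640.0, 0.0], [640.0, 480.0], [0.0, 480.0]])
warped = apply_homography(H, corners)
```

Mapping the second image's corners this way also gives the size of the stitched canvas before blending.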

In the present invention, using an image sensor allows the user's limb movement data to be collected accurately and in real time. The preset first and second threshold angle data can be adjusted to each user's specific situation to suit different users' motor abilities, making the collected data more actionable and accurate. By setting different threshold angles, limb movement data can be collected from different angles, so that the user's movement posture can be observed and analyzed from multiple viewpoints and the state and characteristics of the limb movement can be assessed more accurately. Stitching the first and second limb movement image data enriches the information content of the image; the stitched image presents a more complete, more comprehensive picture of the limb movement, which helps improve the accuracy and stability of the subsequent posture estimation and key point detection.

Preferably, step S13 specifically comprises:

Step S131: performing first limb angle detection on the first limb motion image data according to the preset first threshold angle data to obtain first limb angle detection data;

Specifically, according to the preset first threshold angle data, the first limb motion image data is geometrically warped, for example by an affine or perspective transformation; an edge detection algorithm or a feature point detection algorithm is then used to extract the limb's contour or key points, and the limb angle is calculated from this information.

Step S132: performing second limb angle detection on the second limb motion image data according to the preset second threshold angle data to obtain second limb angle detection data;

Specifically, according to the preset second threshold angle data, the second limb motion image data is geometrically warped, for example by an affine or perspective transformation, and the angle of the second limb is then detected using a method similar to that of step S131 to obtain the second limb angle detection data.

Step S133: performing image correction on the first limb motion image data according to the first limb angle detection data to obtain first limb motion image correction data, and performing image correction on the second limb motion image data according to the second limb angle detection data to obtain second limb motion image correction data;

Specifically, the first and second limb motion image data are corrected according to the first and second limb angle detection data, respectively. For example, the images can be rotated, scaled, or cropped according to the detected angle information so that the position and proportions of the limb in the image match expectations.

Step S134: extracting feature points from the first limb motion image correction data and the second limb motion image correction data to obtain first limb motion image feature point data and second limb motion image feature point data;

Specifically, for the first and second limb motion image correction data, a feature point extraction algorithm such as SIFT (Scale-Invariant Feature Transform) or SURF (Speeded-Up Robust Features) is used to extract key feature points from the images. These feature points are typically salient locations such as corners and edges.

Step S135: performing feature matching on the first limb motion image feature point data and the second limb motion image feature point data to obtain limb motion image feature matching data;

Specifically, the feature points of the first limb motion image are matched against those of the second limb motion image using a matching algorithm (such as a nearest-neighbor method), establishing correspondences between the feature points in the two images.
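One common nearest-neighbour matching scheme, with a ratio test to reject ambiguous correspondences, can be sketched as follows; the tiny descriptors are made-up values, whereas real SIFT/SURF descriptors are 128- or 64-dimensional:

```python
import numpy as np

# Sketch of nearest-neighbour descriptor matching with a ratio test,
# as commonly used to pair SIFT/SURF feature points between two images.
# The toy descriptors below are illustrative assumptions.
def match_descriptors(desc1, desc2, ratio=0.75):
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)   # distance to every candidate
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:      # unambiguous match only
            matches.append((i, int(best)))
    return matches

desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.1, 0.0], [5.0, 5.1], [9.0, 9.0]])
matches = match_descriptors(desc1, desc2)
```

The resulting index pairs are the correspondences from which the perspective transformation of step S136 can be estimated.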

Step S136: performing a perspective transformation on the first limb motion image data and the second limb motion image data according to the limb motion image feature matching data to obtain limb motion image perspective transformation data;

Specifically, based on the feature matching data, the first and second limb motion image data are corrected and aligned using a perspective transformation. The perspective transformation corrects warping and distortion in the images so that the two images are aligned and consistent in space.

Step S137: performing image fusion on the first limb motion image perspective transformation data and the second limb motion image perspective transformation data to obtain limb motion image fusion data;

Specifically, the perspective-transformed first and second limb motion image data are fused by techniques such as image superposition, blending, or weighted averaging, so as to retain the important information of both images while eliminating overlap and inconsistencies.
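The weighted-averaging fusion can be sketched as a linear blend across the overlap region; the 2x5 grayscale patches are toy values:

```python
import numpy as np

# Sketch of pixel-level weighted blending in the overlap region of the
# two aligned limb images: a linear ramp takes the weight of the first
# image from 1 to 0 across the overlap, giving a smooth transition.
def blend_overlap(img1, img2):
    """img1, img2: equal-shape grayscale arrays covering the overlap."""
    h, w = img1.shape
    alpha = np.linspace(1.0, 0.0, w)          # per-column weight ramp
    return img1 * alpha + img2 * (1.0 - alpha)

left = np.full((2, 5), 100.0)    # overlap pixels from the first image
right = np.full((2, 5), 200.0)   # overlap pixels from the second image
fused = blend_overlap(left, right)
```

Each fused pixel moves gradually from the first image's intensity to the second's, avoiding a visible seam.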

Step S138: performing edge repair on the limb motion image fusion data to obtain the limb motion image data.

Specifically, edge repair is performed on the image fusion data. The purpose of edge repair is to fill in missing parts of the image or fix discontinuities at the edges to ensure the integrity and continuity of the image; it can be implemented by interpolation algorithms or by pixel filling based on edge information.
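One simple interpolation-based edge repair, filling missing seam pixels from their valid neighbours, can be sketched as follows (NaN marks a missing pixel in this toy example):

```python
import numpy as np

# Sketch of a simple edge repair: missing pixels (marked NaN along the
# seam) are filled with the mean of their valid 4-neighbours, one of
# the interpolation-based hole-filling schemes the text mentions.
def repair_missing(img):
    out = img.copy()
    h, w = img.shape
    for y in range(h):
        for x in range(w):
            if np.isnan(img[y, x]):
                neigh = [img[ny, nx]
                         for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                         if 0 <= ny < h and 0 <= nx < w and not np.isnan(img[ny, nx])]
                out[y, x] = sum(neigh) / len(neigh) if neigh else 0.0
    return out

img = np.array([[10.0, np.nan, 30.0],
                [10.0, 20.0,  30.0]])
fixed = repair_missing(img)
```

Production systems would typically use a proper inpainting algorithm instead; this shows only the basic neighbour-interpolation idea.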

In the present invention, steps S131 and S132 use angle detection to accurately determine the angle information of the first and second limbs. Through image correction (step S133), the collected image data can be corrected to reduce errors introduced by angle deviations and improve the accuracy of subsequent processing. Steps S134 and S135 use feature point extraction and matching to extract key feature points from the corrected images and associate them through feature matching, which helps capture the key information of the limb movement accurately and consistently. Steps S136 and S137 use perspective transformation and image fusion to combine the two corrected images into more comprehensive, more complete limb motion image data, improving the information content and quality of the image and thereby enabling more accurate analysis of the limb movement. Step S138 performs edge repair on the image, fixing edge artifacts introduced by the correction and transformation steps, making the image clearer and more accurate and improving the effectiveness of subsequent processing.

Preferably, step S134 specifically comprises:

Step S101: performing cluster calculation on the first limb motion image correction data and the second limb motion image correction data to obtain first limb motion image cluster data and second limb motion image cluster data;

Specifically, a clustering algorithm such as K-means clustering or hierarchical clustering is applied to the first and second limb motion image correction data. Each cluster represents a set of similar pixels in the image, yielding the first limb motion image cluster data and the second limb motion image cluster data.

K-means clustering is performed separately on the first and second limb motion image correction data. Assume the resulting first limb motion image cluster data contains 5 clusters with the following pixel counts: cluster 1, 30 pixels; cluster 2, 25 pixels; cluster 3, 20 pixels; cluster 4, 15 pixels; cluster 5, 10 pixels. The second limb motion image cluster data is clustered similarly, also into 5 clusters, each with a different number of pixels.
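The K-means step can be sketched with a small pure-Python implementation; the pixel coordinates and fixed initial centres are toy values chosen to keep the example deterministic:

```python
# Sketch of K-means on pixel coordinates: points are assigned to the
# nearest centre and centres are recomputed until stable. Fixed initial
# centres and toy coordinates keep this example deterministic.
def kmeans(points, centers, iters=10):
    clusters = [[] for _ in centers]
    for _ in range(iters):
        clusters = [[] for _ in centers]
        for p in points:
            d = [(p[0] - c[0]) ** 2 + (p[1] - c[1]) ** 2 for c in centers]
            clusters[d.index(min(d))].append(p)      # assign to nearest centre
        centers = [
            (sum(p[0] for p in cl) / len(cl), sum(p[1] for p in cl) / len(cl))
            if cl else c                              # keep empty cluster's centre
            for cl, c in zip(clusters, centers)
        ]
    return centers, clusters

points = [(1, 1), (2, 1), (1, 2), (10, 10), (11, 10), (10, 11)]
centers, clusters = kmeans(points, centers=[(0, 0), (12, 12)])
```

Each returned cluster is a set of similar (here, spatially close) pixels, matching the description above.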

Step S102: extracting cluster centers from the first limb motion image cluster data and the second limb motion image cluster data to obtain first image cluster center data and second image cluster center data;

Specifically, a representative pixel is selected from each cluster as its center; for example, the pixel with the smallest average distance to the other members of the cluster can be chosen. This yields the first image cluster center data and the second image cluster center data for subsequent processing.

Representative pixels are extracted from each cluster as cluster centers. Assume the cluster centers extracted from the first limb motion image cluster data are: cluster 1 center (10, 20), cluster 2 center (30, 40), cluster 3 center (50, 60), cluster 4 center (70, 80), and cluster 5 center (90, 100). Likewise, the cluster centers extracted from the second limb motion image cluster data have their own corresponding coordinates.

Step S103: extracting cluster radii from the first limb motion image cluster data and the second limb motion image cluster data to obtain first cluster radius data and second cluster radius data;

Specifically, for each cluster, the average distance from all of its pixels to the cluster center is computed and taken as that cluster's radius, yielding the cluster radius data of the first limb motion image cluster data and the cluster radius data of the second limb motion image cluster data.

Cluster calculation is performed on the first limb motion image cluster data and the second limb motion image cluster data, and the average radius of each cluster is computed. Assume the cluster radius data of the first limb motion image cluster data is 15, 20, 25, 30, 35, and that of the second limb motion image cluster data is 18, 22, 27, 32, 38.
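The cluster radius defined above (the mean distance of member pixels to the cluster center) can be sketched as follows; the center and member coordinates are made up so that every member lies exactly 5 pixels from the center:

```python
import numpy as np

def cluster_radius(points, center):
    """Cluster radius as the mean distance from member pixels to the center."""
    return float(np.mean(np.linalg.norm(points - center, axis=1)))

# Illustrative cluster: four pixels, each at distance 5 from the center
# (3-4-5 right triangles).
center = np.array([0.0, 0.0])
members = np.array([[3, 4], [-3, 4], [4, -3], [-4, -3]], dtype=float)
r = cluster_radius(members, center)  # every member is exactly 5 away -> 5.0
```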

Step S104: performing a weighted calculation on the first cluster radius data using the first limb motion image cluster data to obtain first cluster radius weighted data, and performing a weighted calculation on the second cluster radius data using the second limb motion image cluster data to obtain second cluster radius weighted data;

Specifically, the first limb motion image cluster data is analyzed and processed to obtain the first cluster radius data. According to the distribution and characteristics of the first limb motion image cluster data, the first cluster radius data is weighted; for example, the weights can be derived from factors such as the size, density, and shape of each cluster, yielding the first cluster radius weighted data. Similarly, the second limb motion image cluster data is analyzed and processed to obtain the second cluster radius data, and the same weighted calculation yields the second cluster radius weighted data.

Specifically, a linear weighted calculation is applied to the cluster radius data with weights 0.2, 0.3, 0.4, 0.5, and 0.6. For the first limb motion image, the weighted cluster radius data is 3.0, 6.0, 10.0, 15.0, 21.0; for the second limb motion image, the weighted cluster radius data is 3.6, 6.6, 10.8, 16.0, 22.8.
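The linear weighting above is elementwise multiplication of each cluster radius by its weight; a quick NumPy check reproduces the numbers in the text:

```python
import numpy as np

# Cluster radii from the example above, and the linear weights.
radii_1 = np.array([15, 20, 25, 30, 35], dtype=float)
radii_2 = np.array([18, 22, 27, 32, 38], dtype=float)
weights = np.array([0.2, 0.3, 0.4, 0.5, 0.6])

weighted_1 = radii_1 * weights  # 3.0, 6.0, 10.0, 15.0, 21.0
weighted_2 = radii_2 * weights  # 3.6, 6.6, 10.8, 16.0, 22.8
```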

Step S105: performing neighborhood pixel selection on the first image cluster center data using the first cluster radius weighted data to obtain first limb motion image neighborhood pixel data, and performing neighborhood pixel selection on the second image cluster center data using the second cluster radius weighted data to obtain second limb motion image neighborhood pixel data;

Specifically, the first cluster radius weighted data is used to analyze the first image cluster center data and determine the neighborhood range of each cluster center. According to the first cluster radius weighted data, the neighborhood pixel data associated with each cluster center is selected; the size and shape of the neighborhood can be set according to the weighting so that the selected pixels are representative and effective. Similarly, the second cluster radius weighted data is used to analyze the second image cluster center data and select its neighborhood pixels.

Specifically, the neighborhood range of each cluster center is determined from the weighted cluster radius data, and the pixels falling within it are selected as neighborhood pixel data. Assume that in the first limb motion image, 100 pixels around each cluster center are selected as neighborhood pixel data. Neighborhood pixel data for the second limb motion image is selected in the same way.
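One way to realize this selection, sketched here with made-up coordinates and a made-up weighted radius, is to keep every pixel whose distance to the cluster center is at most that radius:

```python
import numpy as np

def select_neighborhood(pixels, center, radius):
    """Keep the pixels lying within `radius` of the cluster center."""
    dists = np.linalg.norm(pixels - center, axis=1)
    return pixels[dists <= radius]

# Illustrative: candidate pixels around a cluster center at (10, 20),
# with a weighted radius of 3.0.
center = np.array([10.0, 20.0])
pixels = np.array([[10, 21], [12, 20], [10, 26], [30, 40]], dtype=float)
inside = select_neighborhood(pixels, center, radius=3.0)
# (10, 21) and (12, 20) are within 3 of the center; the others are not.
```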

Step S106: performing dynamic pixel grayscale judgment on the first limb motion image neighborhood pixel data and the second limb motion image neighborhood pixel data to obtain first limb motion image feature point data and second limb motion image feature point data.

Specifically, dynamic pixel grayscale judgment is performed on the first limb motion image neighborhood pixel data and the second limb motion image neighborhood pixel data. Dynamic pixel grayscale judgment classifies or filters the pixels in an image according to their grayscale values and motion characteristics. For example, pixels can be classified according to brightness, contrast, color, and other characteristics, combined with information such as motion direction and speed, so as to screen out representative feature point data. This finally yields the first limb motion image feature point data and the second limb motion image feature point data for subsequent motion parameter generation and rehabilitation assistance.

Specifically, for the selected neighborhood pixel data, each pixel is judged by its grayscale value and motion characteristics, and representative feature point data is screened out. Assume that 50 feature points are finally screened out of each group of neighborhood pixel data for subsequent analysis and processing.

Specifically, let the gray value of a pixel be $g(i,j)$, where $i$ denotes the row and $j$ denotes the column. The goal of dynamic pixel grayscale judgment is to determine whether this pixel is an action feature point. First, define the dynamic pixel grayscale judgment function $F(g(i,j))$, which represents the judgment result for the pixel gray value $g(i,j)$:

$$F(g(i,j)) \in \{0, 1\};$$

Next, whether the pixel is an action feature point can be determined by computing the difference between its gray value and the gray values of the surrounding pixels. Let $N(i,j)$ denote the set of neighborhood pixels around pixel $(i,j)$; the dynamic pixel grayscale judgment function can then be defined as:

$$F(g(i,j)) = \begin{cases} 1, & \left| g(i,j) - \dfrac{1}{|N(i,j)|} \displaystyle\sum_{(m,n) \in N(i,j)} g(m,n) \right| > T \\ 0, & \text{otherwise;} \end{cases}$$

where $T$ is the dynamic pixel grayscale judgment threshold data, used to control the sensitivity of the judgment. The surrounding pixels are $g(i+\Delta i, j+\Delta j)$, where $(\Delta i, \Delta j)$ denotes the offset range of the neighborhood; for example, a range of $(1,1)$ means the pixels within distance 1 of $(i,j)$, namely $g(i-1,j-1)$, $g(i-1,j)$, $g(i-1,j+1)$, $g(i,j-1)$, $g(i,j+1)$, $g(i+1,j-1)$, $g(i+1,j)$, and $g(i+1,j+1)$. If the difference between the gray value of pixel $(i,j)$ and the average gray value of its surrounding pixels exceeds the threshold $T$, the pixel is judged to be an action feature point ($F(g(i,j)) = 1$); otherwise it is not ($F(g(i,j)) = 0$).
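A minimal NumPy sketch of this judgment, using an illustrative 3×3 patch (the 3×3 neighborhood corresponds to an offset range of (1, 1)):

```python
import numpy as np

def is_action_feature(img, i, j, T, reach=1):
    """Dynamic pixel grayscale judgment: compare g(i, j) with the mean gray
    value of its neighbors within `reach`; return 1 if the absolute gap
    exceeds threshold T, else 0. (i, j) must be an interior pixel."""
    nbrs = [img[m, n]
            for m in range(i - reach, i + reach + 1)
            for n in range(j - reach, j + reach + 1)
            if (m, n) != (i, j)]
    return 1 if abs(float(img[i, j]) - float(np.mean(nbrs))) > T else 0

# A bright pixel on a dark background is judged a feature point ...
img = np.array([[10, 10, 10],
                [10, 200, 10],
                [10, 10, 10]], dtype=float)
flag_hot = is_action_feature(img, 1, 1, T=50)   # |200 - 10| = 190 > 50 -> 1

# ... while the center of a flat patch is not.
flat = np.full((3, 3), 10.0)
flag_flat = is_action_feature(flat, 1, 1, T=50)  # |10 - 10| = 0 -> 0
```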

In steps S101 to S103 of the present invention, the image data can be processed and analyzed in a refined manner, which helps capture the feature information in the image more accurately and improves the accuracy and reliability of subsequent processing. Steps S104 and S105 use weighted calculation and neighborhood pixel selection to weight the image data and select pixels according to the cluster data and cluster radius data, highlighting the key information in the image and improving the accuracy and stability of feature point extraction. Step S106 performs dynamic pixel grayscale judgment, screening feature points according to the grayscale information of the image, which reduces sensitivity to noise and interference and improves the recognition and extraction of feature points. By analyzing image data at multiple levels and from multiple angles, the method improves the accuracy and stability of feature point extraction from limb motion images. Traditional feature point extraction often relies on a fixed threshold; the present invention adapts to the characteristics of the image itself, thereby improving feature point extraction and the quality of the image data, and overcoming the insufficient accuracy and unstable extraction of the prior art.

Preferably, step S106 is specifically:

Light angle data and light intensity data are acquired through a light sensor;

Specifically, a light sensor such as a photoresistor or a photodiode is used to measure the light angle and light intensity in the environment. To measure the light angle, mechanical or electronic steering can be used to point the photoresistor or photodiode in the direction of interest; the light angle is then determined by rotating or moving the sensor and measuring the light intensity at different positions. Alternatively, multiple photoresistors or photodiodes can be arranged in an array on the device to capture light from different directions, and the angle of the light inferred from their measurements: strong light from a particular direction produces a higher signal in the corresponding sensor, while light from other directions produces lower signals, so comparing the signal strengths of the different sensors determines the incident angle of the light. These sensors can be mounted on the device so that ambient light information is acquired in real time during motion image data collection.

An angle calculation is performed on preset first threshold angle data and preset second threshold angle data according to the light angle data, yielding first shooting illumination angle data and second shooting illumination angle data;

Specifically, the shooting angle data of each image is obtained. The device then compares these shooting angles with the preset first threshold angle and second threshold angle to calculate the actual illumination angle data. For example, let the light angle data be $\theta$, the preset first threshold angle data be $\theta_1$, the preset second threshold angle data be $\theta_2$, the first shooting illumination angle data be $\alpha_1$, and the second shooting illumination angle data be $\alpha_2$:

$$\alpha_1 = \theta - \theta_1;$$

$$\alpha_2 = \theta - \theta_2;$$

A grayscale threshold is generated according to the first shooting illumination angle data and the light intensity data to obtain first grayscale threshold data, and a grayscale threshold is generated according to the second shooting illumination angle data and the light intensity data to obtain second grayscale threshold data.

Pixel grayscale judgment is performed on the first limb motion image neighborhood pixel data using the first grayscale threshold data to obtain the first limb motion image feature point data, and pixel grayscale judgment is performed on the second limb motion image neighborhood pixel data using the second grayscale threshold data to obtain the second limb motion image feature point data.

Specifically, the image used is a grayscale image (a non-grayscale image is first converted to grayscale before the feature points are determined), and feature points are determined from the grayscale values of the pixels. Set the first grayscale threshold to 100 and the second grayscale threshold to 200. For the first limb motion image neighborhood pixel data: traverse each pixel and read its grayscale value; if the grayscale value is greater than or equal to the first grayscale threshold, mark the pixel as a feature point and add its coordinates to the first limb motion image feature point data. For the second limb motion image neighborhood pixel data: likewise, traverse each pixel and read its grayscale value; if the grayscale value is greater than or equal to the second grayscale threshold, mark the pixel as a feature point and add its coordinates to the second limb motion image feature point data.

For the first limb motion image neighborhood pixel data: assume the device selects a pixel with a grayscale value of 120, and the first grayscale threshold is 100. Since 120 is greater than or equal to 100, the device marks the pixel as a feature point and adds its coordinates to the first limb motion image feature point data. For the second limb motion image neighborhood pixel data: likewise, assume the device selects another pixel with a grayscale value of 180, and the second grayscale threshold is 200. Since 180 is less than 200, the device does not mark the pixel as a feature point.
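The threshold judgment above can be sketched as a one-pass scan over a tiny illustrative grayscale patch; the thresholds 100 and 200 match the example in the text:

```python
import numpy as np

def feature_points(gray, threshold):
    """Mark every pixel whose gray value is >= threshold; return (row, col) pairs."""
    rows, cols = np.nonzero(gray >= threshold)
    return list(zip(rows.tolist(), cols.tolist()))

# Illustrative 2x2 grayscale patch.
gray = np.array([[120,  80],
                 [180, 220]])

pts_1 = feature_points(gray, 100)  # first threshold: 120, 180, 220 qualify
pts_2 = feature_points(gray, 200)  # second threshold: only 220 qualifies
```

As in the worked example, 120 passes the first threshold while 180 fails the second.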

In the present invention, the light angle data obtained by the light sensor makes it possible to account for the influence of the lighting direction on the image. Calculating the grayscale threshold from the light angle data allows the image to be adaptively adjusted to different lighting conditions, improving the stability and robustness of image processing. In step S106, the grayscale threshold is generated dynamically from the light angle data and the light intensity data, so it can be adjusted according to real-time lighting, avoiding unstable processing or misjudgments caused by varying lighting conditions and improving the accuracy of feature point extraction. Judging image pixels against a grayscale threshold generated from the light angle data achieves adaptive pixel grayscale judgment, which helps extract image feature points accurately under different lighting conditions and overcomes the instability of prior-art feature extraction under large lighting changes. By calculating the grayscale threshold from the light angle data and performing pixel grayscale judgment, step S106 improves the robustness of image processing: for images under different lighting conditions, feature points can be extracted more accurately, improving the effectiveness and reliability of subsequent processing.

Preferably, the limb movement key point detection data includes first limb movement key point detection data and second limb movement key point detection data, and step S2 is specifically:

Step S21: performing posture estimation on the limb motion image data to obtain posture estimation data;

Specifically, a deep learning model or a traditional computer vision algorithm is used to process the limb motion image data to obtain the posture information of the limbs. For example, a posture estimation model based on a convolutional neural network (CNN), such as OpenPose, can be used to detect the positions of human posture key points.

Step S22: performing local key point detection on the posture estimation data to obtain first limb movement key point detection data;

Specifically, based on the posture estimation data, a region of interest (ROI) is defined around a specific body part or joint, and a dedicated key point detection algorithm (such as a deep-learning-based single key point detector) is used to detect local key points. For example, for hand movement key point detection, a deep learning model targeted at hand key points, such as a Hand Keypoint Detection model, can be used.
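The ROI step can be sketched as a bounds-clamped crop around an estimated joint position; the detector that would run on the crop is out of scope here, and the image size, center, and `half_size` are illustrative:

```python
import numpy as np

def crop_roi(image, center, half_size):
    """Crop a square region of interest around an estimated joint position,
    clamped to the image bounds, for a downstream local keypoint detector."""
    h, w = image.shape[:2]
    cy, cx = center
    top, left = max(0, cy - half_size), max(0, cx - half_size)
    bottom, right = min(h, cy + half_size), min(w, cx + half_size)
    return image[top:bottom, left:right]

image = np.zeros((100, 100), dtype=np.uint8)
roi = crop_roi(image, center=(50, 50), half_size=16)   # full 32x32 patch
edge = crop_roi(image, center=(5, 5), half_size=16)    # clamped at the border
```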

Step S23: performing global key point detection on the posture estimation data to obtain second limb movement key point detection data.

Specifically, the posture estimation data is used to perform comprehensive key point detection over the entire human posture so as to obtain global key point information. This is achieved by applying a global key point detection algorithm over the whole image, such as an object detection algorithm like YOLO or Faster R-CNN; these algorithms can detect multiple key points simultaneously and provide their position information, yielding the global key point detection data.

In the present invention, the posture estimation of step S21 obtains the overall posture information of the limbs, which helps the system better understand the overall motion state of the limbs and provides an important reference for subsequent key point detection. Step S22 performs local key point detection on the posture estimation data, finely identifying local key points of the limbs; these points represent important parts and motion characteristics of the limbs and help analyze and understand the details of limb movement more accurately. Step S23 performs global key point detection on the posture estimation data, capturing limb key point information as a whole; the global key points reflect the overall structure and motion state of the limbs and help analyze and understand the overall motion characteristics more comprehensively. The present invention improves the accuracy and stability of key point detection, helps identify and locate limb key points more accurately, and provides reliable data support for subsequent action recognition and parameter generation.

Preferably, step S22 is specifically:

Branch network detection selection is performed according to the posture estimation data to obtain branch network detection data;

Specifically, a multi-branch network architecture is designed in which each branch is responsible for detecting the key points of a specific body part. For example, one branch can detect head key points, another arm key points, another leg key points, and so on. The branches can be separate convolutional neural networks or network structures with shared parameters.

Specific-part key point detection is performed on the posture estimation data according to the branch network detection data to obtain specific-part key point detection data;

Specifically, based on the results detected by each branch network, the key points of the corresponding body part are located in the posture estimation data. For example, if a branch network is dedicated to detecting arm key points, the position of the arm can be determined in the posture estimation data from that branch's detection results, and the arm key point information extracted.

Feature fusion is performed on the specific-part key point detection data corresponding to the different branch network detection data to obtain the first limb movement key point detection data.

Specifically, the specific-part key point detection data detected by each branch network is feature-fused to comprehensively reflect the key point information of the overall limb movement; this can be achieved by simple feature concatenation, weighted averaging, and the like. For example, the key point data for arms, legs, and other parts detected by different branches can be fused to obtain complete limb movement key point detection data.
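Both fusion variants mentioned above (concatenation and weighted averaging) are one-liners in NumPy. The per-branch feature vectors and weights below are hypothetical, purely to show the shapes involved:

```python
import numpy as np

# Hypothetical per-branch key point features (e.g. a 4-D descriptor per part).
head_feat = np.array([0.2, 0.4, 0.1, 0.3])
arm_feat  = np.array([0.6, 0.8, 0.5, 0.7])
leg_feat  = np.array([0.4, 0.2, 0.9, 0.1])

# Fusion by simple feature concatenation ...
fused_concat = np.concatenate([head_feat, arm_feat, leg_feat])

# ... or by weighted averaging (weights are illustrative and sum to 1).
w = np.array([0.2, 0.5, 0.3])
fused_avg = w[0] * head_feat + w[1] * arm_feat + w[2] * leg_feat
```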

A convolutional neural network (CNN) with multiple branches is designed, where each branch is responsible for detecting the key points of a specific part of the human body. The input layer takes the posture estimation data, which contains the position information of the human posture key points. Each branch of the multi-branch network has the same overall structure: an input layer receiving the posture estimation data; several convolutional layers for feature extraction; pooling layers that reduce the size of the feature maps while retaining the main features; further convolutional layers that extract deeper features; a fully connected layer that maps the features to the coordinates of that part's key points; and an output layer giving those coordinates. The head branch outputs the coordinates of the head key points, the shoulder branch those of the shoulder key points, the arm branch those of the arm key points, the knee branch those of the knee key points, and the ankle branch those of the ankle key points. To train such a network, posture estimation training data is obtained and the model/network is trained on it, using a loss function suited to the key point detection task, such as mean squared error (MSE) or smooth L1 loss. For each key point position, the loss function can be expressed as:

$$L = \frac{1}{N} \sum_{k=1}^{N} \left( \hat{p}_k - p_k \right)^2;$$

where $L$ is the model training loss value, $N$ is the number of posture estimation training samples, $k$ is the index of a training sample, $\hat{p}_k$ is the network's predicted position for the $k$-th posture estimation training sample, and $p_k$ is its true position. The optimization function can be stochastic gradient descent (SGD) or one of its variants, such as Adam or RMSProp. Input: posture estimation data containing the position information of the human posture key points. Output: the coordinates of the key points of each specific body part, e.g. (x, y) coordinate pairs. For example, the first branch detects head key points, the second branch detects arm key points, and the third branch detects leg key points; each branch network processes the posture estimation data to obtain the key point detection data of its part, i.e. the head branch detects the positions of the head key points, the arm branch those of the arm key points, and the leg branch those of the leg key points. Based on the results detected by each branch network, the key point positions of the specific body parts are determined in the posture estimation data: the detection result of the head branch can be used to locate the head in the posture estimation data and extract the head key point information, and likewise for the arm and leg branches. Finally, the specific-part key point detection data from the different branches is feature-fused to obtain complete limb movement key point detection data; for example, the key point data detected by the head, arm, and leg branches can be simply concatenated or weighted-averaged to comprehensively reflect the key point information of the overall limb movement.
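The MSE loss above averages the squared error between predicted and true key point positions over the training samples. A minimal NumPy sketch, averaging over all coordinate components and using made-up numbers:

```python
import numpy as np

def keypoint_mse(pred, true):
    """MSE between predicted and ground-truth key point coordinates,
    averaged over every coordinate component."""
    pred = np.asarray(pred, dtype=float)
    true = np.asarray(true, dtype=float)
    return float(np.mean((pred - true) ** 2))

# Hypothetical predictions vs. ground truth for two (x, y) key points.
pred = [[10.0, 20.0], [30.0, 40.0]]
true = [[11.0, 22.0], [30.0, 37.0]]
loss = keypoint_mse(pred, true)  # (1 + 4 + 0 + 9) / 4 = 3.5
```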

In the present invention, introducing multiple branch networks, each responsible for detecting the key points of a specific part, captures the motion information of different parts of the limbs more precisely, improving the accuracy and comprehensiveness of detection. On the basis of branch network detection selection, key point detection is performed separately for each part, making the results more detailed and specific and helping the system analyze and evaluate the user's motion state more precisely. Feature fusion of the specific-part key point data detected by the different branch networks takes the motion information of the different parts into account, yielding more accurate limb movement key point detection data and improving the rehabilitation assistance system's ability to understand and analyze the user's motion state.

Preferably, step S3 is specifically:

Feature extraction is performed on the limb movement key point detection data to obtain limb movement key point feature data;

Specifically, meaningful features are extracted from the limb movement key point detection data by various feature extraction methods, for example using deep neural networks (such as convolutional neural networks) to extract the image features around the key points, or using hand-crafted feature descriptors (such as SIFT or HOG) to describe the appearance and shape of the key points.

Feature selection is performed on the limb movement key point feature data to obtain key point feature selection data;

Specifically, a feature selection algorithm screens out the most representative and discriminative features to reduce the data dimensionality and redundant information. For example, principal component analysis (PCA), linear discriminant analysis (LDA), or information gain can be used to select the feature subset most critical for action recognition.
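Of the methods named, PCA can be sketched in plain NumPy via an SVD of the centered data. The 4-D feature matrix below is contrived so that all of its variance lies along one direction, making the reduction easy to verify:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project features onto their top principal components (plain-NumPy PCA)."""
    Xc = X - X.mean(axis=0)
    # SVD of the centered data; the rows of Vt are the principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T

# Illustrative 4-D key point features: the second column is twice the first,
# and the last two columns are constant, so the data has rank 1.
X = np.array([[1.0, 2.0, 0.0, 0.0],
              [2.0, 4.0, 0.0, 0.0],
              [3.0, 6.0, 0.0, 0.0],
              [4.0, 8.0, 0.0, 0.0]])
Z = pca_reduce(X, n_components=1)
# A single component captures the entire variance of this rank-1 data.
```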

The key point feature selection data is then subjected to action recognition to obtain limb movement action recognition data.

Specifically, a machine learning or deep learning algorithm performs action recognition on the feature-selected data; for example, classifiers such as a support vector machine (SVM), random forest, or convolutional neural network (CNN) can be used. These models are trained to distinguish different limb movement actions and are then used at test time to classify new action data, yielding the limb movement action recognition data.
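As a sketch of the SVM option, the snippet below trains scikit-learn's `SVC` on synthetic stand-in feature vectors for two actions; the data, class labels, and dimensions are invented for the example.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

rng = np.random.default_rng(0)
# Synthetic stand-in for feature-selected vectors of two actions
lift = rng.normal(loc=1.0, scale=0.3, size=(100, 10))
lower = rng.normal(loc=-1.0, scale=0.3, size=(100, 10))
X = np.vstack([lift, lower])
y = np.array([0] * 100 + [1] * 100)    # 0 = lift, 1 = lower (hypothetical)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = SVC(kernel="rbf").fit(X_tr, y_tr)  # train the action classifier
acc = clf.score(X_te, y_te)              # classify held-out action data
print(acc)  # well-separated clusters -> near-perfect accuracy
```

A random forest (`RandomForestClassifier`) or a CNN would replace `SVC` with no change to the surrounding train/test flow.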

In the present invention, the feature extraction of step S3 extracts key point feature data representing the characteristics of limb movement from the key point detection data, helping convert complex limb movement into more concrete, more easily analyzed feature data. The feature selection in step S3 picks the most representative and discriminative features from the extracted key point feature data, reducing the feature dimensionality and improving recognition accuracy; this lowers the complexity of data processing while retaining the important information about the limb movement and improving the effect of action recognition. Data that has undergone feature extraction and selection better reflects the characteristics of limb movement, making action recognition more accurate. By selecting representative features and combining them with an appropriate recognition algorithm, different limb movement actions can be identified more reliably, providing more precise data support for intelligent limb rehabilitation assistance. Feature selection also reduces the computational load of the recognition algorithm and improves efficiency: carefully chosen features better capture the essential characteristics of limb movement, cutting unnecessary computation and speeding up action recognition. With fewer feature dimensions and less computation, the system can process limb movement data faster and achieve better real-time performance.

Preferably, step S4 specifically includes:

Performing action classification mapping according to the limb movement action recognition data to obtain action classification mapping data;

Specifically, an action recognition model is established. Input layer: the input is the limb movement action recognition data, which can be time-series key point coordinates or an image sequence. Convolutional neural network (CNN) part: the CNN part extracts spatial features from the input data; multiple convolutional and pooling layers can be stacked to extract features progressively. Recurrent neural network (RNN) part: the RNN part processes the time-series data and captures the temporal evolution of the action; RNN units such as LSTM or GRU can be used, with the output sequence of the CNN part serving as the input sequence of the RNN. Fully connected layer: a fully connected layer on top of the RNN output converts the sequence information into a fixed-length vector representation, and a Dropout layer is added to reduce overfitting. Output layer: the output layer uses a softmax function for classification, each class representing a specific action; the number of output nodes equals the number of action classes. Loss and optimization: the cross-entropy loss is used for the multi-class classification task, and optimizers such as Adam or SGD can be chosen. Input and output: the input is the limb movement action recognition data (time-series key point coordinates or an image sequence); the output is the action classification result, i.e., a probability distribution over the action classes. Alternatively, the action recognition model may be a preset sequence mapping array into which the limb movement action recognition data is fed for classification, each class representing a specific action. According to the model's output, each action is mapped to its corresponding class, yielding the action classification mapping data.
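The layer stack just described can be sketched in PyTorch as follows. The layer widths, 17-key-point input (34 coordinate channels), and 3 action classes are illustrative assumptions; softmax is applied implicitly inside `CrossEntropyLoss`, so the network emits raw logits.

```python
import torch
import torch.nn as nn

class CnnRnnActionNet(nn.Module):
    """Minimal CNN + LSTM + FC + Dropout classifier sketch."""
    def __init__(self, in_channels=34, num_classes=3):
        super().__init__()
        self.cnn = nn.Sequential(            # spatial feature extractor
            nn.Conv1d(in_channels, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.rnn = nn.LSTM(64, 128, batch_first=True)  # temporal model
        self.head = nn.Sequential(
            nn.Dropout(0.5),                 # reduce overfitting
            nn.Linear(128, num_classes),     # logits per action class
        )

    def forward(self, x):                    # x: (batch, channels, time)
        feats = self.cnn(x)                  # (batch, 64, time/2)
        feats = feats.permute(0, 2, 1)       # (batch, time/2, 64)
        out, _ = self.rnn(feats)
        return self.head(out[:, -1, :])      # last time step -> classes

model = CnnRnnActionNet()
logits = model(torch.randn(8, 34, 40))       # 8 clips, 40 frames each
print(logits.shape)                          # torch.Size([8, 3])
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 3, (8,)))
```

Training would pair this loss with `torch.optim.Adam` or `torch.optim.SGD`, matching the optimizers named above.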

Performing action timing division according to the limb movement key point detection data and the action classification mapping data to obtain action timing division data;

Specifically, the start and end moments of an action are determined from the key point detection data, and the action is divided along the time axis. Using a time-window-based method, the key point data is split in time into several segments, each representing one action stage. The specific action class of each stage is then determined from the action classification mapping data.
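The time-window split described above can be sketched as below; the boundary times stand in for the detector's output and are not values prescribed by the patent.

```python
import numpy as np

def split_into_stages(timestamps, boundaries):
    """Split a key point time series into stages at the given boundaries."""
    edges = [timestamps[0]] + list(boundaries) + [timestamps[-1]]
    stages = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (timestamps >= lo) & (timestamps <= hi)  # frames in window
        stages.append(timestamps[mask])
    return stages

t = np.linspace(10.0, 30.0, 201)            # t_start = 10 s, t_end = 30 s
stages = split_into_stages(t, boundaries=[15.0, 20.0])
print(len(stages))                           # 3
print([round(float(s[0]), 1) for s in stages])   # [10.0, 15.0, 20.0]
```

Each returned segment is then labelled with its action class via the classification mapping.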

Extracting action features according to the action timing division data to obtain action feature data, where the action feature data include action duration data, action speed data, action acceleration data, and action angle change data;

Specifically, features related to the action are extracted within each action stage. For example, the duration, average speed, maximum acceleration, and angle changes between key points can be computed for each stage.
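A sketch of those per-stage computations for a single key point track, using finite differences; the synthetic constant-velocity track (0.1 m/s upward over 5 s) is invented for the example.

```python
import numpy as np

def stage_features(t, pos):
    """Duration, mean speed and peak acceleration for one action stage."""
    vel = np.gradient(pos, t, axis=0)        # finite-difference velocity
    speed = np.linalg.norm(vel, axis=1)      # scalar speed per frame
    acc = np.gradient(speed, t)              # scalar acceleration
    return {
        "duration": float(t[-1] - t[0]),
        "mean_speed": float(speed.mean()),
        "max_acceleration": float(np.abs(acc).max()),
    }

t = np.linspace(10.0, 15.0, 51)              # stage 1: 5 s of lifting
pos = np.stack([np.zeros_like(t), 0.1 * (t - 10.0)], axis=1)  # 0.1 m/s up
feats = stage_features(t, pos)
print(feats["duration"])                     # 5.0
print(round(feats["mean_speed"], 3))         # 0.1
```

Angle change between key points would be computed analogously, e.g. with `np.arctan2` on the vector joining two tracked joints at each frame.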

Generating action parameters according to the action classification mapping data, the action timing division data, and the action feature data to obtain limb action parameter data for performing intelligent limb rehabilitation assistance.

Specifically, based on the action timing division data and the action feature data, combined with the previously established action classification mapping, action parameters describing each action stage are generated, including the action type, start time, end time, duration, speed, acceleration, angle change, and other information, which are used for executing and monitoring the intelligent limb rehabilitation assistance task.

Specifically, suppose the action recognition model can recognize three actions: lifting an object, holding it in the lifted position, and putting it down. When the model recognizes an action, the device maps it to the corresponding class according to the model output, for example: 1 for lifting the object, 2 for holding it up, 3 for putting it down. Suppose the device's action timing division algorithm determines the following moments from the key point detection data: action start t_start = 10 s and action end t_end = 30 s, and divides this period into three stages: stage 1, t_start to t_1 (15 s), the process of lifting the object; stage 2, t_1 to t_2 (20 s), the process of holding it up; stage 3, t_2 to t_end (30 s), the process of putting it down. For each stage the device extracts the following action feature data. Duration: 5 s for stage 1, 5 s for stage 2, and 10 s for stage 3. Speed: the average speed of each stage, computed from the key point data. Acceleration: the average acceleration of each stage, computed from the speed data. Angle change: the angle changes between key points within each stage, computed from the key point data.
From the action classification mapping data, the action timing division data, and the action feature data, the device then generates the following limb action parameter data: the type of each stage is determined by the mapping (stage 1 lifting the object, stage 2 holding it up, stage 3 putting it down); the start and end times are determined by the timing division; and the generated limb action parameter data include the duration, speed, acceleration, angle change, and related information.
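The worked example's final assembly step can be sketched as below; the class names and stage boundaries are taken from that example, while the record layout itself is an illustrative assumption.

```python
# Class ids -> action names, per the worked example
ACTION_NAMES = {1: "lift object", 2: "hold object up", 3: "put object down"}

stages = [  # (class id, start s, end s) from the timing division
    (1, 10.0, 15.0),
    (2, 15.0, 20.0),
    (3, 20.0, 30.0),
]

params = [
    {
        "action": ACTION_NAMES[cls],
        "start": start,
        "end": end,
        "duration": end - start,
        # speed / acceleration / angle change would be filled in
        # from the per-stage feature extraction step
    }
    for cls, start, end in stages
]

print([p["duration"] for p in params])   # [5.0, 5.0, 10.0]
print(params[0]["action"])               # lift object
```

The resulting records are the "limb action parameter data" consumed by the rehabilitation assistance task for execution and monitoring.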

In the present invention, performing action classification mapping on the limb movement action recognition data maps the motion data to specific action classes, achieving classification and recognition of different actions; this helps the system better understand the user's movement behavior and provides effective data support for subsequent rehabilitation assistance. Dividing the action timing according to the limb movement key point detection data and the action classification mapping data decomposes the whole movement process into different time periods or stages, enabling finer and more precise analysis and providing a more reliable basis for subsequent feature extraction and parameter generation. Extracting action features from the timing division data captures information about the movement from several aspects, such as duration, speed, acceleration, and angle change; these features reflect the various facets of the movement and provide more reference data for subsequent parameter generation and rehabilitation assistance. Generating action parameters from the action classification mapping data, the timing division data, and the feature data takes the type, timing, and feature information of each action into account and therefore yields more accurate limb action parameter data.

Preferably, the present application also provides an intelligent limb rehabilitation assistance device for executing the intelligent limb rehabilitation assistance method described above, the device comprising:

a limb movement data acquisition module for acquiring limb movement data through an image sensor to obtain limb movement image data;

a limb movement key point detection module for performing posture estimation on the limb movement image data to obtain posture estimation data, and performing key point detection on the posture estimation data to obtain limb movement key point detection data;

an action recognition module for performing action recognition on the limb movement key point detection data to obtain limb movement action recognition data;

an intelligent limb rehabilitation assistance module for generating action parameters from the limb movement key point detection data according to the limb movement action recognition data, obtaining limb action parameter data for performing the intelligent limb rehabilitation assistance task.

The beneficial effects of the present invention are as follows. Using an image sensor to collect limb movement data avoids invasive detection of or operation on the user, improving comfort and acceptance. The data collected by the image sensor can be processed and analyzed in real time, so the rehabilitation assistance task can be adjusted and fed back promptly according to the user's actual condition. Performing posture estimation and key point detection on the limb movement image data accurately captures the user's motion state and key point information, enabling parameter generation and monitoring that better fit the user's actual situation. The method uses action recognition technology to analyze the limb movement key point detection data and can automatically identify the user's movements without manual intervention; the corresponding action parameters are generated automatically from the recognition results, further simplifying the workflow of rehabilitation assistance. Compared with traditional rehabilitation assistance methods, this method requires no expensive equipment or complex operations, has a lower cost, and is easy to implement and promote, helping to increase the coverage and popularity of rehabilitation assistance services.

Therefore, from any point of view, the embodiments should be regarded as illustrative and non-restrictive. The scope of the present invention is defined by the appended application documents rather than by the above description, and all changes falling within the meaning and range of their equivalents are intended to be embraced therein.

The above is only a specific embodiment of the present invention, provided so that those skilled in the art can understand or implement it. Various modifications to these embodiments will be apparent to those skilled in the art, and the general principles defined herein may be implemented in other embodiments without departing from the spirit or scope of the present invention. Therefore, the present invention is not limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (4)

1. An intelligent limb rehabilitation assisting method is characterized by comprising the following steps of:
Step S1, acquiring limb movement data through an image sensor to obtain limb movement image data, wherein the step comprises the following steps of;
Step S11: acquiring first limb movement data by an image sensor according to preset first threshold angle data to obtain first limb movement image data;
Step S12: acquiring second limb movement data by using an image sensor according to preset second threshold angle data to obtain second limb movement image data;
Step S13, including:
Step S131: performing first limb angle detection on the first limb moving image data according to preset first threshold angle data to obtain first limb angle detection data;
Step S132: performing second limb angle detection on second limb moving image data according to preset second threshold angle data to obtain second limb angle detection data;
Step S133: performing image correction on the first limb moving image data according to the first limb angle detection data to obtain first limb moving image correction data, and performing image correction on the second limb moving image data according to the second limb angle detection data to obtain second limb moving image correction data;
Step S134: extracting characteristic points of the first limb moving image correction data and the second limb moving image correction data to obtain first limb moving image characteristic point data and second limb moving image characteristic point data;
Step S135: performing feature matching on the first limb moving image feature point data and the second limb moving image feature point data to obtain limb moving image feature matching data;
step S136: performing perspective transformation on the first limb moving image data and the second limb moving image data according to the limb moving image feature matching data to obtain limb moving image perspective transformation data;
step S137: performing image fusion on the first limb moving image perspective transformation data and the second limb moving image perspective transformation data to obtain limb moving image fusion data;
Step S138: performing edge restoration on the limb moving image fusion data to obtain limb moving image data;
step S2, including:
step S21: carrying out posture estimation on the limb moving image data to obtain posture estimation data;
Step S22: detecting local key points of the posture estimation data to obtain first limb movement key point detection data;
Step S23: performing global key point detection on the posture estimation data to obtain second limb movement key point detection data;
Step S3: performing motion recognition on limb movement key point detection data to obtain limb movement motion recognition data, wherein the limb movement key point detection data comprise first limb movement key point detection data and second limb movement key point detection data;
Step S4, including:
performing action classification mapping according to the limb movement action recognition data to obtain action classification mapping data;
performing action time sequence division according to limb movement key point detection data and action classification mapping data to obtain action time sequence division data;
Extracting action characteristics according to the action time sequence dividing data to obtain action characteristic data, wherein the action characteristic data comprises action duration time data, action speed data, action acceleration data and action angle change data;
Generating action parameters according to the action classification mapping data, the action time sequence division data and the action characteristic data to obtain limb action parameter data so as to perform intelligent limb rehabilitation auxiliary operation;
The step S134 specifically includes:
Step S101: clustering calculation is carried out on the first limb moving image correction data and the second limb moving image correction data to obtain first limb moving image clustering data and second limb moving image clustering data;
step S102: extracting a clustering center of the first limb moving image clustering data and the second limb moving image clustering data to obtain first image clustering center data and second image clustering center data;
step S103: cluster radius extraction is carried out on the first limb moving image cluster data and the second limb moving image cluster data to respectively obtain first cluster radius data and second cluster radius data;
step S104: weighting calculation is carried out on the first cluster radius data by using the first limb moving image clustering data to obtain first cluster radius weighting data, and weighting calculation is carried out on the second cluster radius data by using the second limb moving image clustering data to obtain second cluster radius weighting data;
step S105: performing neighborhood pixel selection on the first image clustering center data by using the first cluster radius weighting data to obtain first limb moving image neighborhood pixel data, and performing neighborhood pixel selection on the second image clustering center data by using the second cluster radius weighting data to obtain second limb moving image neighborhood pixel data;
Step S106: performing dynamic pixel gray scale judgment on the first limb moving image neighborhood pixel data and the second limb moving image neighborhood pixel data to obtain first limb moving image feature point data and second limb moving image feature point data;
The step S106 specifically includes:
acquiring illumination angle data and illumination intensity data through an illumination sensor;
Performing angle calculation on preset first threshold angle data and preset second threshold angle data according to the illumination angle data to obtain first shooting illumination angle data and second shooting illumination angle data;
gray threshold generation is carried out according to the first shooting illumination angle data and the illumination intensity data to obtain first gray threshold data, gray threshold generation is carried out according to the second shooting illumination angle data and the illumination intensity data to obtain second gray threshold data;
And performing pixel gray judgment on the first limb moving image neighborhood pixel data through the first gray threshold data to obtain first limb moving image feature point data, and performing pixel gray judgment on the second limb moving image neighborhood pixel data through the second gray threshold data to obtain second limb moving image feature point data.
2. The method according to claim 1, wherein step S22 is specifically:
according to the posture estimation data, branch network detection selection is carried out to obtain branch network detection data;
performing specific position key point detection on the posture estimation data according to the branch network detection data to obtain specific position key point detection data;
and carrying out feature fusion on the specific position key point detection data corresponding to the different branch network detection data to obtain first limb movement key point detection data.
3. The method according to claim 1, wherein step S3 is specifically:
extracting features of the limb movement key point detection data to obtain limb movement key point feature data;
Feature selection is carried out on the feature data of the key points of the limb movement, so that feature selection data of the key points are obtained;
And performing action recognition on the key point feature selection data to obtain limb movement action recognition data.
4. A smart limb rehabilitation assistance device for performing the smart limb rehabilitation assistance method of claim 1, the smart limb rehabilitation assistance device comprising:
The limb movement data acquisition module is used for acquiring limb movement data through the image sensor to obtain limb movement image data;
The limb movement key point detection module is used for carrying out posture estimation on limb movement image data to obtain posture estimation data, and carrying out key point detection on the posture estimation data to obtain limb movement key point detection data;
the motion recognition module is used for performing motion recognition on the limb motion key point detection data to obtain limb motion recognition data;
and the intelligent limb rehabilitation auxiliary operation module is used for generating action parameters of limb movement key point detection data according to limb movement action identification data to obtain limb action parameter data so as to perform intelligent limb rehabilitation auxiliary operation.
CN202410354728.8A 2024-03-27 2024-03-27 Intelligent limb rehabilitation assisting method and device Active CN117953591B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410354728.8A CN117953591B (en) 2024-03-27 2024-03-27 Intelligent limb rehabilitation assisting method and device

Publications (2)

Publication Number Publication Date
CN117953591A CN117953591A (en) 2024-04-30
CN117953591B true CN117953591B (en) 2024-06-14

Family

ID=90805221

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN118552543B (en) * 2024-07-30 2024-10-25 天津医科大学总医院 Physical examination auxiliary system based on deep learning convolutional neural network

Citations (2)

Publication number Priority date Publication date Assignee Title
CN113657204A (en) * 2021-07-28 2021-11-16 浙江大华技术股份有限公司 Gesture recognition method and related equipment
CN114998983A (en) * 2022-04-12 2022-09-02 长春大学 A limb rehabilitation method based on augmented reality technology and gesture recognition technology

Family Cites Families (9)

Publication number Priority date Publication date Assignee Title
CN101776487B (en) * 2009-12-31 2011-05-18 华中科技大学 Infrared focal plane non-uniformity fingerprint extraction and image correction method
CN103646232B (en) * 2013-09-30 2016-08-17 华中科技大学 Aircraft ground moving target infrared image identification device
CN107798281B (en) * 2016-09-07 2021-10-08 北京眼神科技有限公司 Face living body detection method and device based on LBP (local binary pattern) characteristics
CN108985259B (en) * 2018-08-03 2022-03-18 百度在线网络技术(北京)有限公司 Human body action recognition method and device
CN110298279A (en) * 2019-06-20 2019-10-01 暨南大学 A kind of limb rehabilitation training householder method and system, medium, equipment
CN110472554B (en) * 2019-08-12 2022-08-30 南京邮电大学 Table tennis action recognition method and system based on attitude segmentation and key point features
KR102363879B1 (en) * 2019-11-05 2022-02-16 대한민국(국립재활원장) Method for predicting clinical functional assessment scale using feature values derived by upper limb movement of patients
CN112084967A (en) * 2020-09-12 2020-12-15 周美跃 Limb rehabilitation training detection method and system based on artificial intelligence and control equipment
CN114067358B (en) * 2021-11-02 2024-08-13 南京熊猫电子股份有限公司 Human body posture recognition method and system based on key point detection technology



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant