
CN119339006A - Orthopedics 3D printing model construction method and device based on intelligent AI - Google Patents


Info

Publication number
CN119339006A
Authority
CN
China
Prior art keywords
image
segmentation
printing
dimensional model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202411884589.6A
Other languages
Chinese (zh)
Inventor
严俊伟
张惠康
王啸
蒋东冬
朱家伟
尹昭伟
王黎明
梁斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing Duonuo Information Technology Co ltd
Original Assignee
Nanjing Duonuo Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing Duonuo Information Technology Co ltd filed Critical Nanjing Duonuo Information Technology Co ltd
Priority to CN202411884589.6A priority Critical patent/CN119339006A/en
Publication of CN119339006A publication Critical patent/CN119339006A/en
Pending legal-status Critical Current

Landscapes

  • Apparatus For Radiation Diagnosis (AREA)

Abstract

The invention discloses an orthopedic 3D printing model construction method and device based on intelligent AI, relating to the technical field of images. The method comprises: collecting a user database, configuring an image imaging scheme, and executing multi-view imaging to establish a DICOM-format image dataset; placing the image dataset in a three-dimensional coordinate system and establishing a registration mapping; preprocessing the image dataset, executing bone segmentation, and identifying segmentation confidence; performing image fusion reconstruction according to the registration mapping and the segmentation confidence to generate a three-dimensional model carrying position-complexity identifiers; performing print placement fitting based on the three-dimensional model, determining the gravity direction, and analyzing the geometry to determine a preliminary support area; optimizing support points according to the position complexity and the preliminary support area to establish a selection optimization result; and optimizing the three-dimensional model according to the selection optimization result to generate the user's orthopedic 3D printing model. The technical effects of improving modeling efficiency and precision and improving printing stability are thereby achieved.

Description

Orthopedics 3D printing model construction method and device based on intelligent AI
Technical Field
The invention relates to the technical field of images, in particular to an orthopedic 3D printing model construction method and device based on intelligent AI.
Background
3D printing technology is applied in the medical field, particularly in orthopedic surgery, where it can help doctors observe a patient's bone structure more intuitively, perform preoperative planning and simulation, and improve the accuracy and success rate of surgery. Existing orthopedic 3D printing model construction methods generally rely on fixed parameter modes assisted by manual operation, and suffer from the technical problems of long modeling time, low precision, and poor printing stability.
Disclosure of Invention
The invention provides an orthopedic 3D printing model construction method and device based on intelligent AI, which are used for solving the technical problems of long modeling time consumption, low precision and poor printing stability in the prior art, and realizing the technical effects of improving modeling efficiency and precision and improving printing stability.
In a first aspect, the present invention provides an orthopedic 3D printing model construction method based on intelligent AI, wherein the method comprises:
Acquiring a user database of a user; configuring an image imaging scheme based on the user database; executing multi-view imaging of the user based on the image imaging scheme, establishing an image dataset, and storing the image dataset in DICOM format; establishing a three-dimensional coordinate system, placing the image dataset into the three-dimensional coordinate system, carrying out image registration using the image coordinates in the three-dimensional coordinate system, and establishing a registration mapping; preprocessing the image dataset and executing skeleton segmentation of the preprocessed image dataset to establish segmentation results, wherein the segmentation results carry segmentation confidence identifiers; carrying out image fusion reconstruction according to the registration mapping and the segmentation confidence identifiers to generate a three-dimensional model, and marking the position complexity of the three-dimensional model; carrying out print placement fitting based on the three-dimensional model, determining the gravity direction, carrying out geometric analysis of the three-dimensional model in the gravity direction, and determining a preliminary support area; carrying out selection optimization of support points based on the position complexity and the preliminary support area to establish a selection optimization result; and optimizing the three-dimensional model according to the selection optimization result to establish an orthopedic 3D printing model of the user.
In a second aspect, the present invention further provides an orthopedic 3D printing model construction device based on intelligent AI, wherein the device includes:
The image data acquisition module is used for acquiring a user database of a user, configuring an image imaging scheme based on the user database, executing multi-view imaging of the user based on the image imaging scheme, establishing an image data set, and storing the image data set in a DICOM format.
The three-dimensional registration module is used for establishing a three-dimensional coordinate system, placing the image data set into the three-dimensional coordinate system, carrying out image registration by using the image coordinates in the three-dimensional coordinate system, and establishing registration mapping.
And the skeleton segmentation module is used for carrying out skeleton segmentation on the preprocessed image data set after preprocessing the image data set, and establishing a segmentation result, wherein the segmentation result is provided with a segmentation trust degree mark.
And the fusion reconstruction identification module is used for carrying out image fusion reconstruction according to the registration mapping and the segmentation trust identification, generating a three-dimensional model and identifying the position complexity of the three-dimensional model.
And the placement support module is used for performing printing placement fitting based on the three-dimensional model, determining the gravity direction, performing geometric analysis of the three-dimensional model in the gravity direction, and determining a preliminary support area.
The support optimization module is used for carrying out selection optimization on the support points based on the position complexity and the preliminary support area, and establishing a selection optimization result.
And the entity execution module is used for optimizing the three-dimensional model according to the selection optimizing result and establishing an orthopedics 3D printing model of the user.
The invention discloses an orthopedic 3D printing model construction method and device based on intelligent AI, comprising: collecting user database information of a user; making an image imaging scheme based on the database; executing multi-view imaging of the user to generate an image dataset and storing it as a DICOM-format file; establishing a three-dimensional coordinate system, placing the image dataset into the coordinate system, and carrying out image registration with the image coordinates in the three-dimensional coordinate system to form a registration mapping relation; preprocessing the image dataset and executing a skeleton segmentation operation on the preprocessed images to generate a segmentation result; carrying out fusion reconstruction on the images according to the registration mapping and the segmentation confidence identifiers to generate a three-dimensional model, and identifying the spatial position complexity of the three-dimensional model; carrying out print placement fitting of the three-dimensional model, determining the gravity direction, carrying out geometric analysis based on the gravity direction, and determining a preliminary support area; optimizing the selection of support points based on the position complexity and the preliminary support area to generate a selection optimization result; and carrying out optimization processing on the three-dimensional model according to the selection optimization result to finally generate the orthopedic 3D printing model of the user. The method and device solve the technical problems of long modeling time, low precision, and poor printing stability, and achieve the technical effects of improving modeling efficiency and precision and improving printing stability.
Drawings
Fig. 1 is a schematic flow chart of an orthopedic 3D printing model construction method based on intelligent AI.
Fig. 2 is a schematic structural diagram of the orthopedic 3D printing model construction device based on intelligent AI.
Reference numerals: an image data acquisition module 11, a three-dimensional registration module 12, a skeleton segmentation module 13, a fusion reconstruction identification module 14, a placement support module 15, a support optimization module 16, and an entity execution module 17.
Detailed Description
The technical scheme provided by the embodiment of the invention aims to solve the technical problems of long modeling time consumption, low precision and poor printing stability in the prior art, and adopts the following overall thought:
First, a user database of the user is acquired and obtained, the database containing basic information and medical image data of the user. Based on the user database, an image imaging scheme is configured, which may include image acquisition modes of different angles, such as CT, MRI, etc.
The user is then imaged at multiple perspectives based on the configured imaging scheme, and the acquired image dataset is saved in DICOM format for subsequent processing and analysis.
Next, a three-dimensional coordinate system is established and the acquired image dataset is placed into it; image registration using the image coordinates ensures that images from different view angles correspond and map correctly, forming a registration mapping relation. After the image dataset is preprocessed, a skeleton segmentation operation is carried out to extract skeletal structure information, and the confidence of each segmentation result, i.e., the accuracy of each segmented region, is marked. Images from different view angles are then fused and reconstructed according to the registration mapping and the segmentation confidence to generate a three-dimensional skeleton model of the user. Meanwhile, based on the segmentation results and the image complexity, the position complexity of the three-dimensional model is identified, which is important for support point selection and 3D printing.
Then, print placement fitting is carried out based on the generated three-dimensional model: the gravity direction during printing is determined, geometric analysis is performed, areas possibly needing support are identified, and a preliminary support area is determined. Combining the position complexity of the three-dimensional model with the preliminary support area, the support points are optimized and selected to find the most suitable support point layout, generating a support point selection optimization result. Finally, the three-dimensional model is optimized based on the support point optimization result, and the orthopedic 3D printing model of the user is generated.
The model will be used for surgical planning, prosthesis design or other orthopedics related applications.
The foregoing aspects will be better understood by reference to the following detailed description of the invention taken in conjunction with the accompanying drawings. It should be apparent that the described embodiments are only some, not all, of the embodiments of the present invention; they are used only to explain the invention and do not limit it. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention. It should be noted that, for convenience of description, only the parts of the drawings relevant to the present invention are shown.
Example 1
Fig. 1 is a flow chart of an orthopedic 3D printing model construction method based on intelligent AI, wherein the method comprises the following steps:
Acquiring a user database of a user, configuring an image imaging scheme based on the user database, executing multi-view imaging of the user based on the image imaging scheme, establishing an image dataset, and storing the image dataset in a DICOM format.
Specifically, firstly, a user database is interacted to obtain user data of a target user, and the user data reflects user characteristic information of the target user and is used for providing reference basis for imaging. The user database comprises data of body type, body posture, position to be modeled, modeling precision requirement and the like of a user.
Specifically, an image imaging scheme is configured based on a user database, related information such as bone position, body shape, posture, sex, age and the like of a user is extracted from the user database to serve as basic data for making the image imaging scheme, and then specific imaging requirements are determined according to health conditions or bone requirements of the user. For example, whether certain regions need to be focused on, whether there is a particular imaging angle or resolution requirement, etc. Then, parameters of the imaging device including device position (height and distance of the imaging device), imaging angle, radiation dose, etc. are adjusted according to the user's body structure data (e.g., bone position, body shape) to ensure coverage of the region of interest to be acquired and to provide acquisition resolution satisfying resolution requirements. Through the steps, the image imaging scheme is ensured to be suitable for the personalized requirements and physical characteristics of the user, so that clear and accurate images are obtained.
Further, the user is imaged at multiple viewing angles according to an imaging scheme. The image data of the user is captured through different angles and directions, so that comprehensive and accurate image information is ensured to be obtained. Illustratively, shooting is performed from a plurality of angles such as front, back, left, right, oblique and the like, and all shot image data are classified and sorted according to an imaging sequence or a view angle sequence, so that a complete image data set is established. The image dataset contains all view images, ensuring that the target region can be reconstructed and viewed completely through multiple views in subsequent analysis.
Specifically, the processed image dataset is converted and saved into DICOM (Digital Imaging and Communications in Medicine) format. The DICOM file contains the necessary user information (e.g., name, ID), imaging device parameters, imaging time, and other relevant metadata for subsequent access, retrieval, and analysis.
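As a sketch of how a multi-view acquisition might be organized and tagged with DICOM-style metadata, the following Python fragment builds an ordered dataset. The `ViewImage` class, field names, and metadata keys are illustrative assumptions, not the patent's data model; a real implementation would write actual DICOM files with a library such as pydicom.

```python
from dataclasses import dataclass, field

@dataclass
class ViewImage:
    """One acquired view with DICOM-style metadata (names are illustrative)."""
    view_angle: str            # e.g. "front", "left", "oblique"
    pixels: list               # placeholder for the real 2D pixel array
    metadata: dict = field(default_factory=dict)

def build_image_dataset(views, patient_id, modality="CT"):
    """Order views by acquisition sequence and attach shared metadata,
    mimicking DICOM tags (PatientID, Modality, InstanceNumber)."""
    dataset = []
    for seq, (angle, pixels) in enumerate(views):
        dataset.append(ViewImage(
            view_angle=angle,
            pixels=pixels,
            metadata={"PatientID": patient_id,
                      "Modality": modality,
                      "InstanceNumber": seq + 1},
        ))
    return dataset

views = [("front", [[0]]), ("left", [[1]]), ("right", [[2]])]
ds = build_image_dataset(views, patient_id="U001")
```

Sorting by `InstanceNumber` later recovers the imaging sequence, which is what the registration step relies on.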
Establishing a three-dimensional coordinate system, placing the image dataset into the three-dimensional coordinate system, carrying out image registration by using image coordinates in the three-dimensional coordinate system, and establishing a registration mapping.
Specifically, first, a three-dimensional coordinate system is established as a reference system for image data. The coordinate system is constructed based on the physical position of the device, the center point of the imaging part or other space with fixed reference points, and each axis of the three-dimensional coordinate system respectively represents X, Y, Z three directions, so that the position of each image in the three-dimensional space can be accurately positioned.
In particular, an image dataset resulting from multi-view imaging is read. And determining the initial position and direction of each image in the three-dimensional coordinate system according to the specific view angle, the position information and the imaging time of the imaging device corresponding to each image data, and placing each image in the three-dimensional coordinate system according to the position information.
Specifically, the multi-view image data in the three-dimensional coordinate system are initial image data restored according to the acquisition direction. In actual acquisition, because the user's posture or motion changes, the multi-view image data may not fit together perfectly, making it difficult to form accurate bone image data; further image registration is therefore needed to align images of different view angles and establish a consistent three-dimensional model. The purpose of registration is to correct differences in position, rotation, scaling, etc. between the multi-view images by a transformation matrix. Illustratively, image registration in the three-dimensional coordinate system is performed based on registration algorithms such as RANSAC, 4PCS, and ICP.
Illustratively, first, key feature points common to neighboring images are extracted from each image, and the feature points may be determined based on physical structures, edges, textures, or other salient features in the image. Then, a set of four points is extracted from the images at different perspectives using the 4PCS algorithm, the geometric relationships of the four points in the images remaining consistent. Further, the spatial positions of the point sets in the adjacent images are compared, and a similarity transformation between the images is acquired. Finally, the optimal rigid transformation matrix (including rotation, translation and scaling) is solved for the key point set matches. This matrix will be used to transform one of the images so that it is aligned with the images of the other perspectives.
Further, after the optimal transformation matrix is obtained, the matrix is applied to transform the images, all the image data sets are registered, so that the image data sets are aligned to the same reference frame in a three-dimensional coordinate system, and a registration mapping relation of each image is established, wherein the registration mapping relation records the original positions of the images, the transformation matrix after registration and the coordinate positions of the transformation matrix in the three-dimensional space.
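A minimal NumPy sketch of applying the solved transform and recording the registration mapping described above: the rigid transform here is constructed directly (rotation about Z plus translation) rather than estimated by 4PCS/ICP, and the mapping record's fields are illustrative assumptions.

```python
import numpy as np

def make_rigid_transform(rotation_deg, translation):
    """4x4 homogeneous matrix: rotation about the Z axis plus a translation.
    Stands in for the optimal matrix a registration algorithm would solve."""
    t = np.radians(rotation_deg)
    T = np.eye(4)
    T[:2, :2] = [[np.cos(t), -np.sin(t)],
                 [np.sin(t),  np.cos(t)]]
    T[:3, 3] = translation
    return T

def register_view(points, transform):
    """Apply the transform to homogeneous points and return a registration-
    mapping record: original positions, the matrix, and aligned coordinates."""
    homog = np.c_[points, np.ones(len(points))]       # N x 4 homogeneous points
    aligned = (transform @ homog.T).T[:, :3]          # back to N x 3
    return {"original": points, "transform": transform, "aligned": aligned}

pts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
m = register_view(pts, make_rigid_transform(90, [0, 0, 5]))
```

The stored record is exactly what the text calls the registration mapping relation: original position, transformation matrix, and the coordinates in the shared three-dimensional frame.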
And after preprocessing the image dataset, executing skeleton segmentation of the preprocessed image dataset, and establishing a segmentation result, wherein the segmentation result is provided with a segmentation trust degree mark.
Specifically, image segmentation is used to separate skeletal regions in the preprocessed image dataset from other tissues (e.g., muscle, fat, etc.). Candidate segmentation methods include threshold segmentation, region growing, level set methods, and deep learning methods.
Illustratively, convolutional neural networks (CNNs) are widely used for image segmentation tasks. First, a set of labeled training data is prepared, such as a set of CT images with corresponding bone segmentation labels, where each label is a Boolean value indicating whether a pixel belongs to bone. The CNN is then trained using the training data, involving forward propagation, backward propagation, and parameter updating (e.g., stochastic gradient descent). After training, the network can carry out skeleton segmentation on new CT images and output segmentation results with segmentation confidence identifiers. In other words, the preprocessed image is input into the network, which outputs a segmented image (the segmentation result), wherein the value of each pixel represents the probability that the pixel belongs to bone (the segmentation confidence).
Through the steps of the method, the skeleton of the image data set is segmented, so that pixels belonging to the skeleton in the image data can be accurately identified, and the construction precision of a follow-up three-dimensional model is improved.
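A minimal sketch of the confidence-bearing segmentation output described above: the trained network itself is omitted and its raw per-pixel logits are taken as given, so `segment_with_confidence` (a hypothetical helper) only shows how a probability map yields both the Boolean mask and the per-pixel confidence.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def segment_with_confidence(logits, threshold=0.5):
    """Turn per-pixel network logits into a bone mask plus a per-pixel
    confidence map. `logits` stands in for the CNN's raw output."""
    prob = sigmoid(logits)                 # probability each pixel is bone
    mask = prob >= threshold               # Boolean segmentation result
    confidence = np.where(mask, prob, 1.0 - prob)  # confidence in chosen label
    return mask, confidence

logits = np.array([[4.0, -4.0],
                   [0.2, -0.1]])
mask, conf = segment_with_confidence(logits)
```

Pixels with logits near zero get labels with confidence near 0.5, which is exactly the signal the later fusion step down-weights.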
And carrying out image fusion reconstruction according to the registration mapping and the segmentation confidence identification, generating a three-dimensional model, and identifying the position complexity of the three-dimensional model.
Specifically, the multi-view image data is aligned into a unified three-dimensional coordinate system based on the established registration map. Through the registration mapping, the skeleton region of each view image can be accurately corresponding to the same three-dimensional space position, and then the segmentation result of each view is weighted by utilizing the segmentation trust degree identification, wherein the part with higher segmentation trust degree occupies larger weight in the three-dimensional model, thereby improving the reliability and the precision of the model. Then, the segmented bone images for each view are fused.
Optionally, a voxel-level fusion method is adopted to combine the segmentation results of each view angle, eliminating the imperfect fitting caused by changes in the user's posture or motion, and the fused image data are converted into three-dimensional voxel data.
Optionally, based on the voxel data, a mesh structure of the three-dimensional model is generated by Marching Cubes or another surface reconstruction algorithm, presenting the skeleton structure in the form of a three-dimensional geometric model. Preferably, smoothing, denoising, or hole filling is adaptively performed on the generated three-dimensional model to improve its precision and quality.
Specifically, the complexity of different parts of the model is quantified by means of curvature analysis, shape complexity analysis or density analysis and the like, and the position complexity of a plurality of positions is obtained. The higher complexity of the parts means that the areas are more prone to errors in the actual imaging or reconstruction process, and meanwhile, the printing difficulty is higher in the subsequent 3D printing execution process, and fine support setting and printing slicing are required.
Through the steps, a three-dimensional skeleton model with high precision and definite position complexity identification can be generated, and a reliable model foundation is provided for subsequent printing analysis and planning.
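One simple stand-in for the complexity quantification above (curvature or shape-complexity analysis) is the local standard deviation of surface heights in a sliding window: flat regions score near zero, sharply varying regions score high. The function below is an illustrative proxy, not the patent's metric.

```python
import numpy as np

def position_complexity(height_map, window=3):
    """Score local shape complexity as the standard deviation of surface
    heights in a sliding window over a 2D height map."""
    h, w = height_map.shape
    half = window // 2
    scores = np.zeros((h, w), dtype=float)
    for i in range(h):
        for j in range(w):
            patch = height_map[max(0, i - half):i + half + 1,
                               max(0, j - half):j + half + 1]
            scores[i, j] = patch.std()   # high std = locally complex surface
    return scores

flat = np.zeros((5, 5))
bumpy = flat.copy()
bumpy[2, 2] = 5.0                        # a single sharp protrusion
scores = position_complexity(bumpy)
```

Regions whose score exceeds a chosen threshold would be the ones marked high-complexity for support planning and fine slicing.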
In some embodiments, performing image fusion reconstruction according to the registration map and the segmentation confidence identification further comprises:
the weights are configured according to the registration mapping, as follows:

w_i = (Q_i × C_i) / Σ_{j=1..N} (Q_j × C_j)

wherein w_i denotes the weight of the i-th view image after registration mapping, Q_i denotes the image quality score of the i-th view image, C_i denotes the segmentation confidence identifier of the i-th view image, N denotes the number of view angles corresponding to the registration mapping, and i is the view index.
And carrying out pixel fusion according to the configured weights, as follows:

P_fused = Σ_{i=1..N} w_i × P_i

wherein P_fused denotes the fused pixel value and P_i is the pixel value at the corresponding position of the i-th view image.
And finishing image fusion reconstruction according to the fused pixel values.
Optionally, before image fusion reconstruction, the image of each view is given a corresponding weight, which depends on the quality of the image and the confidence level of segmentation. Specifically, firstly, based on the weight configuration formula, the weights of fusion of a plurality of view images are calculated, wherein the higher the image quality is, the larger the segmentation confidence identification is, the higher the corresponding weights are, and then, pixel fusion is carried out according to the configured weights. Specifically, the fusion value for each pixel location is derived from a weighted average of all the pixel values for the corresponding location and their weights, ensuring that pixels of the high quality, high segmentation confidence image are prioritized. And finally, stacking the fused multi-view images to generate a three-dimensional image or directly carrying out pixel fusion in a three-dimensional space to finish image fusion reconstruction. Through the steps, the quality and the trust degree of each view angle image are ensured to be considered in the image fusion process, so that a more accurate fusion result is generated.
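The weight configuration and pixel fusion steps can be sketched directly in NumPy, following the normalized-weight and weighted-average formulas described above; the quality scores and confidence values here are illustrative inputs.

```python
import numpy as np

def fusion_weights(quality, confidence):
    """w_i = Q_i*C_i / sum_j Q_j*C_j: higher quality and higher segmentation
    confidence yield a larger (normalized) fusion weight."""
    raw = np.asarray(quality, dtype=float) * np.asarray(confidence, dtype=float)
    return raw / raw.sum()

def fuse_pixels(images, weights):
    """P_fused = sum_i w_i * P_i: per-pixel weighted average over all views."""
    stack = np.stack(images, axis=0).astype(float)   # (views, H, W)
    return np.tensordot(weights, stack, axes=1)      # (H, W)

imgs = [np.full((2, 2), 10.0),   # high-confidence view
        np.full((2, 2), 20.0)]   # low-confidence view
w = fusion_weights(quality=[1.0, 1.0], confidence=[0.9, 0.1])
fused = fuse_pixels(imgs, w)
```

With equal quality scores, the 0.9-confidence view dominates, so the fused value sits close to its pixel values, which is the prioritization behavior the text describes.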
In some implementations, performing image fusion reconstruction according to the registration mapping and the segmentation confidence identifier, generating a three-dimensional model, and identifying a position complexity of the three-dimensional model, further including:
Extracting the fused data amount of the image fusion reconstruction; carrying out adaptation evaluation according to the fused data amount and the position complexity, and establishing an adaptation anomaly identifier; generating an additional acquisition instruction based on the adaptation anomaly identifier; controlling the imaging device to carry out additional data acquisition through the additional acquisition instruction; and carrying out image fusion reconstruction compensation according to the additional data acquisition result.
Specifically, based on a three-dimensional sliding window, the fused data amount at a plurality of positions of the three-dimensional model and the corresponding position complexity are extracted by sliding. The three-dimensional sliding window is a three-dimensional space of preset size (either a finite space whose extent is set in all three coordinate directions, or an interval space bounded in two coordinates and extending along the remaining one), determined based on the shape characteristics of the target skeleton. Then, the position complexity corresponding to the acquired fused data amount is calculated, where the position complexity comprises the sum of the complexities of the unit parts of the three-dimensional model within the three-dimensional sliding window. Adaptation evaluation is performed based on the acquired fused data amount and the corresponding position complexity; if the ratio of the fused data amount to the position complexity does not meet the preset adaptation constraint, an adaptation anomaly identifier is generated.
For example, if the ratio of the amount of fused data to the complexity of the location is less than the lower limit of the preset fit constraint, the amount of fused data for that location is too small relative to its complexity, the modeling quality of the region is inadequate, and more data is needed to provide a more accurate reconstruction.
Further, an additional acquisition instruction is generated according to the adaptation anomaly identification, the imaging device is guided to acquire more data in a corresponding area of the adaptation anomaly identification, and image fusion and reconstruction are carried out again according to an acquired additional data acquisition result, so that the image quality of the position is improved.
By the method, the complexity of the positions is combined, the quality balance of the three-dimensional model after image fusion reconstruction at different positions is effectively ensured, reconstruction errors caused by insufficient data at the complicated positions are avoided, and the fact that enough data are available at all positions to perform accurate image fusion and reconstruction is ensured, so that the final image quality is improved.
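A minimal sketch of the adaptation evaluation just described: compare each region's fused data amount to its position complexity and flag regions whose ratio falls outside a preset constraint. The threshold values and the flag record format are illustrative assumptions.

```python
def evaluate_adaptation(fused_amount, complexity, lower=0.5, upper=4.0):
    """Flag regions whose fused-data-to-complexity ratio violates the
    preset fit constraint [lower, upper]; each flag would trigger an
    additional-acquisition instruction for that region."""
    flags = []
    for region, (amount, cx) in enumerate(zip(fused_amount, complexity)):
        ratio = amount / cx
        if not (lower <= ratio <= upper):
            flags.append({"region": region,
                          "ratio": ratio,
                          "action": "acquire additional data"})
    return flags

# Region 1 has little fused data relative to its complexity and gets flagged.
flags = evaluate_adaptation(fused_amount=[100, 10, 80],
                            complexity=[50, 40, 30])
```

Each flag identifies the region the imaging device should revisit, after which fusion reconstruction is rerun with the extra data.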
And performing printing placement fitting based on the three-dimensional model, determining the gravity direction, performing geometric analysis of the three-dimensional model according to the gravity direction, and determining a preliminary supporting area.
Specifically, firstly, the generated three-dimensional model is placed in a virtual printing environment, the printing placement position and angle of the model are simulated, and the optimal placement angle and direction of the model on a printing platform are determined through fitting analysis, so that the stability and the support requirement of the model in the printing process are minimized.
Specifically, after fitting is completed, the gravity direction of the model, namely the action direction of gravity in the actual printing process, is determined, so that the stress condition and the potential deformation area of the model in the printing process can be determined.
Specifically, an analysis is performed based on the geometry of the three-dimensional model in conjunction with the direction of gravity. Emphasis is placed on identifying areas of the model that may deform under gravity or require additional support, such as protrusions, overhanging portions, surfaces with large inclination angles, and complex structures of the model.
Wherein, according to the results of the geometric analysis, the preliminary support area of the model is determined. The preliminary support area provides additional support during printing, preventing collapse or deformation of the model due to gravity or structural weakness. In addition, the choice of support area also takes into account how to reduce the difficulty of removing the supports after printing, thereby optimizing printing efficiency.
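The geometric analysis in the gravity direction can be sketched with the common overhang heuristic: a surface needs support when its outward normal points downward past a threshold angle. The 45-degree threshold below is a typical printer default, used here as an assumption rather than a value from the patent.

```python
import numpy as np

def find_support_faces(normals, gravity=(0, 0, -1), max_overhang_deg=45.0):
    """Return a Boolean mask over faces: True where the face normal is
    within max_overhang_deg of the gravity direction, i.e. the face
    overhangs steeply enough to need support during printing."""
    g = np.asarray(gravity, dtype=float)
    g = g / np.linalg.norm(g)
    n = np.asarray(normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    cos_angle = n @ g                       # 1.0 = facing straight down
    return cos_angle > np.cos(np.radians(max_overhang_deg))

normals = [(0, 0, -1),   # faces straight down: needs support
           (0, 0, 1),    # faces up: fine
           (1, 0, 0)]    # vertical wall: fine at the 45-degree threshold
needs = find_support_faces(normals)
```

The union of flagged faces, projected onto the build plate, is one way to obtain the preliminary support area the text describes.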
And carrying out selective optimizing of the supporting points based on the position complexity and the preliminary supporting area, and establishing a selective optimizing result.
In some embodiments, the selecting and optimizing the supporting point based on the position complexity and the preliminary supporting area, and establishing a selecting and optimizing result further includes:
Taking each preliminary support area as an independent region; establishing a region objective function based on the region information of the preliminary support areas, wherein the evaluation features of the region objective function comprise the number of support points, material, stability, and post-processing difficulty; carrying out region evaluation on the independent regions and establishing region associations among them, wherein the region associations comprise cooperative associations and competing associations, and the region evaluation comprises spatial proximity analysis, mechanical coupling analysis, and material sharing analysis; establishing limit constraints on the support points and configuring a solution space with the limit constraints; taking the region objective function as the evaluation function, carrying out selection optimization of the support points in the solution space; carrying out iterative compensation of the selection optimization through the region associations; and establishing a selection optimization result.
Specifically, the grading index dimensions of the preliminary support area include the number of support points, material, stability, and post-processing difficulty. The fewer the support points, the smaller the corresponding support contact area and the smaller the impact on the model surface quality; correspondingly, less material is consumed and less waste is generated when removing the support. Stability is another important factor in evaluating the support structure: the support structure should keep the model stable throughout the printing process, preventing it from moving or collapsing, and stability is determined by the rigidity of the support structure and the stability of its connection to the model body. Post-processing includes removing the support structure and cleaning and repairing the model surface. If the support structure is too complex or difficult to remove, post-processing difficulty increases, which in turn increases manufacturing time and cost.
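A minimal sketch of such a regional objective function, assuming the four evaluation features are normalized to [0, 1] and combined with illustrative weights (the weight values are assumptions, not taken from the disclosure):

```python
def region_objective(num_supports, material_use, stability, post_difficulty,
                     weights=(0.3, 0.2, 0.3, 0.2)):
    """Score a candidate support layout for one region (lower is better).

    num_supports, material_use, post_difficulty are normalized costs in [0, 1];
    stability in [0, 1] is a benefit, so it enters as (1 - stability).
    """
    w_n, w_m, w_s, w_p = weights
    return (w_n * num_supports + w_m * material_use
            + w_s * (1.0 - stability) + w_p * post_difficulty)

# A sparser layout with equal stability scores better (lower cost).
sparse = region_objective(0.2, 0.2, 0.9, 0.3)
dense = region_objective(0.8, 0.7, 0.9, 0.3)
print(sparse < dense)  # -> True
```

A real implementation would derive these feature scores from the generated support geometry rather than pass them in directly.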
In particular, among multiple independent regions there may be a cooperative association (i.e., they work together in some way) or a competitive association (i.e., they conflict in terms of resources or space). By performing spatial proximity analysis, mechanical coupling analysis, and material sharing analysis on the independent regions, the region associations of the independent regions may be obtained.
Wherein spatial proximity analysis is used to evaluate the spatial relationships between regions, such as the distance and relative position between individual regions, which helps determine which regions may need to share a support structure or which regions may interfere with each other during printing. Mechanical coupling analysis is used to evaluate the mechanical interactions, such as stress and deformation, that may occur in the various regions during printing, helping to determine the load the support structure must bear and to predict printing problems that may arise. Material sharing analysis is used to evaluate the material requirements of the various regions during printing, as well as possible material sharing strategies. For example, if two adjacent independent areas have the same or similar orientation, a shared support structure can be devised to increase support stability while reducing material usage and printing time. Through these evaluations, the influence relationships among different independent areas are established, so that the 3D printing process is optimized, efficiency is improved, cost is reduced, and printing quality is improved.
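Spatial proximity analysis can be sketched as a pairwise distance test between region centroids; the distance threshold below is an assumed parameter in the same units as the model:

```python
import math

def proximity_associations(centroids, share_dist=10.0):
    """Pair up regions whose centroids lie within share_dist of each other
    as candidates for a shared support structure."""
    pairs = []
    for i in range(len(centroids)):
        for j in range(i + 1, len(centroids)):
            if math.dist(centroids[i], centroids[j]) <= share_dist:
                pairs.append((i, j))
    return pairs

# Regions 0 and 1 are close enough to share a support; region 2 is not.
regions = [(0, 0, 0), (4, 0, 0), (50, 0, 0)]
print(proximity_associations(regions))  # -> [(0, 1)]
```

Mechanical coupling and material sharing analyses would add further edges to this association graph; they are omitted here for brevity.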
Optionally, the limit constraints are constraints imposed on the variables of the optimization problem, defining the boundaries of the solution space. Exemplary limit constraints include print platform size, support form (e.g., tree, geometric curve, hybrid, etc.), support spacing, and support generation threshold. The solution space is the set of all possible solutions; after the limit constraints are set, it represents the region admissible under those constraints. Preferably, the features of the solution space are described with graphical and mathematical models. By setting limit constraints and configuring the solution space, the scope of the optimization problem can be effectively narrowed and the solution efficiency improved.
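One way to realize this constrained selection optimization is a simple search over a discretized solution space; the candidate spacings, the spacing bounds, and the toy cost function below are assumptions for illustration only:

```python
def select_support_spacing(objective, spacings, min_spacing=2.0, max_spacing=12.0):
    """Pick the candidate support spacing with the lowest objective value,
    considering only candidates inside the limit constraints."""
    feasible = [s for s in spacings if min_spacing <= s <= max_spacing]
    if not feasible:
        raise ValueError("solution space is empty under the given constraints")
    return min(feasible, key=objective)

# Toy objective: too-dense supports waste material, too-sparse ones risk collapse.
cost = lambda s: (s - 6.0) ** 2
print(select_support_spacing(cost, [1.0, 4.0, 5.0, 8.0, 20.0]))  # -> 5.0
```

In the method described here, the evaluation function would be the regional objective function rather than this toy cost, and the search variables would cover support form and generation threshold as well.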
In some implementations, the iterative compensation for selection optimization by region association further includes:
The method comprises: establishing cooperative groups and competitive groups from the regional associations; constructing a joint objective function from the regional objective functions corresponding to the cooperative groups, and performing joint optimization of the cooperative groups with the joint objective function; ranking the competitive groups by priority and establishing a ranking result; establishing sequential optimization based on the ranking result; and completing the iterative compensation according to the joint optimization and the sequential optimization.
Specifically, first, according to the inter-region relationships, the multiple independent regions are divided into cooperative groups and competitive groups. A cooperative group includes several independent regions with a cooperative relationship, such as several model branches overhanging toward the same side. A competitive group includes several independent regions with a competing relationship, such as model branches overhanging in opposite directions, or model parts that conflict in space or resources, or whose printing order may affect printing efficiency and quality.
Specifically, a joint objective function is constructed from the objective functions of the cooperative groups to obtain a printing strategy that maximizes these synergistic effects. The regions in the competitive groups are then prioritized, and optimization is performed based on this ranking, including, for example, deciding which regions should be printed first and how to arrange the printing order to minimize conflicts and waste of resources.
Furthermore, iterative compensation is performed according to the results of the joint optimization and the sequential optimization, further improving printing efficiency and quality by adjusting printing parameters or modifying the support design. These method steps account for the complexity of the model by considering the relationships and interactions between the various regions, and their impact on the overall objective, to obtain an optimal printing strategy.
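A toy sketch of the grouping and ordering step, assuming regions arrive with pairwise association labels and per-region priority scores (both of which would come from the regional evaluation described above):

```python
def plan_print_order(regions, associations, priority):
    """Split regions into cooperative pairs (optimized jointly) and rank the
    remaining, competitively associated regions by descending priority.

    associations maps a frozenset pair of region ids to 'coop' or 'compete'.
    """
    coop_pairs = [p for p, kind in associations.items() if kind == "coop"]
    in_coop = set().union(*coop_pairs) if coop_pairs else set()
    competitive = [r for r in regions if r not in in_coop]
    order = sorted(competitive, key=lambda r: priority[r], reverse=True)
    return coop_pairs, order

assoc = {frozenset({"A", "B"}): "coop", frozenset({"C", "D"}): "compete"}
groups, order = plan_print_order(["A", "B", "C", "D"], assoc, {"C": 2, "D": 5})
print(order)  # -> ['D', 'C']
```

The iterative compensation loop would re-run this planning after each adjustment of printing parameters or support design, which is not shown here.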
Then, the three-dimensional model is optimized according to the selection optimization result, and the orthopedic 3D printing model of the user is established.
In some embodiments, the method further comprises:
The method comprises the steps of: performing fixed-point position recognition of the model on the three-dimensional model; performing scale division based on the fixed-point position recognition result, establishing M scale division results; performing spatial scale filtering from coarse scale to fine scale based on the M scale division results; after all scale division results have been filtered, performing filtering fusion of the scale division results; and updating the three-dimensional model according to the filtering fusion result.
Specifically, first, key points or feature points of the 3D model are identified as fixed-point positions for scale division, and a scale division result is obtained. The scale division result includes a series of sub-models or regions, each with its own scale or resolution. For example, some parts of the model may be partitioned at a coarse scale (low resolution) and other parts at a fine scale (high resolution). Illustratively, the fixed-point positions include points of curvature discontinuity, points of abrupt complexity change, and the like.
Specifically, the result of each scale division is filtered to eliminate noise and smooth data, and the geometric shape and structure of each scale division are optimized to improve printing efficiency and quality. Preferably, filtering is performed by adaptive parameter tuning to process different parts of the model at different scales to accommodate the complexity and diversity of the model.
Illustratively, the filtering parameters are adjusted according to the curvature or complexity of the different portions of the model to optimize the printing effect of each portion. For coarse-scale parts of the model (lower curvature or complexity), larger filter parameters are used to smooth large geometries or structures, while for fine-scale parts (higher curvature or complexity), smaller filter parameters preserve detail, improving the print quality of the model while improving print efficiency.
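A minimal sketch of such curvature-adaptive parameter tuning; the two-level threshold and the coefficient values are assumptions, not values from the disclosure:

```python
def smoothing_coefficient(curvature, threshold=0.5, coarse=0.8, fine=0.2):
    """Choose a per-vertex smoothing coefficient: flat (coarse-scale) areas
    get strong smoothing, high-curvature (fine-scale) areas keep detail."""
    return coarse if curvature < threshold else fine

print([smoothing_coefficient(c) for c in (0.1, 0.9)])  # -> [0.8, 0.2]
```

A production implementation might interpolate the coefficient continuously from curvature rather than use a hard threshold.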
Further, the filtering results of all scale divisions are fused together, and then the 3D model is updated according to the fusion results so as to achieve the optimal printing effect. Through the method steps, the characteristic point analysis and the filtering method are combined, the 3D model is subjected to multi-scale analysis and optimization, and the printing efficiency and quality are improved.
In some implementations, spatial scale filtering from coarse scale to fine scale is performed based on the M scale division results, further comprising:
spatial scale filtering is performed by the formula as follows:
v_i' = v_i + α · Σ_{j∈N(i)} w_ij · (v_j − v_i) + β · n_i
Wherein, v_i' represents the new position vector of vertex i after the filtering process, v_i is the original position vector of vertex i, α is the smoothing coefficient, N(i) characterizes the set of vertices adjacent to vertex i, j characterizes any vertex adjacent to vertex i, w_ij is the weight between vertex i and vertex j, characterizing the degree of influence of vertex j on the position update of vertex i, v_j is the position vector of vertex j, β is the normal deviation control factor, and n_i is the normal vector of vertex i.
In particular, the smoothing coefficient α determines the degree of position smoothing: a larger value moves the vertex position closer to the average position of its adjacent vertices, while a smaller value retains more of the original position information. The normal deviation control factor β adjusts the movement of the vertex along the normal direction and is used to modify the surface geometry.
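The formula above is a Laplacian-style smoothing step with a normal offset; a direct per-vertex sketch follows, where uniform neighbor weights w_ij = 1/|N(i)| are an assumed choice (the disclosure leaves the weighting open):

```python
import numpy as np

def filter_vertex(v_i, neighbors, normal, alpha=0.5, beta=0.0):
    """One smoothing update: v_i' = v_i + alpha * sum_j w_ij (v_j - v_i) + beta * n_i,
    using uniform neighbor weights w_ij = 1/len(neighbors)."""
    v_i = np.asarray(v_i, dtype=float)
    nbrs = np.asarray(neighbors, dtype=float)
    w = 1.0 / len(nbrs)
    laplacian = (w * (nbrs - v_i)).sum(axis=0)
    return v_i + alpha * laplacian + beta * np.asarray(normal, dtype=float)

# A vertex pulled halfway toward the centroid of its two neighbors (beta = 0).
new_v = filter_vertex([0, 0, 0], [[2, 0, 0], [0, 2, 0]], normal=[0, 0, 1])
print(new_v)  # -> [0.5 0.5 0. ]
```

With a positive beta, each vertex would additionally shift along its normal, which is how the normal deviation control factor modifies the surface geometry.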
In summary, the orthopedic 3D printing model construction method based on intelligent AI provided by the invention has the following technical effects:
The method comprises: acquiring the user database of the user and formulating an image imaging scheme based on the database; performing multi-view image imaging of the user, generating an image dataset, and storing the image dataset as a DICOM-format file; establishing a three-dimensional coordinate system, placing the image dataset into the coordinate system, and registering the images according to the image coordinates in the three-dimensional coordinate system to form a registration mapping relationship; preprocessing the image dataset and performing a bone segmentation operation on the preprocessed images to generate a segmentation result with a segmentation confidence identifier; fusing and reconstructing the images according to the registration mapping and the segmentation confidence identifier to generate a three-dimensional model, and identifying the spatial position complexity of the three-dimensional model; performing printing placement fitting of the three-dimensional model, determining the gravity direction, performing geometric analysis based on the gravity direction, and determining a preliminary support area; optimizing and selecting support points based on the position complexity and the preliminary support area, generating a selection optimization result; and optimizing the three-dimensional model according to the selection optimization result, finally generating the orthopedic 3D printing model of the user. Therefore, the technical effects of improving modeling efficiency and precision and improving printing stability are achieved.
Example two
Fig. 2 is a schematic structural diagram of the intelligent AI-based orthopedic 3D printing model construction device. For example, the method flow shown in the schematic diagram of Fig. 1 can be implemented by the structure shown in Fig. 2.
Based on the same conception as the orthopedic 3D printing model construction method based on the intelligent AI in the embodiment, the orthopedic 3D printing model construction device based on the intelligent AI further comprises:
The image data acquisition module 11 is configured to acquire a user database of a user, configure an image imaging scheme based on the user database, perform multi-view imaging of the user based on the image imaging scheme, establish an image dataset, and store the image dataset in DICOM format.
The three-dimensional registration module 12 is configured to establish a three-dimensional coordinate system, place the image dataset into the three-dimensional coordinate system, perform image registration with image coordinates in the three-dimensional coordinate system, and establish a registration map.
And the skeleton segmentation module 13 is used for performing skeleton segmentation of the preprocessed image data set after preprocessing the image data set, and establishing a segmentation result, wherein the segmentation result is provided with a segmentation trust degree mark.
And the fusion reconstruction identification module 14 is used for carrying out image fusion reconstruction according to the registration mapping and the segmentation trust identification, generating a three-dimensional model and identifying the position complexity of the three-dimensional model.
And the placement support module 15 is used for performing printing placement fitting based on the three-dimensional model, determining the gravity direction, performing geometric analysis of the three-dimensional model in the gravity direction, and determining a preliminary support area.
And the support optimization module 16 is used for carrying out selection optimization of the support points based on the position complexity and the preliminary support area, and establishing a selection optimization result.
And the entity execution module 17 is used for optimizing the three-dimensional model according to the selection optimizing result and establishing an orthopedics 3D printing model of the user.
Wherein the fusion reconstruction identification module 14 comprises:
The registration mapping weight configuration unit is used for configuring weights according to the registration mapping, and the formula is as follows:
w_k = (Q_k · C_k) / Σ_{j=1}^{V} (Q_j · C_j)
Wherein, w_k characterizes the weight of the k-th view image after registration mapping, Q_k characterizes the image quality score of the k-th view image, C_k characterizes the segmentation confidence identifier of the k-th view image, V characterizes the number of views corresponding to the registration mapping, and k is the view index.
The pixel fusion unit is used for performing pixel fusion according to the configured weights, as follows:
P = Σ_{k=1}^{V} w_k · P_k
Wherein, P characterizes the fused pixel value, and P_k is the pixel value at the corresponding position of the k-th view image.
And the image fusion reconstruction unit is used for completing image fusion reconstruction according to the fused pixel values.
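The weight configuration and pixel fusion performed by these units can be sketched directly; per-view quality scores and segmentation confidences are assumed to be given as inputs:

```python
import numpy as np

def fuse_views(view_pixels, quality, confidence):
    """Weighted pixel fusion: w_k proportional to Q_k * C_k (normalized to sum
    to 1), then P = sum_k w_k * P_k at each pixel position."""
    q = np.asarray(quality, dtype=float)
    c = np.asarray(confidence, dtype=float)
    w = q * c
    w /= w.sum()                                  # normalize the weights
    stack = np.asarray(view_pixels, dtype=float)  # shape: (views, H, W)
    return np.tensordot(w, stack, axes=1)         # weighted sum over views

# Two single-pixel views; the higher-confidence view dominates the result.
views = [[[100.0]], [[200.0]]]
fused = fuse_views(views, quality=[1.0, 1.0], confidence=[3.0, 1.0])
print(fused)  # -> [[125.]]
```

In the described device the registered DICOM view images, their quality scores, and the segmentation confidence identifiers would supply these inputs.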
In some implementations, the fused reconstruction identification module 14 further includes:
And the fusion data volume extraction and adaptation evaluation unit is used for extracting the fusion data volume of the image fusion reconstruction, carrying out adaptation evaluation according to the fusion data volume and the position complexity, and establishing an adaptation abnormal identifier.
And the additional acquisition instruction generation unit is used for generating additional acquisition instructions based on the adaptation abnormality identification.
And the additional data acquisition and fusion reconstruction compensation unit is used for controlling the imaging equipment to acquire additional data through the additional acquisition instruction and carrying out image fusion reconstruction compensation according to an additional data acquisition result.
In some embodiments, the support optimization module 16 includes:
And the support region independence processing unit is used for taking each preliminary support region as an independent region, establishing a region objective function based on the region information of the preliminary support region, wherein the evaluation characteristics of the region objective function comprise the number of support points, materials, stability and post-treatment difficulty.
The regional evaluation and association establishing unit is used for carrying out regional evaluation on the independent regions and establishing regional association of the independent regions, wherein the regional association comprises regional cooperative association and regional competitive association, and the regional evaluation comprises spatial proximity analysis, mechanical coupling analysis and material sharing analysis.
The support point limit constraint and selection optimizing unit is used for establishing limit constraint of the support points, configuring a solution space by the limit constraint, taking the regional objective function as an evaluation function, executing selection optimizing of the support points in the solution space, carrying out iterative compensation of selection optimizing by regional association, and establishing a selection optimizing result.
In some implementations, the support point limit constraint and selection optimizing unit in the support optimizing module 16 includes:
And the cooperative packet and competing packet construction unit is used for establishing the cooperative packet and competing packet by the area association.
And the joint objective function construction and collaborative grouping optimization unit is used for constructing a joint objective function through the regional objective function corresponding to the collaborative grouping, and performing collaborative grouping joint optimization through the joint objective function.
And the competitive grouping priority ordering unit is used for ordering the priority orders of the competitive grouping and establishing an ordering result.
And the sequence optimization and iteration compensation unit is used for establishing sequence optimization based on the sequencing result and completing iteration compensation according to the joint optimization and the sequence optimization.
In some embodiments, the system further comprises:
and the three-dimensional model fixed point position recognition unit is used for recognizing the fixed point position of the model for the three-dimensional model, and performing scale division based on the fixed point position recognition result to establish M scale division results.
And the spatial scale filtering unit is used for performing spatial scale filtering from coarse scale to fine scale based on M scale division results.
And the filtering fusion and three-dimensional model updating unit is used for executing the filtering fusion of each scale division result after all the scale division results are filtered, and updating the three-dimensional model according to the filtering fusion result.
In some implementations, the spatial scale filtering unit in the system is configured to:
spatial scale filtering is performed by the formula as follows:
v_i' = v_i + α · Σ_{j∈N(i)} w_ij · (v_j − v_i) + β · n_i
Wherein, v_i' represents the new position vector of vertex i after the filtering process, v_i is the original position vector of vertex i, α is the smoothing coefficient, N(i) characterizes the set of vertices adjacent to vertex i, j characterizes any vertex adjacent to vertex i, w_ij is the weight between vertex i and vertex j, characterizing the degree of influence of vertex j on the position update of vertex i, v_j is the position vector of vertex j, β is the normal deviation control factor, and n_i is the normal vector of vertex i.
It should be understood that the embodiments mentioned in this specification focus on differences from other embodiments, and the specific embodiment in the first embodiment is equally applicable to the orthopedic 3D printing model building device based on intelligent AI described in the second embodiment, which is not further developed herein for brevity of description.
It is to be understood that both the foregoing description and the embodiments of the present invention enable one skilled in the art to utilize the present invention. Meanwhile, the invention is not limited to the above-mentioned embodiments, and it should be understood that those skilled in the art may still modify the technical solutions described in the above-mentioned embodiments or substitute some technical features thereof, and these modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the spirit and scope of the technical solutions of the embodiments of the invention, and all the modifications or substitutions should be included in the protection scope of the invention.

Claims (8)

1. An intelligent AI-based orthopedic 3D printing model construction method, characterized in that the method comprises:
acquiring a user database of a user, configuring an image imaging scheme based on the user database, performing multi-view imaging of the user based on the image imaging scheme, establishing an image dataset, and saving the image dataset in DICOM format;
establishing a three-dimensional coordinate system, placing the image dataset into the three-dimensional coordinate system, performing image registration with the image coordinates in the three-dimensional coordinate system, and establishing a registration mapping;
after preprocessing the image dataset, performing bone segmentation of the preprocessed image dataset and establishing a segmentation result, wherein the segmentation result carries a segmentation confidence identifier;
performing image fusion reconstruction according to the registration mapping and the segmentation confidence identifier, generating a three-dimensional model, and identifying the position complexity of the three-dimensional model;
performing printing placement fitting based on the three-dimensional model, determining a gravity direction, performing geometric shape analysis of the three-dimensional model in the gravity direction, and determining a preliminary support area;
performing selection optimization of support points based on the position complexity and the preliminary support area, and establishing a selection optimization result;
optimizing the three-dimensional model according to the selection optimization result, and establishing an orthopedic 3D printing model of the user.
2. The intelligent AI-based orthopedic 3D printing model construction method according to claim 1, characterized in that performing selection optimization of support points based on the position complexity and the preliminary support area and establishing a selection optimization result further comprises:
taking each preliminary support area as an independent area, and establishing a regional objective function based on the area information of the preliminary support area, wherein the evaluation features of the regional objective function include the number of support points, material, stability, and post-processing difficulty;
performing regional evaluation of the independent areas and establishing regional associations of the independent areas, wherein the regional associations include regional cooperative associations and regional competitive associations, and the regional evaluation includes spatial proximity analysis, mechanical coupling analysis, and material sharing analysis;
establishing limit constraints for the support points, configuring a solution space with the limit constraints, taking the regional objective function as the evaluation function, performing selection optimization of the support points within the solution space, performing iterative compensation of the selection optimization through the regional associations, and establishing a selection optimization result.
3. The intelligent AI-based orthopedic 3D printing model construction method according to claim 2, characterized in that performing iterative compensation of the selection optimization through the regional associations further comprises:
establishing cooperative groups and competitive groups from the regional associations;
constructing a joint objective function from the regional objective functions corresponding to the cooperative groups, and performing joint optimization of the cooperative groups with the joint objective function;
ranking the competitive groups by priority and establishing a ranking result;
establishing sequential optimization based on the ranking result, and completing the iterative compensation according to the joint optimization and the sequential optimization.
4. The intelligent AI-based orthopedic 3D printing model construction method according to claim 1, characterized in that performing image fusion reconstruction according to the registration mapping and the segmentation confidence identifier further comprises:
configuring weights according to the registration mapping, with the following formula:
w_k = (Q_k · C_k) / Σ_{j=1}^{V} (Q_j · C_j)
wherein w_k characterizes the weight of the k-th view image after registration mapping, Q_k characterizes the image quality score of the k-th view image, C_k characterizes the segmentation confidence identifier of the k-th view image, V characterizes the number of views corresponding to the registration mapping, and k is the view index;
performing pixel fusion according to the configured weights, as follows:
P = Σ_{k=1}^{V} w_k · P_k
wherein P characterizes the fused pixel value and P_k is the pixel value at the corresponding position of the k-th view image;
completing image fusion reconstruction according to the fused pixel values.
5. The intelligent AI-based orthopedic 3D printing model construction method according to claim 1, characterized in that performing image fusion reconstruction according to the registration mapping and the segmentation confidence identifier, generating a three-dimensional model, and identifying the position complexity of the three-dimensional model further comprises:
extracting the fused data volume of the image fusion reconstruction, performing adaptation evaluation with the fused data volume and the position complexity, and establishing an adaptation abnormality identifier;
generating an additional acquisition instruction based on the adaptation abnormality identifier;
controlling the imaging device through the additional acquisition instruction to perform additional data acquisition, and performing image fusion reconstruction compensation with the additional data acquisition result.
6. The intelligent AI-based orthopedic 3D printing model construction method according to claim 5, characterized in that the method further comprises:
performing fixed-point position recognition of the model on the three-dimensional model, performing scale division based on the fixed-point position recognition result, and establishing M scale division results;
performing spatial scale filtering from coarse scale to fine scale based on the M scale division results;
after all scale division results have been filtered, performing filtering fusion of the scale division results, and updating the three-dimensional model according to the filtering fusion result.
7. The intelligent AI-based orthopedic 3D printing model construction method according to claim 6, characterized in that performing spatial scale filtering from coarse scale to fine scale based on the M scale division results further comprises:
performing spatial scale filtering with the following formula:
v_i' = v_i + α · Σ_{j∈N(i)} w_ij · (v_j − v_i) + β · n_i
wherein v_i' represents the new position vector of vertex i after the filtering process, v_i is the original position vector of vertex i, α is the smoothing coefficient, N(i) characterizes the set of vertices adjacent to vertex i, j characterizes any vertex adjacent to vertex i, w_ij is the weight between vertex i and vertex j, characterizing the degree of influence of vertex j on the position update of vertex i, v_j is the position vector of vertex j, β is the normal deviation control factor, and n_i is the normal vector of vertex i.
8. An intelligent AI-based orthopedic 3D printing model construction device, characterized in that the device is used to execute the intelligent AI-based orthopedic 3D printing model construction method according to any one of claims 1-7, the device comprising:
an image data acquisition module, used to acquire a user database of a user, configure an image imaging scheme based on the user database, perform multi-view imaging of the user based on the image imaging scheme, establish an image dataset, and save the image dataset in DICOM format;
a three-dimensional registration module, used to establish a three-dimensional coordinate system, place the image dataset into the three-dimensional coordinate system, perform image registration with the image coordinates in the three-dimensional coordinate system, and establish a registration mapping;
a bone segmentation module, used to perform bone segmentation of the preprocessed image dataset after preprocessing the image dataset and establish a segmentation result, wherein the segmentation result carries a segmentation confidence identifier;
a fusion reconstruction identification module, used to perform image fusion reconstruction according to the registration mapping and the segmentation confidence identifier, generate a three-dimensional model, and identify the position complexity of the three-dimensional model;
a placement support module, used to perform printing placement fitting based on the three-dimensional model, determine a gravity direction, perform geometric shape analysis of the three-dimensional model in the gravity direction, and determine a preliminary support area;
a support optimization module, used to perform selection optimization of support points based on the position complexity and the preliminary support area, and establish a selection optimization result;
an entity execution module, used to optimize the three-dimensional model according to the selection optimization result, and establish an orthopedic 3D printing model of the user.
CN202411884589.6A 2024-12-20 2024-12-20 Orthopedics 3D printing model construction method and device based on intelligent AI Pending CN119339006A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202411884589.6A CN119339006A (en) 2024-12-20 2024-12-20 Orthopedics 3D printing model construction method and device based on intelligent AI

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202411884589.6A CN119339006A (en) 2024-12-20 2024-12-20 Orthopedics 3D printing model construction method and device based on intelligent AI

Publications (1)

Publication Number Publication Date
CN119339006A true CN119339006A (en) 2025-01-21

Family

ID=94265329

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202411884589.6A Pending CN119339006A (en) 2024-12-20 2024-12-20 Orthopedics 3D printing model construction method and device based on intelligent AI

Country Status (1)

Country Link
CN (1) CN119339006A (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104091347A (en) * 2014-07-26 2014-10-08 刘宇清 Intracranial tumor operation planning and simulating method based on 3D print technology
US20180165867A1 (en) * 2016-11-16 2018-06-14 Terarecon, Inc. System and method for three-dimensional printing, holographic and virtual reality rendering from medical image processing
CN116229023A (en) * 2023-01-09 2023-06-06 浙江钧控智能科技有限公司 Human body three-dimensional curved surface modeling method based on 3D vision
CN117392328A (en) * 2023-12-07 2024-01-12 四川云实信息技术有限公司 Three-dimensional live-action modeling method and system based on unmanned aerial vehicle cluster
CN118608709A (en) * 2024-05-31 2024-09-06 重庆交通大学 Real-time 3D modeling of earthwork engineering, calculation of earthwork quantity and construction progress monitoring method based on drone measurement
CN118781012A (en) * 2024-09-10 2024-10-15 南京晨新医疗科技有限公司 3D ultra-high-definition fluorescence medical endoscope imaging method and device


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Xin Tian, et al.: "Optimal Transport-Based Graph Matching for 3D Retinal OCT Image Registration", IEEE, 18 October 2022 (2022-10-18) *
Xu Yang et al.: "Construction and application of an augmented reality three-dimensional image navigation platform for laparoscopic liver surgery", Chinese Journal of Clinical Medicine, vol. 30, no. 1, 25 February 2023 (2023-02-25) *

Similar Documents

Publication Publication Date Title
CN112037200B (en) A method for automatic recognition and model reconstruction of anatomical features in medical images
CN110189352B (en) Tooth root extraction method based on oral cavity CBCT image
CN111311655B (en) Multi-mode image registration method, device, electronic equipment and storage medium
CN112885453A (en) Method and system for identifying pathological changes in subsequent medical images
CN109493346A (en) It is a kind of based on the gastric cancer pathology sectioning image dividing method more lost and device
CN112614169B (en) 2D/3D spine CT (computed tomography) level registration method based on deep learning network
US8867804B2 (en) Method and apparatus for automatically generating trim lines for cranial remodeling devices
CN100421128C (en) Method and image processing system for segmenting tomographic image data
CN107665497A (en) In a kind of medical image calculate ambition than method
US12106856B2 (en) Image processing apparatus, image processing method, and program for segmentation correction of medical image
WO2024021523A1 (en) Graph network-based method and system for fully automatic segmentation of cerebral cortex surface
CN118279302A (en) Three-dimensional reconstruction detection method and system for brain tumor image
EP2689344A2 (en) Knowledge-based automatic image segmentation
CN117934689B (en) Multi-tissue segmentation and three-dimensional rendering method for fracture CT image
EP3933757A1 (en) Method of determining clinical reference points and pre-surgical planning
CN111369662A (en) Three-dimensional model reconstruction method and system for blood vessels in CT (computed tomography) image
CN113962957A (en) Medical image processing method, bone image processing method, device and equipment
CN119339006A (en) Orthopedics 3D printing model construction method and device based on intelligent AI
CN112562070A (en) Craniosynostosis operation cutting coordinate generation system based on template matching
US7792360B2 (en) Method, a computer program, and apparatus, an image analysis system and an imaging system for an object mapping in a multi-dimensional dataset
CN118967950B (en) Three-dimensional image guiding correction planning method, system, device and medium
CN118866251B (en) Medical image-based orthopedic plan generation method, device and storage medium
KR102689375B1 (en) Skeleton estimate apparatus using multiple x-ray views and method thereof
Ting Shape Statistical Model-Based Regression for Predicting Anatomical Landmarks
CN119360049A (en) Model training method and device for extracting human skeleton characteristics

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination