CN119339006A - Orthopedics 3D printing model construction method and device based on intelligent AI - Google Patents
- Publication number
- CN119339006A (application CN202411884589.6A)
- Authority
- CN
- China
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Abstract
The invention discloses an orthopedic 3D printing model construction method and device based on intelligent AI, relating to the technical field of image processing. The method comprises: collecting a user database, configuring an image imaging scheme, and executing multi-view imaging to establish a DICOM-format image dataset; placing the image dataset in a three-dimensional coordinate system and establishing a registration mapping; preprocessing the image dataset and executing bone segmentation with segmentation confidence identification; performing image fusion reconstruction according to the registration mapping and the segmentation confidence to generate a three-dimensional model with identified position complexity; performing print placement fitting based on the three-dimensional model and determining the gravity direction; analyzing the geometry to determine a preliminary support area; optimizing the selection of support points according to the position complexity and the preliminary support area to establish a selection optimization result; and optimizing the three-dimensional model according to the selection optimization result to generate the user's orthopedic 3D printing model. The method thereby improves modeling efficiency and precision as well as printing stability.
Description
Technical Field
The invention relates to the technical field of image processing, in particular to an orthopedic 3D printing model construction method and device based on intelligent AI.
Background
3D printing technology, as applied in the medical field and particularly in orthopedic surgery, helps doctors observe a patient's bone structure more intuitively, perform preoperative planning and simulation, and improve the accuracy and success rate of surgery. Existing orthopedic 3D printing model construction methods generally rely on fixed parameter modes assisted by manual operation, and suffer from long modeling times, low precision, and poor printing stability.
Disclosure of Invention
The invention provides an orthopedic 3D printing model construction method and device based on intelligent AI, which are used for solving the technical problems of long modeling time consumption, low precision and poor printing stability in the prior art, and realizing the technical effects of improving modeling efficiency and precision and improving printing stability.
In a first aspect, the present invention provides an orthopedic 3D printing model construction method based on intelligent AI, wherein the method comprises:
Acquiring a user database of a user; configuring an image imaging scheme based on the user database; executing multi-view imaging of the user based on the image imaging scheme to establish an image dataset, and storing the image dataset in DICOM format; establishing a three-dimensional coordinate system, placing the image dataset into the three-dimensional coordinate system, carrying out image registration using image coordinates in the three-dimensional coordinate system, and establishing a registration mapping; preprocessing the image dataset and executing skeleton segmentation of the preprocessed image dataset to establish segmentation results, wherein the segmentation results carry segmentation confidence identifications; carrying out image fusion reconstruction according to the registration mapping and the segmentation confidence identifications to generate a three-dimensional model, and marking the position complexity of the three-dimensional model; carrying out print placement fitting based on the three-dimensional model and determining the gravity direction; carrying out geometric analysis of the three-dimensional model in the gravity direction to determine a preliminary support area; carrying out selection optimization of support points based on the position complexity and the preliminary support area to establish a selection optimization result; and optimizing the three-dimensional model according to the selection optimization result to establish an orthopedic 3D printing model of the user.
In a second aspect, the present invention further provides an orthopedic 3D printing model construction device based on intelligent AI, wherein the device includes:
The image data acquisition module is used for acquiring a user database of a user, configuring an image imaging scheme based on the user database, executing multi-view imaging of the user based on the image imaging scheme, establishing an image data set, and storing the image data set in a DICOM format.
The three-dimensional registration module is used for establishing a three-dimensional coordinate system, placing the image data set into the three-dimensional coordinate system, carrying out image registration by using the image coordinates in the three-dimensional coordinate system, and establishing registration mapping.
And the skeleton segmentation module is used for preprocessing the image dataset, carrying out skeleton segmentation on the preprocessed image dataset, and establishing a segmentation result, wherein the segmentation result carries a segmentation confidence identification.
And the fusion reconstruction identification module is used for carrying out image fusion reconstruction according to the registration mapping and the segmentation trust identification, generating a three-dimensional model and identifying the position complexity of the three-dimensional model.
And the placement support module is used for performing printing placement fitting based on the three-dimensional model, determining the gravity direction, performing geometric analysis of the three-dimensional model in the gravity direction, and determining a preliminary support area.
The support optimization module is used for carrying out selection optimization on the support points based on the position complexity and the preliminary support area, and establishing a selection optimization result.
And the entity execution module is used for optimizing the three-dimensional model according to the selection optimizing result and establishing an orthopedics 3D printing model of the user.
The invention discloses an orthopedic 3D printing model construction method and device based on intelligent AI. The method comprises: collecting user database information of a user; making an image imaging scheme based on the database; executing multi-view image imaging of the user to generate an image dataset and storing it as a DICOM-format file; establishing a three-dimensional coordinate system, placing the image dataset into the coordinate system, and carrying out image registration using image coordinates in the three-dimensional coordinate system to form a registration mapping relation; preprocessing the image dataset and executing a skeleton segmentation operation on the preprocessed images to generate segmentation results; fusing and reconstructing the images according to the registration mapping and the segmentation confidence identifications to generate a three-dimensional model, and identifying the spatial position complexity of the three-dimensional model; carrying out print placement fitting of the three-dimensional model and determining a gravity direction; carrying out geometric analysis based on the gravity direction to determine a preliminary support area; optimizing the selection of support points based on the position complexity and the preliminary support area to generate a selection optimization result; and optimizing the three-dimensional model according to the selection optimization result to finally generate the user's orthopedic 3D printing model. The method and device solve the technical problems of long modeling time, low precision, and poor printing stability, improving modeling efficiency, precision, and printing stability.
Drawings
Fig. 1 is a schematic flow chart of an orthopedic 3D printing model construction method based on intelligent AI.
Fig. 2 is a schematic structural diagram of the orthopedic 3D printing model construction device based on intelligent AI.
Reference numerals illustrate an image data acquisition module 11, a three-dimensional registration module 12, a skeleton segmentation module 13, a fusion reconstruction identification module 14, a placement support module 15, a support optimization module 16, and an entity execution module 17.
Detailed Description
The technical scheme provided by the embodiment of the invention aims to solve the technical problems of long modeling time consumption, low precision and poor printing stability in the prior art, and adopts the following overall thought:
First, a user database of the user is acquired and obtained, the database containing basic information and medical image data of the user. Based on the user database, an image imaging scheme is configured, which may include image acquisition modes of different angles, such as CT, MRI, etc.
The user is then imaged from multiple view angles based on the configured imaging scheme, and the acquired image dataset is saved in DICOM format for subsequent processing and analysis. Next, a three-dimensional coordinate system is established, the acquired image dataset is placed into it, and image registration is carried out using the image coordinates, ensuring that images from different view angles correspond correctly and form a registration mapping relation. After the image dataset is preprocessed, a skeleton segmentation operation extracts skeletal structure information, and the confidence of each segmentation result, that is, the accuracy of each segmented region, is marked. Images from different view angles are then fused and reconstructed according to the registration mapping and the segmentation confidence, generating a three-dimensional skeleton model of the user. Meanwhile, based on the segmentation results and the image complexity, the position complexity of the three-dimensional model is identified, which is important for support-point selection and 3D printing. Next, print placement is fitted based on the generated three-dimensional model, the gravity direction during printing is determined, and geometric analysis identifies areas that may need support, determining the preliminary support area. The position complexity of the three-dimensional model and the preliminary support area are then combined to optimize the selection of support points, finding the most suitable support-point layout and generating a support-point selection optimization result. Finally, the three-dimensional model is optimized based on this result, generating the user's orthopedic 3D printing model.
The model will be used for surgical planning, prosthesis design or other orthopedics related applications.
The foregoing aspects will be better understood from the following detailed description taken in conjunction with the accompanying drawings. It should be apparent that the described embodiments are only some, not all, embodiments of the present invention, and the exemplary embodiments serve only to explain the invention, not to limit it. All other embodiments obtained by those skilled in the art based on these embodiments without inventive effort fall within the scope of the invention. It should be noted that, for convenience of description, only parts related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flow chart of an orthopedic 3D printing model construction method based on intelligent AI, wherein the method comprises the following steps:
Acquiring a user database of a user, configuring an image imaging scheme based on the user database, executing multi-view imaging of the user based on the image imaging scheme, establishing an image dataset, and storing the image dataset in a DICOM format.
Specifically, the user database is first queried to obtain the user data of the target user; this data reflects the user's characteristic information and provides a reference basis for imaging. The user database includes the user's body type, posture, the site to be modeled, modeling precision requirements, and similar data.
Specifically, an image imaging scheme is configured based on a user database, related information such as bone position, body shape, posture, sex, age and the like of a user is extracted from the user database to serve as basic data for making the image imaging scheme, and then specific imaging requirements are determined according to health conditions or bone requirements of the user. For example, whether certain regions need to be focused on, whether there is a particular imaging angle or resolution requirement, etc. Then, parameters of the imaging device including device position (height and distance of the imaging device), imaging angle, radiation dose, etc. are adjusted according to the user's body structure data (e.g., bone position, body shape) to ensure coverage of the region of interest to be acquired and to provide acquisition resolution satisfying resolution requirements. Through the steps, the image imaging scheme is ensured to be suitable for the personalized requirements and physical characteristics of the user, so that clear and accurate images are obtained.
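As an illustrative sketch of this configuration step, the hypothetical function below maps user-database attributes to imaging parameters. All field names, the modality rule, and the numeric values are assumptions for illustration, not taken from the patent.

```python
# Hypothetical sketch of imaging-scheme configuration; the field names,
# modality rule, and numeric values are illustrative assumptions.

def configure_imaging_scheme(user):
    """Map user-database attributes to an imaging scheme."""
    return {
        # CT resolves bone best; fall back to MRI for soft-tissue targets.
        "modality": "CT" if user.get("target") == "bone" else "MRI",
        "views": ["front", "back", "left", "right", "oblique"],
        # Higher modeling-precision requirements call for thinner slices.
        "slice_thickness_mm": 0.5 if user.get("precision") == "high" else 1.0,
        # Device height tracks the user's stature so the region of
        # interest stays centered in the field of view.
        "device_height_cm": round(user.get("height_cm", 170) * 0.55),
    }

scheme = configure_imaging_scheme(
    {"target": "bone", "precision": "high", "height_cm": 180})
```

A real configuration would additionally carry dose limits and positioning constraints taken from the imaging device's own interface.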
Further, the user is imaged from multiple view angles according to the imaging scheme, capturing image data from different angles and directions to ensure comprehensive and accurate image information. Illustratively, images are captured from the front, back, left, right, oblique, and other angles, and all captured image data are classified and ordered by imaging sequence or view-angle sequence to establish a complete image dataset. The image dataset contains images from all view angles, ensuring that the target region can be fully reconstructed and viewed from multiple perspectives in subsequent analysis.
Specifically, the processed image dataset is converted and saved into DICOM (Digital Imaging and Communications in Medicine) format. The DICOM file contains the necessary user information (e.g., name, ID), imaging device parameters, imaging time, and other relevant metadata for subsequent access, retrieval, and analysis.
Establishing a three-dimensional coordinate system, placing the image dataset into the three-dimensional coordinate system, carrying out image registration by using image coordinates in the three-dimensional coordinate system, and establishing a registration mapping.
Specifically, first, a three-dimensional coordinate system is established as a reference system for image data. The coordinate system is constructed based on the physical position of the device, the center point of the imaging part or other space with fixed reference points, and each axis of the three-dimensional coordinate system respectively represents X, Y, Z three directions, so that the position of each image in the three-dimensional space can be accurately positioned.
In particular, an image dataset resulting from multi-view imaging is read. And determining the initial position and direction of each image in the three-dimensional coordinate system according to the specific view angle, the position information and the imaging time of the imaging device corresponding to each image data, and placing each image in the three-dimensional coordinate system according to the position information.
Specifically, the multi-view image data in the three-dimensional coordinate system are initial image data restored according to the acquisition directions. In actual acquisition, changes in the user's posture or motion can leave the multi-view image data imperfectly fitted to one another, making it difficult to form accurate bone image data; further image registration is therefore needed to align images from different view angles and establish a consistent three-dimensional model. The purpose of registration is to correct differences in position, rotation, scaling, and the like between the multi-view images through a transformation matrix. Illustratively, image registration in the three-dimensional coordinate system is performed with registration algorithms such as RANSAC, 4PCS, and ICP.
Illustratively, key feature points common to neighboring images are first extracted from each image; feature points may be determined from physical structures, edges, textures, or other salient features in the images. Then, the 4PCS algorithm extracts sets of four points from images at different view angles whose geometric relationships remain consistent across the images. The spatial positions of these point sets in adjacent images are compared to obtain a similarity transformation between the images. Finally, the optimal transformation matrix (rotation, translation, and scaling) is solved from the matched key-point sets; this matrix transforms one image so that it aligns with the images from the other view angles.
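The matching step above ends by solving an optimal transformation from corresponding point sets. For the rotation-and-translation part, a standard closed-form solution is the Kabsch (SVD) algorithm; the sketch below illustrates that step under the assumption of exact point correspondences and is not necessarily the patent's exact solver.

```python
import numpy as np

def rigid_transform(P, Q):
    """Best-fit rotation R and translation t mapping points P onto Q
    (Kabsch algorithm); P, Q are (n, 3) arrays of matched keypoints."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    D = np.diag([1.0, 1.0, d])
    R = Vt.T @ D @ U.T
    t = cq - R @ cp
    return R, t

# Recover a known 30-degree rotation about z plus a translation.
a = np.deg2rad(30)
R0 = np.array([[np.cos(a), -np.sin(a), 0],
               [np.sin(a),  np.cos(a), 0],
               [0,          0,         1]])
P = np.random.default_rng(0).normal(size=(10, 3))
Q = P @ R0.T + np.array([1.0, 2.0, 3.0])
R, t = rigid_transform(P, Q)
```

With noisy correspondences the same solve is typically wrapped in a RANSAC loop, as the algorithms named above suggest.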
Further, after the optimal transformation matrix is obtained, it is applied to transform the images so that the entire image dataset is registered and aligned to the same reference frame in the three-dimensional coordinate system, and a registration mapping relation is established for each image, recording its original position, the transformation matrix applied during registration, and its registered coordinate position in three-dimensional space.
And after preprocessing the image dataset, executing skeleton segmentation of the preprocessed image dataset, and establishing a segmentation result, wherein the segmentation result is provided with a segmentation trust degree mark.
Specifically, image segmentation is used to separate the skeletal regions in the preprocessed image dataset from other tissues (e.g., muscle, fat). Segmentation methods include threshold segmentation, region growing, level-set methods, and deep learning methods.
Illustratively, convolutional neural networks (CNNs) are widely used for image segmentation tasks. First, a set of labeled training data is prepared, such as a set of CT images with corresponding bone segmentation labels, where each label is a Boolean value indicating whether a pixel belongs to bone. The CNN is then trained on this data, involving forward propagation, backward propagation, and parameter updates (e.g., stochastic gradient descent). After training, the network can segment new CT images and output segmentation results with segmentation confidence identifications: the preprocessed image is input to the network, which outputs a segmented image in which the value of each pixel represents the probability that the pixel belongs to bone (the segmentation confidence).
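The inference step can be illustrated without the network itself: below, `probs` stands in for the CNN's per-pixel bone-probability output, and both the threshold and the confidence definition (distance from the decision boundary) are assumptions for illustration.

```python
# Sketch: derive a binary bone mask and per-pixel segmentation
# confidence from a CNN-style probability map (values in [0, 1]).
# The network itself is omitted; `probs` stands in for its output.

def segment_with_confidence(probs, threshold=0.5):
    mask = [[p > threshold for p in row] for row in probs]
    # Confidence: distance from the decision boundary, rescaled to [0, 1];
    # pixels near the threshold get low confidence.
    conf = [[abs(p - threshold) * 2 for p in row] for row in probs]
    return mask, conf

probs = [[0.9, 0.1], [0.5, 0.75]]
mask, conf = segment_with_confidence(probs)
```

A per-region confidence identification can then be obtained by averaging `conf` over each connected component of `mask`.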
Through the steps of the method, the skeleton of the image data set is segmented, so that pixels belonging to the skeleton in the image data can be accurately identified, and the construction precision of a follow-up three-dimensional model is improved.
And carrying out image fusion reconstruction according to the registration mapping and the segmentation confidence identification, generating a three-dimensional model, and identifying the position complexity of the three-dimensional model.
Specifically, the multi-view image data is aligned into a unified three-dimensional coordinate system based on the established registration map. Through the registration mapping, the skeleton region of each view image can be accurately corresponding to the same three-dimensional space position, and then the segmentation result of each view is weighted by utilizing the segmentation trust degree identification, wherein the part with higher segmentation trust degree occupies larger weight in the three-dimensional model, thereby improving the reliability and the precision of the model. Then, the segmented bone images for each view are fused.
Optionally, a voxel-level fusion method combines the segmentation results of each view angle, eliminating the imperfect fitting caused by changes in the user's posture or motion, and converts the fused image data into three-dimensional voxel data.
Optionally, based on voxel data, a grid structure of the three-dimensional model is generated by Marching Cubes or other surface reconstruction algorithms, and the skeleton structure is presented in the form of a three-dimensional geometric model. Preferably, in the generated three-dimensional model, smoothing, denoising or hole filling is adaptively performed to improve the precision and quality of the model.
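The smoothing mentioned above can be sketched as a single Laplacian-smoothing pass, blending each vertex toward the centroid of its neighbors; the adjacency format and blend factor `lam` are illustrative assumptions.

```python
# Minimal Laplacian-smoothing sketch: each vertex is blended toward
# the centroid of its neighbors (lam controls smoothing strength).

def laplacian_smooth(verts, neighbors, lam=0.5):
    out = []
    for i, v in enumerate(verts):
        nbrs = neighbors[i]
        cx = sum(verts[j][0] for j in nbrs) / len(nbrs)
        cy = sum(verts[j][1] for j in nbrs) / len(nbrs)
        cz = sum(verts[j][2] for j in nbrs) / len(nbrs)
        out.append((v[0] + lam * (cx - v[0]),
                    v[1] + lam * (cy - v[1]),
                    v[2] + lam * (cz - v[2])))
    return out

# A spike at the apex surrounded by flat neighbors is pulled down.
verts = [(0.0, 0.0, 1.0), (1.0, 0.0, 0.0), (-1.0, 0.0, 0.0),
         (0.0, 1.0, 0.0), (0.0, -1.0, 0.0)]
neighbors = {0: [1, 2, 3, 4], 1: [0], 2: [0], 3: [0], 4: [0]}
smoothed = laplacian_smooth(verts, neighbors)
```

Repeating the pass, or adapting `lam` per region as the text suggests, trades surface fidelity against noise removal.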
Specifically, the complexity of different parts of the model is quantified by means of curvature analysis, shape complexity analysis or density analysis and the like, and the position complexity of a plurality of positions is obtained. The higher complexity of the parts means that the areas are more prone to errors in the actual imaging or reconstruction process, and meanwhile, the printing difficulty is higher in the subsequent 3D printing execution process, and fine support setting and printing slicing are required.
Through the steps, a three-dimensional skeleton model with high precision and definite position complexity identification can be generated, and a reliable model foundation is provided for subsequent printing analysis and planning.
In some embodiments, performing image fusion reconstruction according to the registration map and the segmentation confidence identification further comprises:
the weights are configured according to the registration mapping, as follows:

w_i = (Q_i × C_i) / Σ_{j=1}^{N} (Q_j × C_j)

wherein w_i characterizes the weight of the i-th view image after registration mapping, Q_i characterizes the image quality score of the i-th view image, C_i characterizes the segmentation confidence identification of the i-th view image, N characterizes the number of view angles to which the registration mapping corresponds, and i is the view index.

And carrying out pixel fusion according to the configured weights, as follows:

P_fused = Σ_{i=1}^{N} w_i × P_i

wherein P_fused characterizes the fused pixel value, and P_i is the pixel value at the corresponding position of the i-th view image.
And finishing image fusion reconstruction according to the fused pixel values.
Optionally, before image fusion reconstruction, the image of each view is given a corresponding weight, which depends on the quality of the image and the confidence level of segmentation. Specifically, firstly, based on the weight configuration formula, the weights of fusion of a plurality of view images are calculated, wherein the higher the image quality is, the larger the segmentation confidence identification is, the higher the corresponding weights are, and then, pixel fusion is carried out according to the configured weights. Specifically, the fusion value for each pixel location is derived from a weighted average of all the pixel values for the corresponding location and their weights, ensuring that pixels of the high quality, high segmentation confidence image are prioritized. And finally, stacking the fused multi-view images to generate a three-dimensional image or directly carrying out pixel fusion in a three-dimensional space to finish image fusion reconstruction. Through the steps, the quality and the trust degree of each view angle image are ensured to be considered in the image fusion process, so that a more accurate fusion result is generated.
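Under the weighting described above (each view's weight grows with its image quality score and segmentation confidence, normalized over all views), the fusion step can be sketched as follows; the function name, array shapes, and example values are illustrative.

```python
import numpy as np

def fuse_pixels(pixels, quality, confidence):
    """Weighted pixel fusion: w_i proportional to Q_i * C_i, normalized
    over views; fused value is the weighted sum of per-view pixels."""
    P = np.asarray(pixels, dtype=float)      # shape: (views, H, W)
    w = np.asarray(quality, dtype=float) * np.asarray(confidence, dtype=float)
    w = w / w.sum()                           # normalize weights over views
    # Contract the view axis: sum_i w_i * P_i at every pixel position.
    return np.tensordot(w, P, axes=1)

# Two 1x2 views: the higher-confidence view dominates the result.
pixels = [[[0.0, 8.0]], [[4.0, 0.0]]]
quality = [1.0, 1.0]
confidence = [1.0, 3.0]
fused = fuse_pixels(pixels, quality, confidence)
```

Here the weights come out as [0.25, 0.75], so pixels from the high-confidence view are prioritized, as the text requires.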
In some implementations, performing image fusion reconstruction according to the registration mapping and the segmentation confidence identifier, generating a three-dimensional model, and identifying a position complexity of the three-dimensional model, further including:
The fused data amount of the image fusion reconstruction is extracted; an adaptation evaluation is carried out according to the fused data amount and the position complexity; an adaptation abnormality identifier is established; an additional acquisition instruction is generated based on the adaptation abnormality identifier; the imaging device is controlled by the additional acquisition instruction to perform additional data acquisition; and image fusion reconstruction compensation is carried out according to the additional acquisition result.
Specifically, based on a three-dimensional sliding window, the fusion data amount of a plurality of positions of the three-dimensional model and the corresponding position complexity are extracted in a sliding way, wherein the three-dimensional sliding window is a three-dimensional space with a preset size, and can be a limited space with a size determined by three-coordinate directions or an interval space with any two coordinates and any single-coordinate direction, and the three-dimensional sliding window is determined based on the shape characteristics of a target skeleton. And then, calculating the position complexity corresponding to the acquired fusion data quantity, wherein the position complexity comprises the sum of the complexity of a plurality of unit parts of the three-dimensional model between the three-dimensional sliding windows. And performing adaptation evaluation based on the acquired fusion data amount and the corresponding position complexity, and generating an adaptation abnormality identification if the ratio of the fusion data amount to the position complexity does not meet the preset adaptation constraint.
For example, if the ratio of the amount of fused data to the complexity of the location is less than the lower limit of the preset fit constraint, the amount of fused data for that location is too small relative to its complexity, the modeling quality of the region is inadequate, and more data is needed to provide a more accurate reconstruction.
Further, an additional acquisition instruction is generated according to the adaptation anomaly identification, the imaging device is guided to acquire more data in a corresponding area of the adaptation anomaly identification, and image fusion and reconstruction are carried out again according to an acquired additional data acquisition result, so that the image quality of the position is improved.
In this way, by taking position complexity into account, the quality of the three-dimensional model after image fusion reconstruction is kept balanced across different positions, reconstruction errors caused by insufficient data at complex positions are avoided, and every position is guaranteed sufficient data for accurate image fusion and reconstruction, thereby improving the final image quality.
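By way of illustration only, the adaptation evaluation described above can be sketched as follows; the function name, the constraint bounds 0.8 and 1.5, and the sample values are illustrative assumptions, not part of the disclosure:

```python
def adaptation_check(fusion_amounts, complexities, lower=0.8, upper=1.5):
    """Flag sliding-window positions whose fusion-data / complexity ratio
    violates the preset adaptation constraint; flagged positions would
    trigger an additional acquisition instruction."""
    anomalies = []
    for idx, (amount, complexity) in enumerate(zip(fusion_amounts, complexities)):
        ratio = amount / complexity
        if ratio < lower:            # too little data for this complexity
            anomalies.append((idx, "insufficient_data"))
        elif ratio > upper:          # redundant data for this complexity
            anomalies.append((idx, "redundant_data"))
    return anomalies

# Positions 0 and 2 violate the constraint; position 1 is adequate.
flags = adaptation_check([40, 100, 300], [100, 100, 100])
```

Positions returned by the check would then be mapped back to imaging-device regions for re-acquisition.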
And performing printing placement fitting based on the three-dimensional model, determining the gravity direction, performing geometric analysis of the three-dimensional model according to the gravity direction, and determining a preliminary supporting area.
Specifically, firstly, the generated three-dimensional model is placed in a virtual printing environment, the printing placement position and angle of the model are simulated, and the optimal placement angle and orientation of the model on the printing platform are determined through fitting analysis, so that the model's stability during printing is ensured while its support requirement is minimized.
Specifically, after fitting is completed, the gravity direction of the model, namely the action direction of gravity in the actual printing process, is determined, so that the stress condition and the potential deformation area of the model in the printing process can be determined.
Specifically, based on the geometry of the three-dimensional model, an analysis is performed in conjunction with the gravity direction. Emphasis is placed on identifying areas of the model that may deform under gravity or that require additional support; examples include protrusions, overhanging portions, surfaces with large inclination angles, and complex structures of the model.
Then, according to the analysis result of the geometric shape, the preliminary support area of the model is determined. The preliminary support area is used to provide additional support during printing, preventing collapse or deformation of the model due to gravity or structural weakness. In addition, the choice of support area also takes into account how to reduce the difficulty of removing the support after printing, thereby optimizing printing efficiency.
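A minimal sketch of the gravity-based geometric analysis above, under the common assumption that a face is a downward overhang when its outward normal lies within a threshold angle of the gravity direction; the 45-degree threshold and the function name are illustrative, not taken from the disclosure:

```python
import numpy as np

def preliminary_support_faces(face_normals, gravity=(0.0, 0.0, -1.0),
                              overhang_deg=45.0):
    """Return a boolean mask marking faces whose outward normal points
    within `overhang_deg` of the gravity direction, i.e. downward
    overhangs that belong to the preliminary support area."""
    g = np.asarray(gravity, dtype=float)
    g = g / np.linalg.norm(g)
    n = np.asarray(face_normals, dtype=float)
    n = n / np.linalg.norm(n, axis=1, keepdims=True)
    # Dot product close to 1 means the face points straight down.
    return n @ g > np.cos(np.radians(overhang_deg))

normals = [[0.0, 0.0, -1.0],   # faces straight down -> needs support
           [0.0, 0.0, 1.0],    # faces up -> no support
           [1.0, 0.0, 0.0]]    # vertical wall -> no support
mask = preliminary_support_faces(normals)
```

The masked faces would then be clustered into connected preliminary support areas for the subsequent selection optimization.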
And carrying out selective optimizing of the supporting points based on the position complexity and the preliminary supporting area, and establishing a selective optimizing result.
In some embodiments, the selecting and optimizing the supporting point based on the position complexity and the preliminary supporting area, and establishing a selecting and optimizing result further includes:
The method comprises: taking each preliminary supporting area as an independent area; establishing an area objective function based on the area information of the preliminary supporting area, wherein the evaluation characteristics of the area objective function comprise the number of supporting points, material consumption, stability, and post-processing difficulty; performing area evaluation on the independent areas and establishing area associations among the independent areas, wherein the area associations comprise area cooperative associations and area competitive associations, and the area evaluation comprises spatial proximity analysis, mechanical coupling analysis, and material sharing analysis; establishing limit constraints on the supporting points and configuring a solution space with the limit constraints; taking the area objective function as the evaluation function, performing selection optimization of the supporting points in the solution space; performing iterative compensation of the selection optimization through the area associations; and establishing a selection optimization result.
Specifically, the dimensions of the grading index for the preliminary supporting area comprise the number of supporting points, material consumption, stability, and post-processing difficulty. The smaller the number of supporting points, the smaller the corresponding support contact area and hence the smaller the impact on the model surface quality; correspondingly, material consumption is reduced and less waste is generated when the support is removed. Stability is another important factor in evaluating the support structure: the support structure should keep the model stable throughout the printing process and prevent it from moving or collapsing, and stability is determined by the rigidity of the support structure and the strength of its connection to the model body. Post-processing includes removing the support structure and cleaning and repairing the model surface; if the support structure is too complex or difficult to remove, post-processing difficulty increases, which increases manufacturing time and cost.
In particular, multiple independent regions may have a cooperative association (i.e., they work together in some way) or a competitive association (i.e., they conflict over resources or space). By performing spatial proximity analysis, mechanical coupling analysis, and material sharing analysis on the plurality of independent regions, the region associations of the independent regions can be obtained.
Wherein the spatial proximity analysis is used to evaluate the spatial relationships between regions, such as the distance and relative position between individual regions, helping to determine which areas may need to share a support structure and which areas may interfere with each other during printing. The mechanical coupling analysis is used to evaluate the mechanical interactions, such as stress and deformation, that may occur between the regions during printing, helping to determine the forces the support structure must bear and to predict printing problems that may occur. The material sharing analysis is used to evaluate the material requirements of the regions during printing and possible material sharing strategies. For example, if two adjacent independent areas have the same or similar orientation, a shared support structure can be devised to increase support stability while reducing material usage and printing time. Through these evaluations, the influence relationships among the different independent areas are established, so that the 3D printing process is optimized, efficiency is improved, cost is reduced, and printing quality is improved.
Alternatively, the limit constraints are constraints imposed on the variables of the optimization problem, defining the boundaries of the solution space; exemplary limit constraints include the print platform size, the support form (e.g., tree, geometric curve, hybrid, etc.), the support spacing, and the support generation threshold. The solution space is the set of all possible solutions; once the limit constraints are set, it represents the region admissible under those constraints. Preferably, the features of the solution space are described in connection with graphical and mathematical models. By setting limit constraints and configuring the solution space, the scope of the optimization problem can be effectively narrowed and solution efficiency improved.
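The constrained selection optimization described above can be sketched as follows; the objective weights, the candidate encoding (number of points, material, stability, post-processing difficulty), and the platform limit are illustrative assumptions rather than values from the disclosure:

```python
def region_objective(candidate, w=(1.0, 0.5, 2.0, 0.8)):
    """Illustrative area objective: higher stability raises the score,
    while more support points, material, or removal effort lower it."""
    n_points, material, stability, postproc = candidate
    w_n, w_m, w_s, w_p = w
    return w_s * stability - w_n * n_points - w_m * material - w_p * postproc

def select_support(candidates, max_points=10):
    """Limit constraint: discard candidates outside the solution space
    (here, more support points than allowed), then pick the feasible
    candidate with the best objective value."""
    feasible = [c for c in candidates if c[0] <= max_points]
    return max(feasible, key=region_objective)

best = select_support([(4, 2.0, 8.0, 1.0),   # few points, stable, easy removal
                       (12, 5.0, 9.5, 2.0),  # violates max_points -> excluded
                       (6, 3.0, 6.0, 3.0)])
```

A real implementation would search a much larger solution space (e.g. with a metaheuristic) and feed the result into the iterative compensation step.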
In some implementations, the iterative compensation for selection optimization by region association further includes:
Establishing a cooperative grouping and a competitive grouping by the regional association, constructing a joint objective function by a regional objective function corresponding to the cooperative grouping, performing joint optimization of the cooperative grouping by the joint objective function, performing priority order sequencing on the competitive grouping, establishing a sequencing result, establishing order optimization based on the sequencing result, and completing iterative compensation according to the joint optimization and the order optimization.
Specifically, first, according to the inter-region relationships, the plurality of independent spaces are divided into cooperative groups and competitive groups. A cooperative group comprises independent spaces with a cooperative relationship, such as several model branches overhanging toward the same side; a competitive group comprises independent spaces with a competitive relationship, such as model branches overhanging in opposite directions, model parts that conflict in space or resources, or parts whose printing order may affect printing efficiency and quality.
Specifically, a joint objective function is constructed from the objective functions of the cooperative groups to obtain a printing strategy that maximizes these cooperative effects. The areas in the competitive groups are then prioritized, and optimization is performed based on this ordering result, for example deciding which areas should be printed first and how to arrange the printing order to minimize conflicts and waste of resources.
Furthermore, iterative compensation is performed according to the results of the joint optimization and the sequential optimization, and printing efficiency and quality are further improved by adjusting printing parameters or modifying the support design. By considering the relationships and interactions between the regions and their impact on the overall objective, the above method steps handle the complexity of the model and yield an optimal printing strategy.
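A minimal sketch of the grouping and ordering step above; the region/association encoding and the priority field are illustrative assumptions:

```python
def group_and_order(regions, associations):
    """Split regions into a cooperative group (to be jointly optimized)
    and competitive regions ordered by descending priority, as in the
    grouping step of the iterative compensation."""
    cooperative, competitive = [], []
    for region in regions:
        kind = associations[region["name"]]
        (cooperative if kind == "cooperative" else competitive).append(region)
    # Competitive regions are printed in descending priority so that
    # conflicts over space and resources are minimized.
    competitive.sort(key=lambda r: r["priority"], reverse=True)
    return cooperative, [r["name"] for r in competitive]

regions = [{"name": "A", "priority": 1}, {"name": "B", "priority": 3},
           {"name": "C", "priority": 2}]
assoc = {"A": "cooperative", "B": "competitive", "C": "competitive"}
coop, order = group_and_order(regions, assoc)
```

The cooperative group would then be optimized under a joint objective function, while the ordered list drives the sequential optimization.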
And optimizing the three-dimensional model according to the selection optimizing result, and establishing an orthopedics 3D printing model of the user.
In some embodiments, the method further comprises:
The method comprises: performing fixed-point position recognition of the model on the three-dimensional model; performing scale division based on the fixed-point position recognition result to establish M scale division results; performing spatial scale filtering from coarse scale to fine scale based on the M scale division results; after all scale division results have been filtered, performing filtering fusion of the scale division results; and updating the three-dimensional model according to the filtering fusion result.
Specifically, first, key points or feature points of the 3D model are identified as fixed-point positions, and scale division is performed based on them to obtain scale division results, where each scale division result comprises a series of sub-models or regions, each with its own scale or resolution. For example, some parts of the model may be partitioned at a coarse scale (low resolution) and other parts at a fine scale (high resolution). Illustratively, the fixed-point positions include points of curvature discontinuity, points of abrupt complexity change, and the like.
Specifically, the result of each scale division is filtered to eliminate noise and smooth data, and the geometric shape and structure of each scale division are optimized to improve printing efficiency and quality. Preferably, filtering is performed by adaptive parameter tuning to process different parts of the model at different scales to accommodate the complexity and diversity of the model.
Illustratively, the filtering parameters are adjusted according to the curvature or complexity of the different portions of the model to optimize the printing effect of each portion. For coarse-scale (less curvature or less complexity) parts of the model, larger filter parameters are used to smooth large geometries or structures, while for fine-scale (more curvature or more complexity) parts of the model, details are preserved by smaller filter parameters, improving the print quality of the model while improving the print efficiency.
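The curvature-dependent parameter adjustment above can be sketched as a simple rule; the coefficient values and the curvature threshold are illustrative assumptions:

```python
def adaptive_lambda(curvature, coarse=0.6, fine=0.1, threshold=0.5):
    """Choose the smoothing coefficient per model part: a large
    coefficient for coarse (low-curvature) parts smooths large
    geometry, a small one for fine (high-curvature) parts preserves
    detail."""
    return coarse if curvature < threshold else fine

# Low-curvature parts get strong smoothing, the high-curvature part is preserved.
lams = [adaptive_lambda(c) for c in (0.1, 0.4, 0.9)]
```

Each scale division would then be filtered with its own coefficient before the filtering fusion step.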
Further, the filtering results of all scale divisions are fused together, and the 3D model is then updated according to the fusion result to achieve the optimal printing effect. Through the above method steps, feature-point analysis and filtering are combined to perform multi-scale analysis and optimization of the 3D model, improving printing efficiency and quality.
In some implementations, spatial scale filtering from coarse scale to fine scale is performed based on the M scale division results, further comprising:
Spatial scale filtering is performed by the following formula:

$$\mathbf{v}_i' = \mathbf{v}_i + \lambda \cdot \frac{1}{|N(i)|} \sum_{j \in N(i)} w_{ij} \left( \mathbf{v}_j - \mathbf{v}_i \right) + \beta \, \mathbf{n}_i$$

wherein $\mathbf{v}_i'$ represents the new position vector of vertex $i$ after the filtering process, $\mathbf{v}_i$ is the position vector of vertex $i$, $\lambda$ is the smoothing coefficient, $N(i)$ denotes the set of vertices adjacent to vertex $i$, $|N(i)|$ denotes the number of adjacent vertices of vertex $i$, $w_{ij}$ is the weight between vertex $i$ and vertex $j$, characterizing the degree of influence of vertex $j$ on the position update of vertex $i$, $\mathbf{v}_j$ is the position vector of vertex $j$, $\beta$ is the normal deviation control factor, and $\mathbf{n}_i$ is the normal vector of vertex $i$.
In particular, the smoothing coefficient $\lambda$ determines the degree of position smoothing: a larger $\lambda$ moves the vertex closer to the average position of its adjacent vertices, while a smaller $\lambda$ retains more of the original position information. The normal deviation control factor $\beta$ adjusts the movement of the vertex along the normal direction and is used to modify the surface geometry.
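The vertex update described above, weighted Laplacian smoothing with a normal offset, can be sketched for a single vertex as follows; the function name and sample geometry are illustrative:

```python
import numpy as np

def filter_vertex(v_i, neighbors, n_i, lam=0.5, beta=0.0, weights=None):
    """One spatial-scale filtering step for vertex i:
    v_i' = v_i + lam * (1/|N(i)|) * sum_j w_ij (v_j - v_i) + beta * n_i."""
    v_i = np.asarray(v_i, dtype=float)
    neighbors = np.asarray(neighbors, dtype=float)
    weights = (np.ones(len(neighbors)) if weights is None
               else np.asarray(weights, dtype=float))
    # Weighted average of displacements toward adjacent vertices.
    avg = (weights[:, None] * (neighbors - v_i)).sum(axis=0) / len(neighbors)
    return v_i + lam * avg + beta * np.asarray(n_i, dtype=float)

v_new = filter_vertex([0.0, 0.0, 0.0],
                      [[1.0, 0.0, 0.0], [-1.0, 0.0, 0.0], [0.0, 2.0, 0.0]],
                      n_i=[0.0, 0.0, 1.0], lam=0.5, beta=0.1)
```

In a full implementation this update would be applied to every vertex of each scale division, with `lam` chosen per scale as described above.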
In summary, the orthopedic 3D printing model construction method based on intelligent AI provided by the invention has the following technical effects:
The method comprises the steps of acquiring the user database information of a user, formulating an image imaging scheme based on the database, executing multi-view image imaging of the user, generating an image dataset, storing the image dataset as DICOM-format files, establishing a three-dimensional coordinate system, placing the image dataset into the coordinate system, registering the images according to their image coordinates in the three-dimensional coordinate system to form a registration mapping relation, preprocessing the image dataset, executing a bone segmentation operation on the preprocessed images to generate a segmentation result with segmentation confidence identifiers, fusing and reconstructing the images according to the registration mapping and the segmentation confidence identifiers to generate a three-dimensional model, identifying the spatial position complexity of the three-dimensional model, performing printing placement fitting on the three-dimensional model, determining the gravity direction, performing geometric analysis based on the gravity direction, determining a preliminary supporting area, selecting and optimizing supporting points based on the position complexity and the preliminary supporting area, generating a selection optimization result, optimizing the three-dimensional model according to the selection optimization result, and finally generating the orthopedic 3D printing model of the user. Thereby, the technical effects of improving modeling efficiency and precision and improving printing stability are achieved.
Example two
Fig. 2 is a schematic structural diagram of the orthopedic 3D printing model construction device based on intelligent AI. For example, the flow schematic diagram of the intelligent AI-based orthopedic 3D printing model construction method of fig. 1 can be implemented by the structure shown in fig. 2.
Based on the same conception as the orthopedic 3D printing model construction method based on the intelligent AI in the embodiment, the orthopedic 3D printing model construction device based on the intelligent AI further comprises:
The image data acquisition module 11 is configured to acquire a user database of a user, configure an image imaging scheme based on the user database, perform multi-view imaging of the user based on the image imaging scheme, establish an image dataset, and store the image dataset in DICOM format.
The three-dimensional registration module 12 is configured to establish a three-dimensional coordinate system, place the image dataset into the three-dimensional coordinate system, perform image registration with image coordinates in the three-dimensional coordinate system, and establish a registration map.
And the skeleton segmentation module 13 is used for performing skeleton segmentation of the preprocessed image data set after preprocessing the image data set, and establishing a segmentation result, wherein the segmentation result is provided with a segmentation trust degree mark.
And the fusion reconstruction identification module 14 is used for carrying out image fusion reconstruction according to the registration mapping and the segmentation trust identification, generating a three-dimensional model and identifying the position complexity of the three-dimensional model.
And the placement support module 15 is used for performing printing placement fitting based on the three-dimensional model, determining the gravity direction, performing geometric analysis of the three-dimensional model in the gravity direction, and determining a preliminary support area.
And the support optimization module 16 is used for carrying out selection optimization of the support points based on the position complexity and the preliminary support area, and establishing a selection optimization result.
And the entity execution module 17 is used for optimizing the three-dimensional model according to the selection optimizing result and establishing an orthopedics 3D printing model of the user.
Wherein the fusion reconstruction identification module 14 comprises:
The registration mapping weight configuration unit is used for configuring the weights according to the registration mapping, with the formula as follows:

$$w_i = \frac{Q_i \cdot C_i}{\sum_{k=1}^{N} Q_k \cdot C_k}$$

wherein $w_i$ characterizes the weight of the $i$-th view image after registration mapping, $Q_i$ characterizes the image quality score of the $i$-th view image, $C_i$ characterizes the segmentation confidence identifier of the $i$-th view image, $N$ characterizes the number of views corresponding to the registration mapping, and $i$ is the view index.
The pixel fusion unit is used for carrying out pixel fusion according to the configured weights, with the formula as follows:

$$P = \sum_{i=1}^{N} w_i \cdot p_i$$

wherein $P$ characterizes the fused pixel value and $p_i$ is the pixel value at the corresponding position of the $i$-th view image.
And the image fusion reconstruction unit is used for completing image fusion reconstruction according to the fused pixel values.
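The weight configuration and pixel fusion performed by these units can be sketched together as follows; the function name and sample scores are illustrative assumptions:

```python
import numpy as np

def fuse_pixels(pixels, quality, confidence):
    """Fuse per-view pixel values: each view's weight is its image
    quality score times its segmentation confidence, normalised over
    all views, and the fused value is the weighted sum."""
    q = np.asarray(quality, dtype=float)
    c = np.asarray(confidence, dtype=float)
    w = q * c
    w = w / w.sum()                       # normalise weights to sum to 1
    return float(np.dot(w, np.asarray(pixels, dtype=float)))

# Two views: the higher-quality, higher-confidence view dominates the result.
fused = fuse_pixels(pixels=[100.0, 200.0],
                    quality=[0.9, 0.3],
                    confidence=[1.0, 0.5])
```

Applying this per pixel position across the registered views yields the fused image used for three-dimensional reconstruction.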
In some implementations, the fused reconstruction identification module 14 further includes:
And the fusion data volume extraction and adaptation evaluation unit is used for extracting the fusion data volume of the image fusion reconstruction, carrying out adaptation evaluation according to the fusion data volume and the position complexity, and establishing an adaptation abnormal identifier.
And the additional acquisition instruction generation unit is used for generating additional acquisition instructions based on the adaptation abnormality identification.
And the additional data acquisition and fusion reconstruction compensation unit is used for controlling the imaging equipment to acquire additional data through the additional acquisition instruction and carrying out image fusion reconstruction compensation according to an additional data acquisition result.
In some embodiments, the support optimization module 16 includes:
And the support region independence processing unit is used for taking each preliminary support region as an independent region, establishing a region objective function based on the region information of the preliminary support region, wherein the evaluation characteristics of the region objective function comprise the number of support points, materials, stability and post-treatment difficulty.
The regional evaluation and association establishing unit is used for carrying out regional evaluation on the independent regions and establishing regional association of the independent regions, wherein the regional association comprises regional cooperative association and regional competitive association, and the regional evaluation comprises spatial proximity analysis, mechanical coupling analysis and material sharing analysis.
The support point limit constraint and selection optimizing unit is used for establishing limit constraint of the support points, configuring a solution space by the limit constraint, taking the regional objective function as an evaluation function, executing selection optimizing of the support points in the solution space, carrying out iterative compensation of selection optimizing by regional association, and establishing a selection optimizing result.
In some implementations, the support point limit constraint and selection optimizing unit in the support optimizing module 16 includes:
And the cooperative packet and competing packet construction unit is used for establishing the cooperative packet and competing packet by the area association.
And the joint objective function construction and collaborative grouping optimization unit is used for constructing a joint objective function through the regional objective function corresponding to the collaborative grouping, and performing collaborative grouping joint optimization through the joint objective function.
And the competitive grouping priority ordering unit is used for ordering the priority orders of the competitive grouping and establishing an ordering result.
And the sequence optimization and iteration compensation unit is used for establishing sequence optimization based on the sequencing result and completing iteration compensation according to the joint optimization and the sequence optimization.
In some embodiments, the system further comprises:
and the three-dimensional model fixed point position recognition unit is used for recognizing the fixed point position of the model for the three-dimensional model, and performing scale division based on the fixed point position recognition result to establish M scale division results.
And the spatial scale filtering unit is used for performing spatial scale filtering from coarse scale to fine scale based on M scale division results.
And the filtering fusion and three-dimensional model updating unit is used for executing the filtering fusion of each scale division result after all the scale division results are filtered, and updating the three-dimensional model according to the filtering fusion result.
In some implementations, the spatial scale filtering unit in the system is configured to:
Spatial scale filtering is performed by the following formula:

$$\mathbf{v}_i' = \mathbf{v}_i + \lambda \cdot \frac{1}{|N(i)|} \sum_{j \in N(i)} w_{ij} \left( \mathbf{v}_j - \mathbf{v}_i \right) + \beta \, \mathbf{n}_i$$

wherein $\mathbf{v}_i'$ represents the new position vector of vertex $i$ after the filtering process, $\mathbf{v}_i$ is the position vector of vertex $i$, $\lambda$ is the smoothing coefficient, $N(i)$ denotes the set of vertices adjacent to vertex $i$, $|N(i)|$ denotes the number of adjacent vertices of vertex $i$, $w_{ij}$ is the weight between vertex $i$ and vertex $j$, characterizing the degree of influence of vertex $j$ on the position update of vertex $i$, $\mathbf{v}_j$ is the position vector of vertex $j$, $\beta$ is the normal deviation control factor, and $\mathbf{n}_i$ is the normal vector of vertex $i$.
It should be understood that the embodiments mentioned in this specification focus on differences from other embodiments, and the specific embodiment in the first embodiment is equally applicable to the orthopedic 3D printing model building device based on intelligent AI described in the second embodiment, which is not further developed herein for brevity of description.
It is to be understood that both the foregoing description and the embodiments of the present invention enable one skilled in the art to utilize the present invention. Meanwhile, the invention is not limited to the above-mentioned embodiments, and it should be understood that those skilled in the art may still modify the technical solutions described in the above-mentioned embodiments or substitute some technical features thereof, and these modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the spirit and scope of the technical solutions of the embodiments of the invention, and all the modifications or substitutions should be included in the protection scope of the invention.
Claims (8)
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202411884589.6A CN119339006A (en) | 2024-12-20 | 2024-12-20 | Orthopedics 3D printing model construction method and device based on intelligent AI |
Publications (1)
Publication Number | Publication Date |
---|---|
CN119339006A true CN119339006A (en) | 2025-01-21 |
Family
ID=94265329