
US20170128026A1 - Networked imaging system with real-time feedback loop - Google Patents

Networked imaging system with real-time feedback loop

Info

Publication number
US20170128026A1
Authority
US
United States
Prior art keywords
imaging
motion
information
devices
platform
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/409,856
Inventor
Adam Deitz
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wenzel Spine Inc
Original Assignee
Ortho Kinematics Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ortho Kinematics Inc filed Critical Ortho Kinematics Inc
Priority to US15/409,856
Assigned to ORTHO KINEMATICS, INC. reassignment ORTHO KINEMATICS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DEITZ, ADAM
Publication of US20170128026A1
Assigned to STATERA SPINE, INC. reassignment STATERA SPINE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: ORTHO KINEMATICS, INC.
Assigned to WENZEL SPINE, INC. reassignment WENZEL SPINE, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: STATERA SPINE, INC.

Classifications

    • A61B6/04: Positioning of patients; tiltable beds or the like
    • A61B6/0487: Motor-assisted positioning of patients
    • A61B6/469: Interfacing with the operator or the patient; input means for selecting a region of interest [ROI]
    • A61B6/505: Radiation diagnosis specially adapted for diagnosis of bone
    • A61B6/547: Control of apparatus or devices for radiation diagnosis involving tracking of position of the device or parts of the device
    • A61B5/0013: Remote monitoring of patients using telemetry; transmission of medical image data
    • A61B5/055, A61B5/0555: Measuring for diagnostic purposes involving electronic [EMR] or nuclear [NMR] magnetic resonance, e.g. magnetic resonance imaging
    • A61B5/1116: Determining posture transitions
    • A61B5/1128: Measuring movement of the entire body or parts thereof using image analysis
    • A61B5/4528: Evaluating or diagnosing the musculoskeletal system; joints
    • A61B5/4561: Evaluating static posture, e.g. undesirable back curvature
    • A61B5/702: Means for positioning the patient; posture restraints
    • A61B5/704: Means for positioning the patient; tables
    • G06T7/0012: Biomedical image inspection
    • G06T7/20: Analysis of motion
    • A61B2090/374: Surgical systems with images on a monitor during operation; NMR or MRI
    • A61B2090/376: Surgical systems with images on a monitor during operation using X-rays, e.g. fluoroscopy
    • A61B2562/17: Details of sensor housings or probes comprising radiolucent components
    • A61B2576/00: Medical imaging apparatus involving image processing or analysis

Definitions

  • One of the most prevalent joint problems is back pain, particularly in the “small of the back” or lumbosacral (L4-S1) region.
  • the pain severely limits a person's functional ability and quality of life.
  • Such pain can result from a variety of spinal pathologies.
  • the vertebral bodies, intervertebral discs, laminae, spinous process, articular processes, or facets of one or more spinal vertebrae can become damaged, such that the vertebrae no longer articulate or properly align with each other. This can result in an undesired anatomy, loss of mobility, and pain or discomfort.
  • Another type of orthopedic intervention is the spinal treatment known as decompressive laminectomy.
  • a procedure which involves excision of part or all of the laminae and other tissues to relieve compression of nerves is called a decompressive laminectomy. See, for example, U.S. Pat. No. 5,019,081 to Watanabe for Laminectomy Surgical Process; and U.S. Patent Publication Nos. 2005/0240193 to Layne for Devices for creating voids in interior body regions and related methods; 2006/0149136 to Seto for Elongating balloon device and method for soft tissue expansion; 2007/0067034 to Chirico for Implantable Devices and Methods for Treating Micro-Architecture Deterioration of Bone Tissue; and 2006/0264952 to Nelson for Methods of Using Minimally Invasive Actuable Bone Fixation Devices.
  • Health care providers rely on an understanding of joint anatomy and mechanics when evaluating a subject's suspected joint problem and/or biomechanical performance issue. Understanding anatomy and joint biomechanics assists in the diagnosis and evaluation of a subject for an orthopedic intervention.
  • diagnostic tools are limited in the level of detail and analysis that can be achieved.
  • the intention is to address a specific structural or mechanical problem within the joint. For example, a surgeon might prescribe a spinal fusion procedure to physically immobilize the vertebra of a subject suffering from vertebral instability, or a physical therapist might prescribe exercises to strengthen a specific tendon or muscle that is responsible for a joint problem, etc.
  • Muscle guarding is a well-established concept that is hypothesized to be highly prevalent among sufferers of joint pain, specifically that of the neck and back. In muscle guarding, a subject responds to chronic pain by immobilizing the painful area through involuntary muscle involvement. The ability to isolate different muscle groups is desirable to determine which muscle group or combination of groups, if any, could be contributing to, or responsible for, any joint dysfunction.
  • the level of entrenchment of muscle guarding behavior cannot currently be determined.
  • the operative question in determining the level of “entrenchment” of any observed muscle guarding is whether the muscle guarding behavior is one which conservative, non-surgical methods of therapy could address, or whether the behavior is so “entrenched” that such efforts would be futile and surgery should be considered.
  • joint dysfunctions may not always present themselves in the movements traditionally measured during spinal kinematic studies such as flexion-extension and side-bending in either “full” non-weight-bearing or “full” weight-bearing planes of movement, which correspond to lying down and standing up postures respectively.
  • Certain painful movements occur during joint rotation when the plane of rotation is somewhere between these two postures.
  • Certain other painful movements only occur when the subject is rotating his or her spine while in a bent posture.
  • in an upright posture, gravitational forces are relatively evenly distributed across the surface area of the vertebrae.
  • in a bent posture, gravitational forces are concentrated on the sections of the vertebrae located toward the direction of the bend. Detecting motion dysfunctions that occur only when in a standing bent posture requires the replication of joint motion in that specific bent posture in a controlled, repeatable, and measurable manner during examination.
  • Static images are a small number of images of a joint structure taken at different points in the joint's range of motion, with the subject remaining still in each position while the image is being captured.
  • Static imaging studies have focused mainly on detecting structural changes to the bones and other internal joint structures.
  • An example of the diagnostic application of static imaging studies is with the detection of spinal disc degeneration by the use of plain X-rays, MR images and discograms.
  • these applications yield poor diagnostic performance with an unacceptably high proportion of testing events yielding either inconclusive or false positive/false negative diagnostic results (Lawrence, J. S.
  • the diagnostic interpretation of such measurements would normally be based on a comparative analysis of joint motion measurements across a wide population of subjects, and would strive to identify statistically significant differences in these measurements between “normal” and “unhealthy” subjects, such that any given subject can be classified as “normal” or “unhealthy” based on that subject's joint motion measurement values. For such purposes, it is necessary to reduce the background variability of measurements across tested subjects as much as possible, so that any observed difference between “normal” and “unhealthy” subjects can be definitively attributable to a specific condition.
  • U.S. Pat. No. 7,000,271 discloses a tilting table capable of some movement to keep an iso-center at a fixed position.
  • U.S. Pat. No. 7,343,635 describes a multi-articulated tilting table which positions and supports a subject during examination and treatment.
  • U.S. Patent No. 7,502,641 to Breen discloses a device for controlling joint motion and minimizing the effects of muscle involvement in the joint motion being studied. This device minimizes variability among joint motion measurements across wide populations of subjects.
  • U.S. Pat. No. 5,505,208 to Toomin et al. describes a method for measuring muscle dysfunction by collecting muscle activity measurements using electrodes placed in a pattern across a subject's back while the subject performs a series of poses, with measurements made at static points within the movement. The electromyographical readings of “unhealthy” subjects were then compared to those of a “normal” population so as to identify subjects with abnormal readings; however, the method does not provide a way to report results as a degree of departure from an ideal reading, and instead can only say whether a reading is “abnormal”.
  • U.S. Pat. No. 6,280,395 added an additional advantage to this method for determining muscle dysfunction by using the same approach while adding the ability to better normalize the data, employing a more accurate reading of the thickness of the adipose tissue and other general characteristics that might introduce variability into the readings, as well as the ability to quantify how abnormal a subject's electromyographical reading is compared to a “normal” population.
  • Joint muscle activity has been evaluated using electromyography in combination with some type of method or device to track the surface motion of the joint.
  • visual landmarks were used to help the subject more consistently reproduce a tested motion so as to standardize the joint motion and eliminate variability. (Lariviere, C 2000)
  • visual landmarking methods do not yield as “standardized” a motion as can be achieved with motion that is mechanically controlled, and measurements of the motion of internal joint structures based on surface motion measurements are too variable to be of significant clinical utility.
  • Electromyographic measurements taken during weight-bearing joint motion, with simultaneous recording of the motion of the body part using goniometers and simultaneous recording of the motion of internal joint structures through the tracking of surgically-implanted metal markers, have been used to correlate muscle activity with the motion of joints and internal joint structures (Kaigle, supra).
  • this approach studied joint motion that was uncontrolled and required an invasive surgical procedure to place the metal markers, and thus was neither useful nor feasible for clinical diagnostic application.
  • Electromyography has also been used in conjunction with a device that provides transient force perturbation so as to observe whether there is a difference between subjects with low back pain and those without low back pain to determine how their muscles respond to such a force.
  • the objective was to determine whether there is an altered muscle activation pattern when using a ramped effort.
  • This approach does not address the issue of which discrete muscle group or groups might account for the difference between activation patterns in subjects with joint dysfunctions and those without.
  • this method does not take into consideration the internal structural joint motions and thus provides an incomplete set of information upon which to draw diagnostic conclusions.
  • An imaging system is disclosed that comprises a tracking system and an imaging apparatus that communicate information through real-time or near real-time feedback loops and apply continuous adjustments to the imaging environment during an imaging session based upon the information imported from the tracking system.
  • the feedback can be configured to dynamically adjust the range of motion to correspond to an achieved patient motion instead of the motion of the patient movement device.
  • the integrated imaging system integrates hardware and software components and incorporates a tracking system for producing precise diagnostic information and image information for the purposes of producing an optimal imaging session, and an imaging apparatus with a central control unit that communicates with each component and continuously adjusts so as to produce the most favorable imaging environment.
  • the integrated imaging system can be adapted and configured to import information about the testing session and adapt functional imaging settings based on the imported information.
  • Those skilled in the art will appreciate that the system described herein can be applied to or incorporated into any imaging device available now or that becomes available in the future.
  • the imaging system integrates a series of feedback loops that share information with respect to patient positioning and imaging quality and frequency.
  • the information can be transmitted through either a direct wire-based electronic connection between the two or more components, or through a wireless connection.
  • the information can be the type that is derived from computer programming or from operator or patient input, or from a combination of computer programmed information plus operator and/or patient input.
  • Methods, systems and devices register and track imaging information real-time or near real-time and provide a feedback mechanism which impacts further imaging. As a result of the feedback, patient exposure during imaging may be reduced and image capture may be enhanced.
  • An aspect of the disclosure is directed to a machine-readable medium that provides instructions which, when executed by a set of processors, cause the processors to provide instructions to at least one of a motion device and an imaging device, the instructions comprising: receiving information from at least one of the motion device and the imaging device during an imaging session; analyzing the received information; and instructing at least one of the motion device and imaging device to change the imaging environment during the imaging session.
  • the steps of receiving, analyzing and instructing are repeated a plurality of times during the imaging session.
  • the step of instructing is performed real-time or near real-time. Real-time can, for example, mean that the analysis processes the data quickly enough that no data are excluded from the analysis.
  • a real-time system can also be configured to receive data, process it, and respond within a time frame set by outside events, in such a manner that no delay is perceived by an operator or patient.
  • Near real-time might include, for example, a momentary lag time (seconds to minutes) within an imaging session while the information is processed and instructions are generated.
  • the instruction can change an aspect of an imaging field and/or change a movement of a motion device.
  • Suitable imaging devices include, but are not limited to, an X-ray tube and image intensifier with dosage control, and a magnetic resonance scanner.
  • a suitable motion device can be configured to further comprise a laterally moveable platform, such as a movable platform situated on a support which lies on an upper surface of the platform base.
  • the machine-readable medium can further comprise a processing system.
  • the motion device is adapted to communicate motion information to the imaging device during use.
  • continuous adjustments can be made to an imaging environment, including, for example, basing the range of motion of the motion device on a selected target motion for a patient or on a gross motion of the patient.
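The receive-analyze-instruct cycle described in the bullets above can be pictured with a minimal software sketch. The code below is illustrative only and is not taken from the patent: the names (Instruction, analyze, feedback_loop), the duck-typed read_state/apply device interfaces, and the toy recentering analysis are all assumptions made for the example.

```python
import time
from dataclasses import dataclass
from typing import Optional


@dataclass
class Instruction:
    """Hypothetical instruction: change an aspect of the imaging field and/or the motion device movement."""
    field_center_mm: Optional[float] = None      # new center of the imaging field
    platform_angle_deg: Optional[float] = None   # new angle for the movable platform


def analyze(motion_info: dict, image_info: dict) -> Instruction:
    """Toy analysis: re-center the imaging field on the joint position reported by the tracking system."""
    return Instruction(field_center_mm=motion_info["joint_position_mm"])


def feedback_loop(motion_device, imaging_device, session_duration_s: float = 30.0) -> None:
    """Repeat receive -> analyze -> instruct for the duration of an imaging session."""
    start = time.monotonic()
    while time.monotonic() - start < session_duration_s:
        motion_info = motion_device.read_state()        # receive information from the motion device
        image_info = imaging_device.read_state()        # receive information from the imaging device
        instruction = analyze(motion_info, image_info)  # analyze the received information
        imaging_device.apply(instruction)               # instruct the imaging device ...
        motion_device.apply(instruction)                # ... and the motion device to change the environment
        time.sleep(0.05)                                # near real-time: bounded lag between iterations
```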
  • Another aspect of the disclosure is directed to an apparatus for use in a shared computer network able to carry real-time data streams, the apparatus comprising: means for transmitting data packets to a data destination in at least one real-time or near real-time data stream over the shared computer network, wherein each of the data packets contains instructions for controlling at least one of an imaging device and a motion control device during an imaging session. Instructions can be streamed over the shared computer network a plurality of times during the imaging session. Additionally, an instruction can be configured to change an aspect of an imaging field and/or a movement of a motion device. Suitable imaging devices include, but are not limited to, an X-ray tube and image intensifier with dosage control and a magnetic resonance scanner.
  • the motion device can further comprise a laterally moveable platform, such as a movable platform situated on a support which lies on an upper surface of the platform base.
  • a control arm can further be provided for driving movement of a moveable platform.
  • a processing system can be provided.
  • the motion device can further be adapted to communicate motion information to the imaging device during use. Furthermore, continuous adjustments can be made to the imaging environment.
  • a range of motion of the motion device can further be based on a selected target motion for a patient, or on a gross motion of a patient.
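For illustration only, the sketch below shows one plausible way such an instruction could be packaged as a data packet for streaming over a shared network. The JSON field names and the 4-byte length prefix are assumptions made for the example, not a format disclosed in the patent; a sender would emit such packets repeatedly during a session, one per feedback iteration.

```python
import json


def make_instruction_packet(seq: int, field_center_mm: float, platform_angle_deg: float) -> bytes:
    """Serialize one hypothetical instruction as a length-prefixed JSON packet."""
    payload = json.dumps({
        "seq": seq,                                             # sequence number within the imaging session
        "imaging": {"field_center_mm": field_center_mm},        # change an aspect of the imaging field
        "motion": {"platform_angle_deg": platform_angle_deg},   # change a movement of the motion device
    }).encode("utf-8")
    return len(payload).to_bytes(4, "big") + payload


def parse_instruction_packet(packet: bytes) -> dict:
    """Recover the instruction dictionary from a packet produced by make_instruction_packet."""
    length = int.from_bytes(packet[:4], "big")
    return json.loads(packet[4:4 + length].decode("utf-8"))
```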
  • Still another aspect of the disclosure is directed to a system measuring skeletal joint motion in a subject comprising: a) a motion device adapted and configured to continuously move a joint of the subject, the motion device comprising: a platform base, and a motion platform further comprising a static platform connected to an upper surface of the platform base, a movable platform connected to at least one of the static platform or an upper surface of the platform base, wherein the static platform is adjacent the movable platform wherein movement of the movable platform is achieved in operation by a motor in communication with the moveable platform; b) an imaging device in communication with the motion device adapted and configured to obtain imaging data; and c) a computing system adapted and configured to analyze the obtained imaging data to generate an instruction and then communicate the instruction to at least one of the motion device and the imaging device.
  • Suitable imaging devices include, but are not limited to an X-ray tube and image intensifier with dosage control and a magnetic resonance scanner.
  • the platform can, for example, be a laterally moveable platform, such as a platform situated on a support which lies on the upper surface of the platform base.
  • a control arm can be provided for driving movement of the moveable platform.
  • a processing system can be provided.
  • the motion device can be adapted to communicate motion information to the imaging device during use.
  • the system can also provide continuous adjustments to the imaging environment.
  • Suitable instructions include, for example, changes to an aspect of an imaging field and/or changes to a movement of a motion device.
  • the range of motion of the motion device is based on a selected target motion for a patient.
  • the range of motion of the device is based on a gross motion of a patient.
  • Another aspect of the disclosure is directed to a method for imaging skeletal structures in vivo comprising: i) positioning a subject on a motion device adapted and configured to move a joint during a use session; ii) imaging the subject positioned on the motion device during the use session with an imaging device; iii) collecting image data; iv) analyzing the collected image data; and v) communicating an instruction based on the analyzed image data to at least one of the motion device and imaging device prior to acquiring a subsequent image.
  • the step of communicating an instruction can, for example, include changing an aspect of an imaging field and/or changing a movement of a motion device.
  • the step of communicating an instruction can include, for example, transmitting a new instruction which changes one or more settings, transmitting an instruction which maintains the most recent one or more settings, transmitting an instruction that repeats an earlier one or more instructions, or providing no change to the current instructions.
  • the system can be adapted and configured to automatically repeat the most recent instructions after a lag of a set amount of time from providing the image data for analysis.
  • suitable imaging devices include, but are not limited to an X-ray tube and image intensifier with dosage control and a magnetic resonance scanner.
  • the motion device can further comprise a movable platform situated on a support which lies on an upper surface of a platform base.
  • a calibration step is carried out prior to at least one of the method steps of i) to vi).
  • the relative motion of lumbar vertebrae L2 to L3, L3 to L4, and L4 to L5 can be tracked simultaneously or separately.
  • the method can be used for a diagnosis of a pseudoarthrosis in the subject, and the method comprising analyzing the relative motion of skeletal structures in the subject.
  • An additional step can be provided for presenting an output in graphical form.
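A hedged sketch of method steps i) through v) as a simple driver loop follows. The function names (run_imaging_session, acquire, apply, analyze_frame) and the device interfaces are hypothetical, and the per-frame analysis is left as a placeholder rather than a claimed algorithm.

```python
def analyze_frame(image):
    """Placeholder analysis; a real system would derive position and quality metrics from the image.
    Returns an instruction object, or None when no adjustment is needed."""
    return None


def run_imaging_session(motion_device, imaging_device, num_frames: int = 20) -> list:
    """Hypothetical driver: position, image, collect, analyze, instruct, then repeat."""
    frames = []
    motion_device.position_subject()               # i) position the subject on the motion device
    for _ in range(num_frames):
        image = imaging_device.acquire()           # ii) image the subject during the use session
        frames.append(image)                       # iii) collect image data
        instruction = analyze_frame(image)         # iv) analyze the collected image data
        if instruction is not None:                # v) communicate an instruction to the devices
            imaging_device.apply(instruction)      #    before acquiring a subsequent image
            motion_device.apply(instruction)
    return frames
```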
  • Yet another aspect of the disclosure includes a process for capturing data and controlling skeletal joint motion of a subject comprising: (a) providing an apparatus adapted and configured to selectively cause and control joint motion of the subject having a base positioned in a first base plane, a fixable platform adapted and configured to engage the base at an attachment mechanism, the fixable platform having a first position in a first fixable platform plane and fixably adjustable to a second position, a dynamic platform having a first position in a first dynamic platform plane, adjustable to a second position and selectively rotatable about an axis, and a coupling member adapted and configured to connect the fixable platform to the dynamic platform or the base; (b) positioning the subject in a first position such that a first body part of the subject is at least partially positioned adjacent the static platform, and a second body part of the subject is at least partially positioned adjacent the motion platform; (c) capturing, with a medical diagnostic device, a first diagnostic data from the subject and the apparatus; (d) transmitting the first diagnostic data from the
  • the data capturing steps of the process can further comprise using a medical diagnostic device selected from the group consisting of X-ray scanner, X-ray tube with image intensifier tube, magnetic resonance scanner, infrared camera, computed tomography scanner, ultrasound scanner, electromyography sensor unit, digital camera and camera.
  • steps (b) through (i) can be repeated a plurality of times during a single imaging session. Instructions can be provided that change an aspect of an imaging field and/or change a movement of a motion device.
  • FIG. 1A is a diagram showing a representative example of a logic device through which an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 1B is a block diagram of an exemplary computing environment through which an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 1C is an illustrative architectural diagram showing some structure that can be employed by devices through which an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 2 is an exemplary diagram of a server in an implementation suitable for use in a system where an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 3 is an exemplary diagram of a master system in an implementation suitable for use in a system where an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 4 is a block diagram showing the cooperation of exemplary components of a system suitable for use in a system where an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIGS. 5A and 5B show side and top view block diagrams of the horizontally configured motion control device, consisting of the two sub-systems and attachment mechanisms, in a “default” configuration, according to one embodiment of the present invention;
  • FIGS. 5C-E illustrate the device from different views;
  • FIGS. 6A and 6B show side view block diagrams of the horizontally configured motion control device and related parts of the preferred embodiment in the “front-up” (FIG. 6A) and “front-down” (FIG. 6B) configurations suitable for use with the system disclosed;
  • FIGS. 7A and 7B show side and front view block diagrams, respectively, a vertically configured motion control device in a “default” configuration suitable for use with the system disclosed;
  • FIGS. 7C-E illustrate a device from different views;
  • FIGS. 8A, 8B, and 8C show side view diagrams of a vertically configured motion control device in a “default”, “top out” and “top in” configurations, respectively, according to one embodiment of the present invention
  • FIG. 9A is a simplified block diagram of the components comprising the integrated imaging system where the imaging apparatus and the tracking apparatus are integrated into the same apparatus and the central processing unit is a part of the apparatus;
  • FIG. 9B is a simplified block diagram of the components comprising the integrated imaging system where the imaging apparatus and the tracking apparatus are integrated into the same apparatus and the central processing unit is a separate unit that communicates either wirelessly or through a direct wired connection with the integrated imaging system;
  • FIG. 10 is a simplified block diagram of the components comprising the integrated imaging system where the imaging apparatus and the tracking apparatus are two separate units and communicate through a central processing unit that communicates with each of the imaging and tracking apparatus through a wireless or a direct wired connection; and
  • FIG. 11 is a flow chart of the process by which the integrated imaging system operates.
  • An integrated imaging system that incorporates real time tracking algorithms and feedback loops for producing precise diagnostic information, and an imaging device that is adaptable in response to the integrated imaging system and feedback loops to produce an optimal imaging session.
  • the systems and methods described herein rely on a variety of computer systems, networks and/or digital devices for operation. In order to fully appreciate how the system operates, an understanding of suitable computing systems is useful. The systems and methods disclosed herein are enabled as a result of application via a suitable computing system.
  • FIG. 1A is a block diagram showing a representative example logic device through which a browser can be accessed to implement the present invention.
  • a computer system (or digital device) 100, which may be understood as a logic apparatus adapted and configured to read instructions from media 114 and/or network port 106, is connectable to a server 110, and has a fixed media 116.
  • the computer system 100 can also be connected to the Internet or an intranet.
  • the system includes central processing unit (CPU) 102, disk drives 104, optional input devices, illustrated as keyboard 118 and/or mouse 120, and optional monitor 108.
  • Data communication can be achieved through, for example, communication medium 109 to a server 110 at a local or a remote location.
  • the communication medium 109 can include any suitable means of transmitting and/or receiving data.
  • the communication medium can be a network connection, a wireless connection or an internet connection. It is envisioned that data relating to the present invention can be transmitted over such networks or connections.
  • the computer system can be adapted to communicate with a participant and/or a device used by a participant.
  • the computer system is adaptable to communicate with other computers over the Internet, or with computers via a server.
  • FIG. 1B depicts another exemplary computing system 100 .
  • the computing system 100 is capable of executing a variety of computing applications 138 , including computing applications, a computing applet, a computing program, or other instructions for operating on computing system 100 to perform at least one function, operation, and/or procedure.
  • Computing system 100 is controllable by computer readable storage media for tangibly storing computer readable instructions, which may be in the form of software.
  • the computer readable storage media adapted to tangibly store computer readable instructions can contain instructions for computing system 100 for storing and accessing the computer readable storage media and reading the instructions stored thereon.
  • Such software may be executed within CPU 102 to cause the computing system 100 to perform desired functions.
  • In many known computer servers, workstations, and personal computers, the CPU 102 is implemented as a micro-electronic chip called a microprocessor.
  • a co-processor distinct from the main CPU 102 , can be provided that performs additional functions or assists the CPU 102 .
  • the CPU 102 may be connected to a co-processor through an interconnect.
  • One common type of coprocessor is the floating-point coprocessor, also called a numeric or math coprocessor, which is designed to perform numeric calculations faster and better than the general-purpose CPU 102 .
  • a computer readable medium stores computer data, which data can include computer program code that is executable by a computer, in machine readable form.
  • a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals.
  • Computer readable storage media refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable storage media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data.
  • Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
  • the CPU 102 fetches, decodes, and executes instructions, and transfers information to and from other resources via the computer's main data-transfer path, system bus 140 .
  • a system bus connects the components in the computing system 100 and defines the medium for data exchange.
  • Memory devices coupled to the system bus 140 include random access memory (RAM) 124 and read only memory (ROM) 126 .
  • Such memories include circuitry that allows information to be stored and retrieved.
  • the ROMs 126 generally contain stored data that cannot be modified. Data stored in the RAM 124 can be read or changed by CPU 102 or other hardware devices. Access to the RAM 124 and/or ROM 126 may be controlled by memory controller 122 .
  • the memory controller 122 may provide an address translation function that translates virtual addresses into physical addresses as instructions are executed.
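The address-translation role described for the memory controller can be illustrated with a toy page-table lookup. The page size, the table contents, and the names below are invented for the example and are not part of the disclosure.

```python
PAGE_SIZE = 4096                       # 4 KiB pages, a common but assumed size
page_table = {0: 7, 1: 3, 2: 11}       # toy mapping: virtual page number -> physical frame number


def translate(virtual_address: int) -> int:
    """Translate a virtual address to a physical address using the toy page table."""
    vpn, offset = divmod(virtual_address, PAGE_SIZE)
    frame = page_table[vpn]             # raises KeyError for an unmapped page (a "page fault")
    return frame * PAGE_SIZE + offset


print(hex(translate(0x1a2c)))           # virtual page 1, offset 0xa2c -> frame 3 -> 0x3a2c
```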
  • the computing system 100 can contain peripherals controller 128 responsible for communicating instructions from the CPU 102 to peripherals, such as printer 142, keyboard 118, mouse 120, and data storage drive 143.
  • Display 108, which is controlled by a display controller 163, is used to display visual output generated by the computing system 100. Such visual output may include text, graphics, animated graphics, and video.
  • the display controller 134 includes electronic components required to generate a video signal that is sent to display 108 .
  • the computing system 100 can contain network adaptor 136 which may be used to connect the computing system 100 to an external communications network 132 .
  • the Internet is a worldwide network of computer networks.
  • Today, the Internet is a public and self-sustaining network that is available to many millions of users.
  • the Internet uses a set of communication protocols called TCP/IP (i.e., Transmission Control Protocol/Internet Protocol) to connect hosts.
  • the Internet has a communications infrastructure known as the Internet backbone. Access to the Internet backbone is largely controlled by Internet Service Providers (ISPs) that resell access to corporations and individuals.
  • Each host device on the network has at least one IP address that is its own unique identifier. IP is a connectionless protocol, and the connection between end points during a communication is not continuous.
  • when data is sent or received, it is divided into packets. Every packet is treated as an independent unit of data and routed to its final destination, but not necessarily via the same path.
  • the Open System Interconnection (OSI) model was established to standardize transmission between points over the Internet or other networks.
  • the OSI model separates the communications processes between two points in a network into seven stacked layers, with each layer adding its own set of functions. Each device handles a message so that there is a downward flow through each layer at a sending end point and an upward flow through the layers at a receiving end point.
  • the programming and/or hardware that provides the seven layers of function is typically a combination of device operating systems, application software, TCP/IP and/or other transport and network protocols, and other software and hardware.
  • the top four layers are used when a message passes from or to a user and the bottom three layers are used when a message passes through a device (e.g., an IP host device).
  • An IP host is any device on the network that is capable of transmitting and receiving IP packets, such as a server, a router or a workstation. Messages destined for some other host are not passed up to the upper layers but are forwarded to the other host.
  • the layers of the OSI model are listed below.
  • Layer 7 (the application layer), Layer 6 (the presentation layer), Layer 5 (the session layer), and Layer 4 (the transport layer) make up the upper layers.
  • Layer 3 (the network layer) is a layer that, e.g., handles routing and forwarding, etc.
  • Layer 2 (the data-link layer) is a layer that, e.g., provides synchronization for the physical level, does bit-stuffing and furnishes transmission protocol knowledge and management, etc.
  • the Institute of Electrical and Electronics Engineers (IEEE) sub-divides the data-link layer into two further sub-layers, the MAC (Media Access Control) layer that controls the data transfer to and from the physical layer and the LLC (Logical Link Control) layer that interfaces with the network layer and interprets commands and performs error recovery.
  • Layer 1 (i.e., the physical layer) is a layer that, e.g., conveys the bit stream through the network at the physical level.
  • the IEEE sub-divides the physical layer into the PLCP (Physical Layer Convergence Procedure) sub-layer and the PMD (Physical Medium Dependent) sub-layer.
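As a concrete, non-authoritative illustration of the transport the layers above describe, the sketch below sends and receives one length-prefixed packet over TCP/IP. The helper names are arbitrary choices for the example, and the packet format matches the hypothetical instruction packets sketched earlier.

```python
import socket


def send_packet(host: str, port: int, payload: bytes) -> None:
    """Send one length-prefixed packet to a destination over a TCP connection (Layer 4 transport)."""
    with socket.create_connection((host, port)) as conn:
        conn.sendall(len(payload).to_bytes(4, "big") + payload)


def _recv_exact(conn: socket.socket, n: int) -> bytes:
    """Read exactly n bytes from an accepted TCP connection."""
    data = b""
    while len(data) < n:
        chunk = conn.recv(n - len(data))
        if not chunk:
            raise ConnectionError("connection closed before the full packet arrived")
        data += chunk
    return data


def recv_packet(conn: socket.socket) -> bytes:
    """Read one length-prefixed packet: a 4-byte big-endian length followed by the payload."""
    length = int.from_bytes(_recv_exact(conn, 4), "big")
    return _recv_exact(conn, length)
```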
  • Wireless networks can incorporate a variety of types of mobile devices, such as, e.g., cellular and wireless telephones, PCs (personal computers), laptop computers, wearable computers, cordless phones, pagers, headsets, printers, PDAs, etc.
  • mobile devices may include digital systems to secure fast wireless transmissions of voice and/or data.
  • Typical mobile devices include some or all of the following components: a transceiver (for example a transmitter and a receiver, including a single chip transceiver with an integrated transmitter, receiver and, if desired, other functions); an antenna; a processor; display; one or more audio transducers (for example, a speaker or a microphone as in devices for audio communications); electromagnetic data storage (such as ROM, RAM, digital data storage, etc., such as in devices where data processing is provided); memory; flash memory; and/or a full chip set or integrated circuit; interfaces (such as universal serial bus (USB), coder-decoder (CODEC), universal asynchronous receiver-transmitter (UART), phase-change memory (PCM), etc.).
  • Wireless LANs in which a mobile user can connect to a local area network (LAN) through a wireless connection may be employed for wireless communications.
  • Wireless communications can include communications that propagate via electromagnetic waves, such as light, infrared, radio, and microwave.
  • There are a variety of WLAN standards that currently exist, such as Bluetooth®, IEEE 802.11, and the obsolete HomeRF.
  • Bluetooth products may be used to provide links between mobile computers, mobile phones, portable handheld devices, personal digital assistants (PDAs), and other mobile devices and connectivity to the Internet.
  • Bluetooth is a computing and telecommunications industry specification that details how mobile devices can easily interconnect with each other and with non-mobile devices using a short-range wireless connection. Bluetooth creates a digital wireless protocol to address end-user problems arising from the proliferation of various mobile devices that need to keep data synchronized and consistent from one device to another, thereby allowing equipment from different vendors to work seamlessly together.
  • An IEEE standard, IEEE 802.11, specifies technologies for wireless LANs and devices. Using 802.11, wireless networking may be accomplished with each single base station supporting several devices. In some examples, devices may come pre-equipped with wireless hardware or a user may install a separate piece of hardware, such as a card, that may include an antenna.
  • devices used in 802.11 typically include three notable elements, whether or not the device is an access point (AP), a mobile station (STA), a bridge, a Personal Computer Memory Card International Association (PCMCIA) card (or PC card) or another device: a radio transceiver; an antenna; and a MAC (Media Access Control) layer that controls packet flow between points in a network.
  • Multiple Interface Devices (MIDs) may be utilized in some wireless networks.
  • MIDs may contain two independent network interfaces, such as a Bluetooth interface and an 802.11 interface, thus allowing the MID to participate on two separate networks as well as to interface with Bluetooth devices.
  • the MID may have an IP address and a common IP (network) name associated with the IP address.
  • Wireless network devices may include, but are not limited to, Bluetooth devices, WiMAX (Worldwide Interoperability for Microwave Access) devices, Multiple Interface Devices (MIDs), 802.11x devices (IEEE 802.11 devices including 802.11a, 802.11b and 802.11g devices), HomeRF (Home Radio Frequency) devices, Wi-Fi (Wireless Fidelity) devices, GPRS (General Packet Radio Service) devices, 3G cellular devices, 2.5G cellular devices, GSM (Global System for Mobile Communications) devices, EDGE (Enhanced Data for GSM Evolution) devices, TDMA type (Time Division Multiple Access) devices, or CDMA type (Code Division Multiple Access) devices, including CDMA2000.
  • Each network device may contain addresses of varying types including but not limited to an IP address, a Bluetooth Device Address, a Bluetooth Common Name, a Bluetooth IP address, a Bluetooth IP Common Name, an 802.11 IP Address, an 802.11 IP common Name, or an IEEE MAC address.
  • Wireless networks can also involve methods and protocols found in Mobile IP (Internet Protocol) systems, in PCS systems, and in other mobile network systems. Mobile IP involves a standard communications protocol created by the Internet Engineering Task Force (IETF). With Mobile IP, mobile device users can move across networks while maintaining the IP address they were assigned once. See Request for Comments (RFC) 3344. NB: RFCs are formal documents of the IETF. Mobile IP enhances Internet Protocol (IP) and adds a mechanism to forward Internet traffic to mobile devices when they connect outside their home network. Mobile IP assigns each mobile node a home address on its home network and a care-of address (CoA) that identifies the current location of the device within a network and its subnets.
  • a mobility agent on the home network can associate each home address with its care-of address.
  • the mobile node can send the home agent a binding update each time it changes its care-of address using Internet Control Message Protocol (ICMP).
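The home-agent bookkeeping described in the two bullets above can be pictured with a toy mapping. HomeAgent, binding_update, and forward_to are invented names, and the addresses are documentation-range examples rather than values from the patent.

```python
class HomeAgent:
    """Toy mobility agent: associates each home address with its current care-of address."""

    def __init__(self):
        self._bindings = {}

    def binding_update(self, home_address: str, care_of_address: str) -> None:
        """Record a new care-of address, as a mobile node would report each time it moves."""
        self._bindings[home_address] = care_of_address

    def forward_to(self, home_address: str) -> str:
        """Return the address to which traffic for the mobile node should currently be tunneled."""
        return self._bindings.get(home_address, home_address)


agent = HomeAgent()
agent.binding_update("198.51.100.7", "203.0.113.20")   # node moved to a visited network
print(agent.forward_to("198.51.100.7"))                # -> 203.0.113.20
```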
  • the term node includes a connection point, which can include a redistribution point or an end point for data transmissions, and which can recognize, process and/or forward communications to other nodes.
  • Internet routers can look at an IP address prefix or the like identifying a device's network. Then, at a network level, routers can look at a set of bits identifying a particular subnet. Then, at a subnet level, routers can look at a set of bits identifying a particular device.
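The prefix/subnet/host splitting sketched in the bullet above can be checked with Python's standard ipaddress module. The specific prefix and address below are arbitrary documentation-range examples, not values from the disclosure.

```python
import ipaddress

net = ipaddress.ip_network("192.0.2.0/24")    # the first 24 bits identify the network
addr = ipaddress.ip_address("192.0.2.42")     # the remaining bits identify the host within it

print(addr in net)                                                # True: the address falls inside the prefix
print(net.network_address)                                        # 192.0.2.0, what a router matches at the network level
print(int(addr) & int(net.netmask) == int(net.network_address))   # the same membership test done by bit masking
```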
  • FIG. 1C depicts components that can be employed in system configurations enabling the systems and technical effect of this invention, including wireless access points to which client devices communicate.
  • FIG. 1C shows a wireless network 150 connected to a wireless local area network (WLAN) 152 .
  • the WLAN 152 includes an access point (AP) 154 and a number of user stations 156, 156′.
  • the network 150 can include the Internet or a corporate data processing network.
  • the access point 154 can be a wireless router, and the user stations 156, 156′ can be portable computers, personal desk-top computers, PDAs, portable voice-over-IP telephones and/or other devices.
  • the access point 154 has a network interface 158 linked to the network 150, and a wireless transceiver in communication with the user stations 156, 156′.
  • the wireless transceiver 160 can include an antenna 162 for radio or microwave frequency communication with the user stations 156, 156′.
  • the access point 154 also has a processor 164, a program memory 166, and a random access memory 168.
  • the user station 156 has a wireless transceiver 170 including an antenna 172 for communication with the access point station 154.
  • the user station 156′ has a wireless transceiver 170′ and an antenna 172 for communication to the access point 154.
  • an authenticator could be employed within such an access point (AP) and/or a supplicant or peer could be employed within a mobile node or user station.
  • a display 108 and keyboard 118 or other input devices can also be provided with the user stations.
  • the IEEE 802.21 standard defines extensible media access independent mechanisms that enable the optimization of handovers between heterogeneous 802 systems and may facilitate handovers between 802 systems and cellular systems.
  • “The scope of the IEEE 802.21 (Media Independent Handover) standard is to develop a specification that provides link layer intelligence and other related network information to upper layers to optimize handovers between heterogeneous media. This includes links specified by 3GPP, 3GPP2 and both wired and wireless media in the IEEE 802 family of standards.
  • media refers to method/mode of accessing a telecommunication system (e.g. cable, radio, satellite, etc.), as opposed to sensory aspects of communication (e.g. audio, video, etc.).” See 1.1 of I.E.E.E. P802.21/D.01.09, September 2006, entitled Draft IEEE Standard for Local and Metropolitan Area Networks: Media Independent Handover Services, the entire contents of which document is incorporated herein into and as part of this patent application. Other IEEE, or other such standards on protocols can be relied on as appropriate or desirable.
  • FIG. 2 is an exemplary diagram of a server 210 in an implementation consistent with the principles of the disclosure to achieve the desired technical effect and transformation.
  • Server 210 may include a bus 240 , a processor 202 , a local memory 244 , one or more optional input units 246 , one or more optional output units 248 , a communication interface 232 , and a memory interface 222 .
  • Bus 240 may include one or more conductors that permit communication among the components of chunk server 250 .
  • Processor 202 may include any type of conventional processor or microprocessor that interprets and executes instructions.
  • Local memory 244 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 202 and/or a read only memory (ROM) or another type of static storage device that stores static information and instructions for use by processor 202 .
  • Input unit 246 may include one or more conventional mechanisms that permit an operator to input information to a server 110, such as a keyboard 118, a mouse 120 (shown in FIG. 1), a pen, voice recognition and/or biometric mechanisms, etc.
  • Output unit 248 may include one or more conventional mechanisms that output information to the operator, such as a display 134, a printer 130 (shown in FIG. 1), a speaker, etc.
  • Communication interface 232 may include any transceiver-like mechanism that enables chunk server 250 to communicate with other devices and/or systems. For example, communication interface 232 may include mechanisms for communicating with master and clients.
  • Memory interface 222 may include a memory controller 122 .
  • Memory interface 222 may connect to one or more memory devices, such as one or more local disks 274 , and control the reading and writing of chunk data to/from local disks 276 .
  • Memory interface 222 may access chunk data using a chunk handle and a byte range within that chunk.
  • FIG. 3 is an exemplary diagram of a master system 376 suitable for use in an implementation consistent with the principles of the disclosure to achieve the desired technical effect and transformation.
  • Master system 376 may include a bus 340 , a processor 302 , a main memory 344 , a ROM 326 , a storage device 378 , one or more input devices 346 , one or more output devices 348 , and a communication interface 332 .
  • Bus 340 may include one or more conductors that permit communication among the components of master system 376.
  • Processor 302 may include any type of conventional processor or microprocessor that interprets and executes instructions.
  • Main memory 344 may include a RAM or another type of dynamic storage device that stores information and instructions for execution by processor 302 .
  • ROM 326 may include a conventional ROM device or another type of static storage device that stores static information and instructions for use by processor 302 .
  • Storage device 378 may include a magnetic and/or optical recording medium and its corresponding drive. For example, storage device 378 may include one or more local disks that provide persistent storage.
  • Input devices 346 used to achieve the desired technical effect and transformation may include one or more conventional mechanisms that permit an operator to input information to the master system 376, such as a keyboard 118, a mouse 120 (shown in FIG. 1), a pen, voice recognition and/or biometric mechanisms, etc.
  • Output devices 348 may include one or more conventional mechanisms that output information to the operator, including a display 108 , a printer 142 (shown in FIG. 1 ), a speaker, etc.
  • Communication interface 332 may include any transceiver-like mechanism that enables master system 376 to communicate with other devices and/or systems. For example, communication interface 332 may include mechanisms for communicating with servers and clients as shown above.
  • Master system 376 used to achieve the desired technical effect and transformation may maintain file system metadata within one or more computer readable mediums, such as main memory 344 and/or storage device 378.
  • the computer-implemented system provides a storage and delivery base that allows users to exchange services and information openly over the Internet to achieve the desired technical effect and transformation.
  • a user will be enabled to operate as both a consumer and producer of any and all digital content or information through one or more master system servers.
  • a user executes a browser to view digital content items and can connect to the front end server via a network, which is typically the Internet, but can also be any network, including but not limited to any combination of a LAN, a MAN, a WAN, a mobile, wired or wireless network, a private network, or a virtual private network.
  • the user device may include a variety of different computing devices. Examples of user devices include, but are not limited to, personal computers, digital assistants, personal digital assistants, cellular phones, mobile phones, smart phones or laptop computers.
  • the browser can include any application that allows users to access web pages on the World Wide Web. Suitable applications include, but are not limited to, Microsoft Internet Explorer®, Netscape Navigator®, Mozilla® Firefox, Apple® Safari or any application adapted to allow access to web pages on the World Wide Web.
  • the browser can also include a video player (e.g., Flash™ from Adobe Systems, Inc.), or any other player adapted for the video file formats used in the video hosting website. Alternatively, videos can be accessed by a standalone program separate from the browser.
  • a user can access a video from the website by, for example, browsing a catalog of digital content, conducting searches on keywords, reviewing aggregate lists from other users or the system administrator (e.g., collections of videos forming channels), or viewing digital content associated with particular user groups (e.g., communities).
  • FIG. 4 illustrates an exemplary networked computing environment 400, with a server in communication with client computers via a communications network 450.
  • As shown in FIG. 4, server 410 may be interconnected via a communications network 450 (which may be either of, or a combination of, a fixed-wire or wireless LAN, WAN, intranet, extranet, peer-to-peer network, virtual private network, the Internet, or other communications network) with a number of client computing environments such as tablet personal computer 402, mobile telephone 404, telephone 406, personal computer 402, and personal digital assistant 408.
  • server 410 can be a dedicated computing environment server operable to process and communicate data to and from client computing environments via any of a number of known protocols, such as hypertext transfer protocol (HTTP), file transfer protocol (FTP), simple object access protocol (SOAP), or wireless application protocol (WAP).
  • Other wireless protocols can be used without departing from the scope of the disclosure, including, for example Wireless Markup Language (WML), DoCoMo i-mode (used, for example, in Japan) and XHTML Basic.
  • networked computing environment 400 can utilize various data security protocols such as secure sockets layer (SSL) or pretty good privacy (PGP).
  • Each client computing environment can be equipped with operating system 438 operable to support one or more computing applications, such as a web browser (not shown), other graphical user interface (not shown), or a mobile desktop environment (not shown), to gain access to the server computing environment.
  • a user may interact with a computing application running on a client computing environment to obtain desired data and/or computing applications.
  • the data and/or computing applications may be stored on server computing environment 400 and communicated to cooperating users through client computing environments over exemplary communications network 450 .
  • The computing applications, described in more detail below, are used to achieve the desired technical effect and transformation set forth.
  • a participating user may request access to specific data and applications housed in whole or in part on server computing environment 400 . These data may be communicated between client computing environments and server computing environments for processing and storage.
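  • As a purely illustrative sketch (not a required implementation), the exchange described above can be pictured as a client computing environment requesting study data from the server computing environment over HTTP; the host name, path, and study identifier below are hypothetical placeholders.

        import json
        import urllib.request

        def request_study_data(study_id: str) -> dict:
            # Hypothetical server address and REST-style path, for illustration only.
            url = f"https://imaging-server.example.com/studies/{study_id}"
            with urllib.request.urlopen(url, timeout=5) as response:
                return json.loads(response.read().decode("utf-8"))

        # Usage (requires a reachable server at the assumed address):
        # study = request_study_data("flexion-extension-0001")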
  • Server computing environment 400 may host computing applications, processes and applets for the generation, authentication, encryption, and communication of data and applications, and may cooperate with other server computing environments (not shown), third party service providers (not shown), network attached storage (NAS) and storage area networks (SAN) to realize application/data transactions.
  • the communication network is adaptable and configurable to be in communication with one or more input devices 446 and/or one or more output devices 448 as discussed above.
  • input devices are those devices or components that provide information to the system and output devices are those devices or components that provide information from the system.
  • suitable input devices are, for example, those devices that input information into the system such as imaging devices and/or patient motion control devices as discussed herein.
  • Suitable output devices are, for example, those devices that receive information and/or data from one or more input devices, in a computing environment (such as shown in FIG. 4), process the received information and/or data, and generate a return real-time or near real-time signal to the input devices to achieve a technical effect of controlling the behavior or performance of the input devices to achieve a desired result.
  • the Media Independent Information Service provides a framework and corresponding mechanisms by which an MIHF entity may discover and obtain network information existing within a geographical area to facilitate handovers. Additionally or alternatively, neighboring network information discovered and obtained by this framework and mechanisms can also be used in conjunction with user and network operator policies for optimum initial network selection and access (attachment), or network re-selection in idle mode.
  • MIIS primarily provides a set of information elements (IEs), the information structure and its representation, and a query/response type of mechanism for information transfer.
  • the information can be present in some information server from which, e.g., an MIHF in the Mobile Node (MN) can access it.
  • MIIS provides the capability for obtaining information about lower layers such as neighbor maps and other link layer parameters, as well as information about available higher layer services such as Internet connectivity.
  • the MIIS provides a generic mechanism to allow a service provider and a mobile user to exchange information on different handover candidate access networks.
  • the handover candidate information can include different access technologies such as IEEE 802 networks, 3GPP networks and 3GPP2 networks.
  • the MIIS also allows this collective information to be accessed from any single network. For example, by using an IEEE 802.11 access network, it can be possible to get information not only about all other IEEE 802 based networks in a particular region but also about 3GPP and 3GPP2 networks. Similarly, using, e.g., a 3GPP2 interface, it can be possible to get access to information about all IEEE 802 and 3GPP networks in a given region.
  • MIIS enables this functionality across all available access networks by providing a uniform way to retrieve heterogeneous network information in any geographical area.
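  • The query/response exchange described above can be illustrated with a toy model; the data structures and the information-server contents below are assumptions for the sketch and are not the actual 802.21 information-element encoding.

        from dataclasses import dataclass
        from typing import List

        @dataclass
        class NetworkIE:
            """A simplified stand-in for an 802.21 information element (IE)."""
            network_type: str        # e.g. "IEEE 802.11", "3GPP", "3GPP2"
            operator: str
            channel_or_band: str
            supports_internet: bool

        # Hypothetical information-server contents, keyed by geographic area.
        INFO_SERVER = {
            "area-42": [
                NetworkIE("IEEE 802.11", "HospitalNet", "channel 6", True),
                NetworkIE("3GPP", "CarrierA", "band 7", True),
            ],
        }

        def query_candidate_networks(area_id: str) -> List[NetworkIE]:
            # A mobile node's MIHF would issue this query to the information server.
            return INFO_SERVER.get(area_id, [])

        print(query_candidate_networks("area-42"))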
  • Motion control device can be represented by a large box that contains various subsystems. Suitable motion control devices can be either passive or active. As will be appreciated by those of skill in the art, the motion control device can also be a horizontally configured motion control device, a vertically configured motion control device or a butterfly configured device. Motion control devices suitable for use with the systems include any of the motion control devices described herein as well as any other device suitable for controlling the motion of a target patient anatomy.
  • the diagnostic imaging hardware contains a field of imaging, which is a physical space in which objects imaged by the hardware must be located during the imaging process to produce images.
  • the field of imaging can contain a posture assistance device such as a table, bed, chair, or other device intended to bear all or some of the subject's weight and to provide physical support to a specific type of posture.
  • the field of imaging can contain no such devices if the subject can be situated directly onto the floor and/or the motion control device and does not require the use of an additional device to bear weight and/or support specific postures, according to one embodiment of the present invention.
  • the motion control device, or sub-systems therein, occupies part or all of the field of imaging and is physically connected and supported either by resting on the floor itself, or by being physically and immovably attached to the imaging equipment or a posture-assistance device within the field of imaging. All parts of the horizontally configured motion control device that are located within the field of imaging are constructed of materials that are either radiolucent in the case of use with videofluoroscopic and moving CT imaging systems, or alternatively compatible with MRI imaging in the case of a moving MRI imaging system, and therefore these parts of the motion control device do not obscure or produce artifacts on the diagnostic images.
  • the motion control device may also have the capacity to have pillows, cushions, and/or restraining devices attached to it at points where these pillows, cushions, and/or restraining devices aid in improving the comfort of the subject and/or in producing the correct posture and/or motion required for the test.
  • the motion control device as a unit is attachable and detachable by the operator within the field of imaging, according to one embodiment of the present invention.
  • a base is provided for the purpose of physically and immovably fixing and stabilizing the motion control device within the field of imaging to either the floor, the imaging equipment, and/or a posture-assistance device while the images and other measurements are being collected, and also for the purpose of providing an immoveable fixed structure on which to attach other sub-systems of the motion control device.
  • the base connects via attachment mechanisms at the points of contact between the base and either the floor, the imaging equipment, and/or a posture-assistance device.
  • the motion control device physically attaches to and therefore may bear its weight onto the base, and as the motion control device can be configured to also bear the entire weight of the subject, and with the subject moving during the testing process and therefore producing both static and dynamic forces, the base needs the structural integrity and gripping force required to remain static, stable, and fixed in the presence of such loads and forces.
  • the structural integrity is afforded by the use of rigid and strong materials such as plastics when radiolucent materials are desirable and in situations where compatibility with dynamic MRI systems is required, according to one embodiment of the present invention.
  • Said gripping force is afforded by the use of strong fixation mechanisms at the points of contact, and may be accomplished by either: (1) the weight of the motion control device itself, and the friction caused thereby and enhanced by the use of high-friction materials such as rubber at the points of contact, to fix and stabilize the motion control device; (2) screws, clamps, bolts, fasteners, straps, ties, cuffs, nuts, pins, or any other rigid or flexible fixation mechanism that provides immoveable fixation at the points of contact; and/or (3) some combination therein.
  • Base can be a highly configurable sub-system, adapted and configured to have several configurations and versions to accommodate the different types of postures; different types, sizes, and configurations of posture-assistance devices; different sizes and geometries of imaging equipment and imaging fields; different materials at the point of contact to which to connect between the base and either the floor, the imaging equipment, and/or a posture-assistance device; and different geometries and sizes of these points of contact.
  • the diagnostic imaging hardware contains a field of imaging, which is a physical space in which objects imaged by the hardware must be located during the imaging process to produce images.
  • the field of imaging can contain a posture assistance device such as a table, bed, chair, or other device intended to bear all or some of the subject's weight and to provide physical support to a specific type of posture.
  • the field of imaging can contain no such devices if the subject can be situated directly onto the floor and/or the motion control device and does not require the use of an additional device to bear weight and/or support specific postures.
  • the “butterfly” motion control device occupies part or all of the field of imaging and is physically connected and supported either by resting on the floor itself, or by being physically and immovably attached to the imaging equipment or to one of the above-mentioned posture-assistance devices within the field of imaging. All parts of the “butterfly” motion control device that are located within the field of imaging are constructed of materials that are either radiolucent in the case of use with videofluoroscopic and moving CT imaging systems, or alternatively compatible with MRI imaging in the case of a moving MRI imaging system, and therefore these parts of the “butterfly” motion control device do not obscure or produce artifacts on the diagnostic images.
  • the “butterfly” motion control device also has the capacity to have pillows, cushions, and/or restraining devices attached to it at points where these pillows, cushions, and/or restraining devices aid in improving the comfort of the subject and/or in producing the correct posture and/or motion required for the test.
  • the “butterfly” motion control device is attachable and detachable by the operator within the field of imaging.
  • other devices adapted and configured to control movement of a target patient anatomy can be used without departing from the scope of the disclosure.
  • the base 31 serves as the base for the horizontally configured motion control device 25 .
  • the device 25 can be adapted and configured such that all other sub-systems attach or engage the base in some way.
  • the base 31 can be optionally adapted and configured to detachably attach to either the floor, the imaging equipment, and/or a posture-assistance device 53 via the detachable anchoring device 55 . The operator can then remove the motion control device 25 from the field of imaging. Moving up from this base 31 , the next two physical sub-systems are the static platform 33 and the motion platform 35 .
  • the static platform 33 and the motion platform 35 are attached to each other by a suitable mechanism such as a hinging mechanism 73 .
  • the device is locked such that the flat surfaces of both the motion platform 35 and static platform 33 reside within the same plane, but that still allows for the free rotation of the motion platform 35 within a plane (e.g., plane a-c) of its subject-facing surface about a fixed axis (b) of rotation.
  • Other configurations or embodiments are possible that afford for the horizontal motion platform to move in a plane that is at an angle to the horizontal static platform.
  • the static platform 33 and motion platform 35 attach to the base 31 differently. See FIGS. 5A and 5B for a graphical description of how these sub-systems can be adapted to attach to each other.
  • the base 31 attaches to either the floor, imaging equipment, and/or posture assistance devices 53 via the detachable anchoring device 55 and also connects to the static platform 33 , which is held firm by a rigid immobilized static platform/member attachment mechanism 49 .
  • the base 31 and the motion platform 35 are attached by way of the motion platform attachment mechanism 51 that along with the hinging mechanism 73 allows for free rotation of the motion platform 35 within the plane of its flat subject-facing surface, while simultaneously allowing for the adjustment of the angle that this plane makes with the subject-facing surface of the static platform 33 , such that these two planes intersect along the line of the hinge which occupies the linear space defined by the edges of these two platforms that face and are adjacent to each other.
  • this angle is set to 180 degrees. In other “non-default” configurations, this angle can be adjusted to angles other than 180 degrees.
  • the radio-opaque protractor 74 is shown on FIG. 5A .
  • FIGS. 5C-E illustrate a configuration of a suitable device.
  • FIGS. 6A and 6B illustrate the functionality of the motion platform attachment mechanism 51 and the hinging mechanism 73.
  • FIG. 6A depicts the side view block diagram of attachment mechanisms and parts of the horizontally configured motion control device 25 in a “front up” configuration, where the hinging mechanism 73 connects the static platform 33 with the motion platform 35 along the edges of these platforms that face each other in such a way as to allow these two platforms to rotate about an axis c of the hinge.
  • the connection between the base 31 and the static platform 33 is held firm by the rigid immobilized static platform/member attachment mechanism 49 .
  • the motion platform attachment mechanism 51 between the base 31 and the motion platform 35 functions differently.
  • the motion platform attachment mechanism 51 is adapted and configured to lengthen within a plane (e.g., plane a-c) along an axis as well as the ability to change the angle of attachment to both the base 31 and the motion platform 35 such that the end of the motion platform 35 opposing the end adjacent to the static platform 33 can move up or down (along the b axis) so that the plane of the motion platform 35 is at an angle to the plane of the static platform 33 and that these two planes intersect along the line created by their common edge which is a space occupied by the hinging mechanism 73 .
  • a radiopaque protractor 74 (shown in FIG. 7B ) enables an assessment of movement of the spine during the imaging process.
  • FIG. 6B represents a side view block diagram of attachment mechanisms and parts of a horizontally configured motion control device 25 in a “front down” configuration.
  • the hinging mechanism 73 functions in the same way allowing for the static platform 33 and motion platform 35 to rotate about the axis c of the hinge such that it changes position from lying within a plane (e.g. c-a plane) to rotating about the c axis.
  • the rigid immobilized static platform/member attachment mechanism 49 in this configuration can be lengthened or shortened, but remains fixed at a right angle to the platform base 31 and the static platform 33.
  • the motion platform attachment mechanism 51 can be lengthened or shortened such that the angle of attachment to the motion platform 35 and the platform base 31 is no longer a right angle, but is instead any other angle dictated by the geometric configuration of the device indicated by the prescriber.
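  • For illustration only, the lengthening described above can be approximated with simple two-dimensional trigonometry; the dimensions, the planar simplification, and the function below are assumptions of the sketch, not measured values of any embodiment.

        import math

        def required_arm_length(tilt_deg: float,
                                r: float = 40.0,    # hinge-to-attachment distance along the motion platform (cm)
                                x_b: float = 30.0,  # horizontal distance from the hinge to the base anchor (cm)
                                h: float = 20.0) -> float:  # depth of the base anchor below the hinge (cm)
            """Length the attachment mechanism must reach when the motion platform
            is tilted tilt_deg away from the static platform's plane about the hinge."""
            theta = math.radians(tilt_deg)
            px, py = r * math.cos(theta), r * math.sin(theta)  # platform attachment point
            return math.hypot(px - x_b, py + h)

        for angle in (0, 10, -10):  # 0 corresponds to the "default" 180-degree configuration
            print(f"tilt {angle:+d} deg -> arm length {required_arm_length(angle):.1f} cm")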
  • the frame 31 connects to the base 53 of vertically configured motion control device 27 at a rigid base to frame connection mechanism 69 .
  • the frame 31 is the frame to which all other sub-systems attach in some way. Moving out from this frame 31 , the next two physical sub-systems are the static member 33 and the motion member 35 .
  • the frame 31 attaches to the static member 33 by way of a rigid immobilized static platform/member attachment mechanism 49 like the one described for FIGS. 5A and 5B with the added capability of providing cantilevered support for the weight of the static member 33 and any of the attached subject body parts.
  • the frame 31 attaches to the motion member 35 by way of a motion member attachment mechanism 85 that allows free rotation around a fixed axis within the same plane as that of the subject facing surface of the static member, and provides for the cantilevered support for the weight of the motion member 35 and the subject body parts that could be connected to it.
  • the static member 33 and motion member 35 are attached to each other by the vertically configured motion control device hinging mechanism 73 that, when in the “default” position represented in FIGS. 7A and 7B, is locked such that the flat surfaces of both the static member 33 and the motion member 35 reside within the same plane, but still allows for the free rotation of the motion member 35 around a fixed axis within that plane.
  • a radio-opaque protractor 74 adaptable for use with any device disclosed or contemplated to be part of the system is shown on FIG. 7B .
  • FIGS. 7C-E illustrate a configuration of the device.
  • FIGS. 8A, 8B, and 8C represent the side view block diagram of the vertically configured motion control device 27 in the “default”, “top out” and “top in” configurations, respectively.
  • the “default” configuration given in FIG. 8A is as described in the previous paragraph.
  • the attachment mechanism 85 connects the static member 33 to the motion member 35 and can lengthen or shorten along the b axis and/or change the angle of attachment to frame 31 and motion member 35 such that the top of the motion member 35 can move away from the frame 31 so that the plane of the motion member 35 is at an angle to the plane of the static member 33 and that these two planes intersect along the line created by their common edge, the space of which is occupied by the motion control device hinging mechanism 73. Furthermore, the motion member attachment mechanism 85 allows for the free rotation of the motion member 35 around a fixed axis within that plane while providing cantilevered support for the weight of the motion member and any of the subject's body parts that are connected to it.
  • the motion member attachment mechanism 85 illustrates its ability to lengthen and shorten along the b axis and change the angle of attachment to the connecting frame 31 and motion member 35 . Additionally, in this configuration, the static platform/member attachment mechanism 49 can lengthen along b axis, pushing the static member 33 away from the frame 31 while keeping the static member 33 in a non-changing orientation with respect to the frame 31 .
  • FIG. 9 is a simplified block diagram of the integrated imaging system.
  • the imaging component is any suitable imaging device, such as those disclosed above.
  • the tracking component can be any suitable device such as those disclosed above that facilitates tracking and movement of a target patient anatomy.
  • the imaging and tracking components are part of the same machine, but it is possible that the imaging and tracking components are separate units connected through either a wireless connection or a direct wire connection to each other or through an external imaging control unit.
  • the components of the tracking system and the imaging apparatus communicate information through the imaging control unit and the information is used to direct adjustments in real time to the imaging device.
  • the purpose of this invention is to provide the best diagnostic imaging information to the physician and patient and to reduce radiation exposure to the patient and/or physician during the testing session.
  • the first component of the integrated imaging system is the imaging component. While the following description refers to fluoroscopic imaging in general, it is contemplated that the advantages also apply to many types of diagnostic imaging systems. Further information regarding fluoroscopic devices is provided, for example, in U.S. Pat. No. 6,424,731 for Method of Positioning a Radiographic Device; and U.S. Pat. No. 5,873,826 entitled Fluoroscopy method and X-ray CT apparatus.
  • the imaging apparatus includes a central image control unit that controls, among other things, the intensity of the imaging source, the duration of the scan, the optimal frequency of images (frames per second), the position of the imaging source, and the optimal collimation.
  • An image storage unit acquires the information from the imaging session and stores the image or image sequence for review.
  • An image viewing station provides real-time viewing of the image or image sequence. The image viewing station also acts as the tracking apparatus monitor, where an operator would input information with respect to the area or areas of interest within the field of view of the image.
  • the components of the diagnostic tracking system are a user input component and a computer hardware and software component. While it is contemplated that the tracking display unit is also the image viewing station, those skilled in the art will appreciate that it can also be a separate viewing station independent of the image viewing station and which communicates with the central image control unit either through a secure wireless connection or through a direct wire connection.
  • the user input component provides the user with the capability of defining one or more areas of interest within the image based on the diagnostic test that is prescribed.
  • the computer hardware and software component calculates the location of the area of interest and records and communicates this information for several purposes.
  • One such purpose for the tracking information of the location of the area of interest is for the purpose of acquiring an imaging session that provides optimal diagnostic information and minimal exposure to radiation. This is accomplished through feedback loops to the central image control unit.
  • the information derived from the tracking apparatus directs changes with respect to the imaging session.
  • one such feedback loop connects real-time, or near real-time, tracking information with the positioning of the x-ray source so as to keep the area of interest within the field of view throughout the imaging session. This is accomplished through either a wireless connection or a direct wire connection that transmits information with respect to the position of the area of interest within the central area of the field of view of the image.
  • the information is received in real time by the central image control unit, processed, and the output is a continuous adjustment to the location of the imaging source with respect to the patient so as to keep the area of interest centered within the field of view throughout the image testing session.
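  • A minimal sketch of such a loop, assuming a simple proportional correction with a small dead band (the gain, dead band, and pixel coordinates are illustrative assumptions, not prescribed values), is as follows.

        def centering_adjustment(roi_center, field_center, gain=0.5, deadband_px=5.0):
            """Return a (dx, dy) correction, in detector pixels, for the source/detector
            positioning system; offsets inside the dead band are ignored."""
            dx = roi_center[0] - field_center[0]
            dy = roi_center[1] - field_center[1]
            if abs(dx) < deadband_px and abs(dy) < deadband_px:
                return (0.0, 0.0)
            return (gain * dx, gain * dy)

        # Example: the tracked area of interest drifted 40 px right and 12 px down.
        print(centering_adjustment((552.0, 396.0), (512.0, 384.0)))  # -> (20.0, 6.0)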
  • Another feedback loop would use the tracking information to automatically and continuously, or semi-automatically if desired by a user, adjust the collimators so as to direct the radiation to the area of interest and at the same time decrease radiation exposure to the patient.
  • the tracking information transmits information with respect to the location of the area of interest within the field of view of the image to the image control unit through a wireless or a direct wired connection.
  • the image control unit processes the information and automatically and/or continuously adjusts the image settings with respect to the image apparatus collimators.
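  • One way to picture the collimation adjustment, purely as an assumed sketch, is to fit the aperture to the bounding box of the tracked areas of interest plus a safety margin; the field size and margin below are example values only.

        def collimator_aperture(roi_boxes, margin=10.0, field=(0.0, 0.0, 1024.0, 1024.0)):
            """roi_boxes: list of (x_min, y_min, x_max, y_max) in detector pixels.
            Returns the aperture rectangle, clamped to the full imaging field."""
            x0 = min(b[0] for b in roi_boxes) - margin
            y0 = min(b[1] for b in roi_boxes) - margin
            x1 = max(b[2] for b in roi_boxes) + margin
            y1 = max(b[3] for b in roi_boxes) + margin
            return (max(x0, field[0]), max(y0, field[1]),
                    min(x1, field[2]), min(y1, field[3]))

        # Two tracked vertebral bodies:
        print(collimator_aperture([(400, 300, 520, 380), (410, 420, 530, 500)]))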
  • Another feedback loop would use the tracking information to adjust the intensity of the radiation so as to produce an image that is sufficiently defined for the purposes of the diagnostic testing requirements.
  • the tracking software transmits information either wirelessly or through a direct wired connection to the image control unit.
  • the image control unit then processes the information and automatically and/or continuously adjusts the image settings with respect to intensity of the radiation source.
  • One example of the image source settings that might be adjusted based on the imported information could be the fluoroscopic kV and mA. These types of adjustments might change the contrast levels in the image or reduce noise associated with radiation imaging.
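  • The following fragment is a hedged sketch of such an adjustment, nudging kV and mA toward a target mean brightness in the area of interest; the target, step sizes, and limits are assumed example values, not clinical recommendations.

        def adjust_exposure(mean_roi_brightness, kv, ma, target=128.0, tolerance=10.0):
            """Return updated (kv, ma) given the mean grey level (0-255) in the ROI."""
            if mean_roi_brightness < target - tolerance:    # image too dark or noisy
                kv = min(kv + 2.0, 120.0)
                ma = min(ma + 0.2, 5.0)
            elif mean_roi_brightness > target + tolerance:  # image over-exposed
                kv = max(kv - 2.0, 50.0)
                ma = max(ma - 0.2, 0.5)
            return kv, ma

        print(adjust_exposure(mean_roi_brightness=96.0, kv=75.0, ma=2.0))  # -> (77.0, 2.2)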
  • Another feedback loop would transmit the tracking information to the image control unit so as to provide information with respect to the optimal distance of the image intensifier from the patient.
  • the tracking information would be transmitted either wirelessly or through a direct wire connection.
  • the imported information would be processed by the image control unit and direct continuous adjustments to the image intensifier in real time for the purpose of creating the optimal imaging session for the prescribed diagnostic testing session.
  • Another feedback loop would transmit the tracking information to the image control unit so as to provide information with respect to the optimal fluoroscopic exposure rate at the image intensifier.
  • the tracking information would be transmitted either wirelessly or through a direct wire connection.
  • the imported information would be processed by the image control unit and used to adjust the fluoroscopic exposure rate at the image intensifier continuously and in real time.
  • the rate of exposure with respect to frames per second might depend on the rate at which the area or areas of interest are moving during the testing session.
  • the system and/or one or more feedback loops can be adapted and configured to automatically repeat the most recent instructions after a lag of a set amount of time from providing the image data for analysis.
  • If a feedback loop has been processed and a period of n seconds then passes without new instructions, the prior setting will remain in place.
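  • As an illustration under stated assumptions (the speed thresholds and frame rates below are invented example numbers), the exposure rate can be tied to how quickly the tracked anatomy is moving, and the most recent setting simply remains in force when no fresh feedback result is available.

        def choose_frame_rate(roi_speed_px_per_s, low=2.0, high=15.0):
            """Map the tracked anatomy's speed to a fluoroscopic pulse rate (frames/s)."""
            if roi_speed_px_per_s < 20.0:
                return low
            if roi_speed_px_per_s > 200.0:
                return high
            frac = (roi_speed_px_per_s - 20.0) / 180.0   # linear ramp between thresholds
            return low + frac * (high - low)

        def setting_to_apply(new_setting, last_setting):
            """A fresh feedback result replaces the current setting; otherwise, even after
            the n-second processing lag, the prior setting is simply repeated."""
            return new_setting if new_setting is not None else last_setting

        print(choose_frame_rate(110.0))  # -> 8.5 frames per second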
  • A process by which this disclosure can be implemented is shown in FIG. 11.
  • the process begins with the patient being positioned in the starting position for the prescribed imaging test.
  • a spot image is acquired.
  • the operator or physician determines whether the spot image captures the field of view necessary for the prescribed imaging test. If the patient is not in the correct position, the operator or physician will reposition the patient and repeat the process.
  • Once the area of interest for the prescribed imaging test is within the field of view of the image, the operator or physician will manually define the area of interest. For example, this could be defining one or more vertebrae of the spine for a flexion extension testing session.
  • the imaging session begins. During the imaging session, the tracking apparatus tracks the area or areas of interest within the image in real time.
  • the information with respect to image quality and position of the area or areas of interest are communicated from the tracking apparatus to the central imaging control unit.
  • the central imaging control unit processes the information in real time, and automatically and continuously adjusts the imaging apparatus so as to maintain optimal imaging settings for the prescribed testing session.
  • the image results are displayed as a video of the imaging session, and the quantitative diagnostic information is also displayed. Finally, the information is stored and can be recovered for future reference.
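  • The overall flow can be summarized in a compressed, assumption-laden sketch; the helper names (acquire_spot_image, track_roi, and so on) and the object interfaces are hypothetical placeholders, not components defined by this disclosure.

        def run_prescribed_imaging_test(control_unit, tracker, display, storage):
            # Position the patient and confirm the spot image covers the required field of view.
            while True:
                spot = control_unit.acquire_spot_image()
                if control_unit.operator_confirms_field_of_view(spot):
                    break
                control_unit.reposition_patient()

            # The operator manually defines the area of interest (e.g. one or more vertebrae).
            roi = control_unit.operator_defines_area_of_interest(spot)

            frames, measurements = [], []
            for frame in control_unit.imaging_session():
                roi = tracker.track_roi(frame, roi)               # real-time tracking
                control_unit.apply_feedback(roi, frame.quality)   # adjust source, collimation, etc.
                frames.append(frame)
                measurements.append(tracker.quantify(roi))

            display.show_video(frames)
            display.show_quantitative_results(measurements)
            storage.save(frames, measurements)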
  • this apparatus can incorporate the use of many different external devices that might be used in conjunction with a diagnostic imaging session. For example, if an apparatus that controls the motion of the patient during the testing session is used, the information from that device will also communicate with the central imaging control unit and implement the same feedback loops as described above.
  • the motion apparatus can be used in conjunction with the imaging system as a means of standardizing or supporting patient motion during the imaging session.
  • the motion apparatus is comprised of a patient motion control unit that interacts and moves a patient undergoing an imaging session in a controlled, predetermined motion path. It is contemplated that the motion apparatus communicates information with the imaging device with respect to the position of the table and/or the patient on the table during imaging.
  • the motion apparatus can adjust based on the position of the imaging device.
  • the motion table may adjust for example to maintain the same image field of view throughout the motion testing session. It is possible for each component of the motion apparatus to communicate or receive information through a central processing unit.
  • Functional tests of a target subject anatomy can be performed in a manner that minimizes the radiation dose involved in an imaging procedure based on real-time or near real-time feedback.
  • a patient would be positioned on the articulating patient handling device and prepared for an imaging study.
  • the imaging study would involve the capturing of images of the lumbar spine with a standard hospital fluoroscope, which is capable of capturing moving x-ray type video images of the lumbar spine.
  • the fluoroscope would then begin recording images as the patient handling device effects a controlled movement of the subject during the imaging session.
  • the field of imaging for standard fluoroscopes is typically a 9 or a 12 inch circle.
  • the vertebra For a lumbar vertebrae of interest the vertebra only occupies a small subset of this imaging field—on the order of 15-30% of the imaging field—however the entire imaging field is being irradiated and imaged.
  • the collimator would then be placed on the X-ray generator, and would be able to adjust the shape and trajectory of the collimation of the X-ray beam according to data received from the tracking system to prevent regions other than those of interest (i.e. the lumbar vertebral bodies) from being irradiated and imaged.
  • the data capture, analysis and feedback loop would then reduce the overall radiation dose to the patient proportional to the proportion of the beam that is being collimated.
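  • As a back-of-the-envelope illustration only, and assuming dose scales roughly with the irradiated area, collimating down to a region that occupies 15-30% of the field would spare on the order of 70-85% of the original field and dose.

        def fractional_dose_reduction(roi_fraction_of_field: float) -> float:
            # Assumes dose is proportional to the irradiated area.
            return 1.0 - roi_fraction_of_field

        for f in (0.15, 0.30):
            print(f"ROI occupies {f:.0%} of field -> ~{fractional_dose_reduction(f):.0%} dose reduction")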
  • Yet another example provides an improved functional test of a target anatomy that allows smaller image intensifiers to be used.
  • standard hospital fluoroscopes have a 9 or a 12 inch image intensifier, and therefore a corresponding 9 or 12 inch field of view.
  • superior lumbar vertebrae such as L1 or L2 move out of the 9 or 12 inch field of view as a patient approaches maximum lumbar bending angles.
  • Real-time or near real time adjustments to the anatomy contained within the field of view would make it possible to prevent or minimize the target anatomy from exiting the field of view during imaging through a movement sequence.
  • Two different mechanisms can be provided to prevent or eliminate movement of the target anatomy from the field of view.
  • the present invention provides for a feedback loop from the real time tracking system to either the positioning system for the fluoroscopy or the positioning system on which the articulating patient handling device rests.
  • By making adjustments to either the position of the image intensifier and/or to the articulating patient handling device assembly it would be possible to maintain all anatomy of interest within the field of view at all points during the movement and prevent any anatomy from exiting the field of view.
  • the effect of this is that less expensive fluoroscopy systems with smaller image intensifiers would be suitable for conducting imaging of, for example, the lumbar spine during controlled lumbar bending.
  • a problem associated with functional test of the spine is variability.
  • Clinical studies have shown that there can be a wide degree of variability in measurement of in vivo lumbar vertebral motion.
  • One system to reduce the variability is to standardize the bending angle to which subjects bend during imaging.
  • the total amount of lumbar bending that is occurring can be measured by measuring the angulation between L1, the uppermost (or most superior) lumbar vertebra, and S1, the uppermost sacral vertebra immediately below the lumbar spine. This overall L1-S1 angulation is measured by comparing two X-ray or fluoroscopic images of a subject taken from two different positions.
  • a subject might perform a gross lumbar bend and achieve 60 degrees of gross lumbar bending, but the L1-S1 angulation might only be 45 degrees. The difference is attributable to mechanical slack in other anatomy involved in the motion, such as the hips, thorax, arms, shoulders, etc.
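  • The arithmetic behind that example can be illustrated as follows; the orientation values are invented numbers and the two-dimensional simplification is an assumption of the sketch.

        def l1_s1_angulation(l1_orientation_deg: float, s1_orientation_deg: float) -> float:
            """Angle between the measured L1 and S1 orientations in a single image."""
            return l1_orientation_deg - s1_orientation_deg

        def lumbar_bend_achieved(neutral_angulation: float, bent_angulation: float) -> float:
            """Change in L1-S1 angulation between the neutral and bent positions."""
            return bent_angulation - neutral_angulation

        neutral = l1_s1_angulation(l1_orientation_deg=5.0, s1_orientation_deg=40.0)   # -35 deg
        bent = l1_s1_angulation(l1_orientation_deg=50.0, s1_orientation_deg=40.0)     #  10 deg
        achieved = lumbar_bend_achieved(neutral, bent)                                 #  45 deg
        print(f"L1-S1 change: {achieved:.0f} deg; slack vs a 60 deg gross bend: {60 - achieved:.0f} deg")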
  • By standardizing the overall L1-S1 angulation of a subject instead of the gross bending angle of the motion control device, and by taking measurements of the motion of individual vertebral bodies at standardized L1-S1 angles as opposed to standardized gross bending angles, it is possible to further reduce intervertebral angulation measurement variability.
  • an advantage afforded by the present invention would be to have subjects bend to a standardized L1-S1 angle, letting the overall gross bending angle vary, as opposed to bending subjects to a standardized gross bending angle and letting the L1-S1 angle vary.
  • the real-time or near real-time feedback loop from the tracking system to the articulation control system of the patient handling device enables the device to discontinue imaging when an actual patient bending angle has been achieved.
  • Such a feedback loop would allow a diagnostician to select a range of motion and then for the subject to be guided through a standard bend that would continue until a selected or specified L1-S1 angle was achieved.
  • the real time tracking system would provide the means of signaling to the articulation control system that the subject has achieved a specific L1-S1 angle, and therefore that bending should be stopped and reversed in the opposite direction to return the subject to a neutral position. Such a system would provide the capability to produce lower variability intervertebral angulation measurements.
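  • A hedged sketch of that signaling is given below; the controller interface (stop_bend, return_to_neutral, continue_bend) is a hypothetical placeholder rather than a defined component of the system.

        def articulation_step(current_l1_s1_deg, target_l1_s1_deg, controller, tolerance_deg=1.0):
            """Advance the controlled bend until the selected L1-S1 angle is reached,
            then stop and reverse toward the neutral position."""
            if current_l1_s1_deg >= target_l1_s1_deg - tolerance_deg:
                controller.stop_bend()
                controller.return_to_neutral()
                return "reversing"
            controller.continue_bend()
            return "bending"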

Abstract

An integrated imaging system for creating an optimal imaging session by importing information in real time from several sources over a network and using that information to automatically and continuously adjust the parameters of the imaging session for the prescribed testing session.

Description

    CROSS-REFERENCE
  • This application is a divisional application of U.S. patent application Ser. No. 15/007,508 filed Jan. 27, 2016, now U.S. Pat. No. 9,554,752 issued Jan. 31, 2017, which is a continuation of U.S. patent application Ser. No. 14/828,077, now U.S. Pat. No. 9,277,879 B2 issued Mar. 8, 2016, which is a divisional application of U.S. patent application Ser. No. 13/497,386 filed on Jul. 9, 2012, under 35 USC §371, now U.S. Pat. No. 9,138,163 B2 issued Sep. 22, 2015, which claims the benefit of International Application PCT/US2010/050210 filed on Sep. 24, 2010, under 35 USC §365, which claims the benefit of U.S. Provisional Application No. 61/245,984 filed Sep. 25, 2009, which applications are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • One of the most prevalent joint problems is back pain, particularly in the “small of the back” or lumbosacral (L4-S1) region. In many cases, the pain severely limits a person's functional ability and quality of life. Such pain can result from a variety of spinal pathologies. Through disease or injury, the vertebral bodies, intervertebral discs, laminae, spinous process, articular processes, or facets of one or more spinal vertebrae can become damaged, such that the vertebrae no longer articulate or properly align with each other. This can result in an undesired anatomy, loss of mobility, and pain or discomfort. Duke University Medical Center researchers found that patients suffering from back pain in the United States consume more than $90 billion annually in health care expenses, with approximately $26 billion being directly attributable to treatment. Additionally, there is a substantial impact on the productivity of workers as a result of lost work days. Similar trends have also been observed in the United Kingdom and other countries. As a result of this problem, increased funding is being applied toward developing better and less invasive orthopedic intervention devices and procedures.
  • Over the years the increased funding has led to the development of various orthopedic interventions. These include interventions suitable for fixing the spine and/or sacral bone adjacent the vertebra, as well as attaching devices used for fixation, including: U.S. Pat. No. 6,290,703, to Ganem, for Device for Fixing the Sacral Bone to Adjacent Vertebrae During Osteosynthesis of the Backbone; U.S. Pat. No. 6,547,790, to Harkey, III, et al., for Orthopaedic Rod/Plate Locking Mechanisms and Surgical Methods; U.S. Pat. No. 6,074,391, to Metz-Stavenhagen, et al., for Receiving Part for a Retaining Component of a Vertebral Column Implant; U.S. Pat. No. 5,891,145, to Morrison, et al., for Multi-Axial Screw; U.S. Pat. No. 6,090,111, to Nichols, for Device for Securing Spinal Rods; U.S. Pat. No. 6,451,021, to Ralph, et al., for Polyaxial Pedicle Screw Having a Rotating Locking Element; U.S. Pat. No. 5,683,392, to Richelsoph, et al., for Multi-Planar Locking Mechanism for Bone Fixation; U.S. Pat. No. 5,863,293, to Richelsoph, for Spinal Implant Fixation Assembly; U.S. Pat. No. 5,964,760, to Richelsoph, for Spinal Implant Fixation Assembly; U.S. Pat. No. 6,010,503, to Richelsoph, et al., for Locking Mechanism; U.S. Pat. No. 6,019,759, to Rogozinski, for Multi-Directional Fasteners or Attachment Devices for Spinal Implant Elements; U.S. Pat. No. 6,540,749, to Schafer, et al., for Bone Screw; U.S. Pat. No. 6,077,262, to Schlapfer, for Posterior Spinal Implant; U.S. Pat. No. 6,248,105, to Schlapfer, et al., for Device for Connecting a Longitudinal Support with a Pedicle Screw; U.S. Pat. No. 6,524,315, to Selvitelli, et al., for Orthopaedic Rod/Plate Locking Mechanism; U.S. Pat. No. 5,797,911, to Sherman, et al., for Multi-Axial Bone Screw Assembly; U.S. Pat. No. 5,879,350, to Sherman, et al., for Multi-Axial Bone Screw Assembly; U.S. Pat. No. 5,885,285, to Simonson, For Spinal Implant Connection Assembly; U.S. Pat. No. 5,643,263, to Simonson for Spinal Implant Connection Assembly; U.S. Pat. No. 6,565,565, to Yuan, et al., for Device for Securing Spinal Rods; U.S. Pat. No. 5,725,527, to Biederman, et al., for Anchoring Member; U.S. Pat. No. 6,471,705, to Biederman, et al., for Bone Screw; U.S. Pat. No. 5,575,792, to Errico, et al., for Extending Hook and Polyaxial Coupling Element Device for Use with Top Loading Rod Fixation Devices; U.S. Pat. No. 5,688,274, to Errico, et al., for Spinal Implant Device having a Single Central Rod and Claw Hooks; U.S. Pat. No. 5,690,630, to Errico, et al., for Polyaxial Pedicle Screw; U.S. Pat. No. 6,022,350, to Ganem, for Bone Fixing Device, in Particular for Fixing to the Sacrum during Osteosynthesis of the Backbone; U.S. Pat. No. 4,805,602, to Puno, et al., for Transpedicular Screw and Rod System; U.S. Pat. No. 5,474,555, to Puno, et al., for Spinal Implant System; U.S. Pat. No. 4,611,581, to Steffee, for Apparatus for Straightening Spinal Columns; U.S. Pat. No. 5,129,900, to Asher, et al., for Spinal Column Retaining Method and Apparatus; U.S. Pat. No. 5,741,255, to Krag, et al., for Spinal Column Retaining Apparatus; U.S. Pat. No. 6,132,430, to Wagner, for Spinal Fixation System; U.S. Patent No. 7,780,703, and to Yuan, et al., for Device for Securing Spinal Rods.
  • Another type of orthopedic intervention is the spinal treatment decompressive laminectomy. Where spinal stenosis (or other spinal pathology) results in a narrowing of the spinal canal and/or the intervertebral foramen (through which the spinal nerves exit the spine), and neural impingement, compression and/or pain results, the tissue(s) (hard and/or soft tissues) causing the narrowing may need to be resected and/or removed. A procedure which involves excision of part or all of the laminae and other tissues to relieve compression of nerves is called a decompressive laminectomy. See, for example, U.S. Pat. No. 5,019,081, to Watanabe, for Laminectomy Surgical Process; U.S. Pat. No. 5,000,165, to Watanabe, for Lumbar Spine Rod Fixation System; and U.S. Pat. No. 4,210,317, to Spann, et al., for Apparatus for Supporting and Positioning the Arm and Shoulder. Depending upon the extent of the decompression, the removal of support structures such as the facet joints and/or connective tissues (either because these tissues are connected to removed structures or are resected to access the surgical site) may result in instability of the spine, necessitating some form of supplemental support such as spinal fusion, discussed above.
  • Other orthopedic interventional techniques and processes have also been developed to treat various spinal and joint pathologies. For example, U.S. Pat. No. 6,726,691 to Osorio for Methods and devices for treating fractured and/or diseased bone; U.S. Pat. No. 7,155,307 to Scribner for Systems and methods for placing materials into bone; U.S. 7,241,303 to Reiss for Devices and methods using an expandable body with internal restraint for compressing cancellous bone; and U.S. Patent Pubs. 2005/0240193 to Layne for Devices for creating voids in interior body regions and related methods; 2006/0149136 to Seto for Elongating balloon device and method for soft tissue expansion; 2007/0067034 to Chirico for Implantable Devices and Methods for Treating Micro-Architecture Deterioration of Bone Tissue; 2006/0264952 to Nelson for Methods of Using Minimally Invasive Actuable Bone Fixation Devices.
  • Health care providers rely on an understanding of joint anatomy and mechanics when evaluating a subject's suspected joint problem and/or biomechanical performance issue. Understanding anatomy and joint biomechanics assists in the diagnosis and evaluation of a subject for an orthopedic intervention. However, currently available diagnostic tools are limited in the level of detail and analysis that can be achieved. Typically, when treating joint problems, the intention is to address a specific structural or mechanical problem within the joint. For example, a surgeon might prescribe a spinal fusion procedure to physically immobilize the vertebra of a subject suffering from vertebral instability, or a physical therapist might prescribe exercises to strengthen a specific tendon or muscle that is responsible for a joint problem, etc.
  • It follows, therefore, that the extent to which a specific treatable joint defect can be identified and optimally treated directly impacts the success of any treatment protocol. Currently available orthopedic diagnostic methods are capable of detecting a limited number of specific and treatable defects. These techniques include X-rays, MRI, discography, and physical exams of the patient. In addition, spinal kinematic studies such as flexion/extension X-rays are used to specifically detect whether or not a joint has dysfunctional motion. These methods have become widely available and broadly adopted into the practice of treating joint problems and addressing joint performance issues. However, currently available diagnostic techniques provide measurement data that is imprecise and often inconclusive, which results in an inability to detect many types of pathologies or accurately assess pathologies that might be considered borderline. As a result, a significant number of patients having joint problems remain undiagnosed and untreated using current techniques, or, worse, are misdiagnosed and mistreated due to the poor clinical efficacy of these techniques.
  • For example, currently available techniques for conducting spinal kinematic studies are often unable to determine whether a joint dysfunction is a result of the internal joint structure per se, or whether the dysfunction is a result of, or significantly impacted by, the surrounding muscular tissue. Additionally, there are no reliable techniques for identifying soft tissue injury. Muscle guarding is a well established concept that is hypothesized to be highly prevalent among sufferers of joint pain, specifically that of the neck and back. In muscle guarding, a subject responds to chronic pain by immobilizing the painful area through involuntary muscle involvement. The ability to isolate different muscle groups is desirable to determine which muscle group or combination of groups, if any, could be contributing to, or responsible for, any joint dysfunction.
  • Additionally, the level of entrenchment of muscle guarding behavior cannot currently be determined. With respect to treatment decisions, the operative question in determining the level of “entrenchment” of any observed muscle guarding is to determine whether the muscle guarding behavior is one which conservative methods of therapy could address through non-surgical therapy, or alternatively whether the muscle guarding behavior is so “entrenched” that such efforts would be futile and surgery should be considered.
  • In some instances, joint dysfunctions may not always present themselves in the movements traditionally measured during spinal kinematic studies such as flexion-extension and side-bending in either “full” non-weight-bearing or “full” weight-bearing planes of movement, which correspond to lying down and standing up postures respectively. Certain painful movements occur during joint rotation when the plane of rotation is somewhere between these two postures. Certain other painful movements only occur when the subject is rotating his or her spine while in a bent posture. In the case of vertebral motion in full weight-bearing postures, gravitational forces are relatively evenly distributed across the surface area of the vertebrae. However in postures where the subject is standing with his/her spine bent, gravitational forces are concentrated on the sections of the vertebrae located toward the direction of the bend. Detecting motion dysfunctions that occur only when in a standing bent posture requires the replication of joint motion in that specific bent posture in a controlled, repeatable, and measurable manner during examination.
  • Further, assuming that a system of measuring the surface motion of joints and the motion between internal joint structures that accounts for various types of muscle involvements would be possible, there would be a need for investigational data from controlled clinical trials to be collected across a broad population of subjects to afford for comparative analyses between subjects. Such a comparative analysis across a broad population of subjects would be necessary for the purpose of defining “normal” and “unhealthy” ranges of such measurements, which would in turn form the basis for the diagnostic interpretation of such measurements.
  • There have been significant technological innovations to the field of orthopedic interventions over the last few decades, specifically with the use of prosthetic and therapeutic devices to correct mechanical and structural defects of the bones and joints and to restore proper joint function. There have also been significant advances in the application of chiropractic and physical therapy approaches to correct muscle-, ligament-, and tendon-related defects. There has not however, been a corresponding improvement in the diagnostic methods used to identify proper candidates for these interventions. As a result, the potential impact and utility of the improvements in orthopedic intervention has been limited.
  • Imaging is the cornerstone of all modern orthopedic diagnostics. The vast majority of diagnostic performance innovations have focused on static images: a small number of images of a joint structure taken at different points in the joint's range of motion, with the subject remaining still in each position while the image is captured. Static imaging studies have focused mainly on detecting structural changes to the bones and other internal joint structures. An example of the diagnostic application of static imaging studies is the detection of spinal disc degeneration by the use of plain X-rays, MR images and discograms. However, these applications yield poor diagnostic performance, with an unacceptably high proportion of testing events yielding either inconclusive or false positive/false negative diagnostic results (Lawrence, J. S. (1969) Annals of Rheumatic Diseases 28: 121-37; Waddell, G. (1998) The Back Pain Revolution. Edinburgh, Churchill Livingstone Ch2 p22; Carragee et al. (2006) Spine 31(5): 505-509; McGregor et al. (1998) J Bone Joint Surg (Br) 80-B: 1009-1013; Fujiwara et al. (2000(a)) Journal of Spinal Disorders 13: 444-50).
  • Purely qualitative methods for visualizing joint motion have been available for some time using cine-radiography (Jones, M. D. (1962) Archives of Surgery 85: 974-81). More recently, computer edge extraction of vertebral images from fluoroscopy has been used to improve this visualization for use in animations (Zheng et al. (2003) Medical Engineering and Physics 25: 171-179). These references do not, however, provide for any form of measurement or identification of objectively defined motion abnormalities, and are therefore of very limited diagnostic value beyond the detection of grossly and visibly obvious abnormalities that would be detectable using static image analysis methods. Without any quantitative or objective measurement parameters defined, it is impossible to utilize such approaches in comparative analyses across wide populations of subjects, which is required for the purpose of producing definitive diagnostic interpretations of the results as being either "normal" or "unhealthy". Further, there have been no diagnostically useful validations of qualitative motion patterns that are generally absent in non-sufferers but present in subjects suffering from known and specific joint functional derangements or symptoms, or vice versa.
  • A method for determining vertebral body positions using skin markers was developed (Bryant (1989) Spine 14(3): 258-65), but could only measure joint motion at skin positions and could not measure the motion of structures within the joint. There have been many examples of skin-marker-based spine motion measurement, all of which have been similarly flawed.
  • Methods have been developed to measure changes to the position of vertebrae under different loads in dead subjects, whose removed spines were fused and had markers inserted into the vertebrae (Esses et al. (1996) Spine 21(6): 676-84). The motion of these markers was then measured in the presence of different kinds of loads on the vertebrae. This method is, however, inherently impractical for clinical diagnostic use. Other methods with living subjects have been able to obtain a high degree of accuracy in measuring the motion of internal joint structures by placing internal markers on the bones of subjects and digitally marking sets of static images (Johnsson et al. (1990) Spine 15: 347-50), a technique known as roentgen stereophotogrammetry analysis (RSA). However RSA requires the surgical implantation of these markers into subjects' internal joint structures, requires the use of two radiographic units simultaneously, and requires a highly complicated calibration process for every single test, and therefore is too invasive and too cumbersome a process for practicable clinical application.
  • Cine-radiography of uncontrolled weight-bearing motion (Harada et al. (2000) Spine 25: 1932-7; Takayanagi et al. (2001) Spine 26(17): 1858-1865) has been used to provide a set of static images to which digital markers have been attached and transformed to give quantitative measurement of joint motion. Similar measurement of joint motion has been achieved using videofluoroscopy (Breen et al. (1989) Journal of Biomedical Engineering 11: 224-8; Cholewicki et al. (1991) Clinical Biomechanics 6: 73-8; Breen et al. (1993) European Journal of Physical Medicine and Rehabilitation 3(5): 182-90; Brydges et al. 1993). This method has also been used to study the effects of weightlifting on joint motion (Cholewicki, J. and S. M. McGill (1992) Journal of Biomechanics 25(1): 17-28). The prior art using this method involves a manual process in which internal joint structures are marked by hand with digital landmarks on digital image files of consecutive frames of videofluoroscopy recordings of a subject's joint motion. A computer then automatically determines the frame-to-frame displacement between such digital landmarks to derive quantitative measurements of the motion of joint structures (Lee et al. (2002) Spine 27(8): E215-20). Even more recently, this approach has been accomplished using an automatic registration process (Wong et al. (2006) Spine 31(4): 414-419) that eliminates the manual marking process and thus reduces the laboriousness of the previous processes. However, both of these methods, as well as all of the other methods mentioned in this paragraph, studied the motion of joints based on the imaging of uncontrolled, weight-bearing body motion.
  • Using uncontrolled, weight-bearing motion to derive quantitative measurements of joint motion confounds the diagnostic interpretation of such measurements so as to render them diagnostically useless. The diagnostic interpretation of such measurements would normally be based on a comparative analysis of joint motion measurements across a wide population of subjects, and would strive to identify statistically significant differences in these measurements between "normal" and "unhealthy" subjects, such that any given subject can be classified as "normal" or "unhealthy" based on that subject's joint motion measurement values. For such purposes, it is necessary to reduce the background variability of measurements across tested subjects as much as possible, so that any observed difference between "normal" and "unhealthy" subjects can be definitively attributed to a specific condition. Not controlling the motion that is being studied introduces variability into these comparative analyses due to differences that exist across testing subjects with respect to each subject's individual range of motion, symmetry of motion, and regularity of motion. These differences affect the joint motion of each subject differently, and collectively serve to create wide variability among joint motion measurements across subjects. Controlling for these factors by ensuring a consistent, regular, and symmetric body part motion during diagnostic testing serves to minimize the effects of these factors on a subject's relevant joint motion measurements, thereby reducing the variability of such measurements across subjects and therefore increasing the likelihood that such measurements will yield useful diagnostic results.
  • In addition to failing to control motion during testing, not accounting for the involvement and effects of muscles that are acting when a subject moves under their own muscular force while in a weight-bearing stance further adds to this variability by introducing inherently variable factors such as the subject's muscle strength, level of pain, involuntary contraction of opposing muscle groups, and neuro-muscular co-ordination. Taken together, all of these sources of variability serve to confound diagnostic conclusions based on comparative analyses by making the ranges of "normal" and those of "abnormal" difficult to distinguish from one another in a statistically significant way. Such an inability to distinguish between "normal" and "unhealthy" subjects based on a specific diagnostic measurement renders such a measurement diagnostically useless, as has been the case heretofore in the prior art, which has focused on measurements of uncontrolled joint motion measured in subjects in weight-bearing postures and moving their joints through the power of their own muscles and in an uncontrolled fashion.
  • U.S. Pat. No. 7,000,271 discloses a tilting table capable of some movement to keep an iso-center at a fixed position. U.S. Pat. No. 7,343,635 describes a multi-articulated tilting table which positions and supports a subject during examination and treatment. U.S. Pat. No. 7,502,641 to Breen discloses a device for controlling joint motion and minimizing the effects of muscle involvement in the joint motion being studied. This device minimizes variability among joint motion measurements across wide populations of subjects. As a result, comparative analyses of such measurements can be performed to determine statistical differences between the motion of "normal" and "unhealthy" subjects which in turn can provide a basis for determining the statistical confidence with which any given subject could be considered "normal" or "unhealthy" based solely on joint motion measurements.
  • U.S. Pat. No. 5,505,208 to Toomin et al. discloses a method for measuring muscle dysfunction by collecting muscle activity measurements using electrodes in a pattern across a subject's back while having the subject perform a series of poses, where measurements are made at static periods within the movement. These electromyographical readings of "unhealthy" subjects were then compared to those of a "normal" population so as to identify those subjects with abnormal readings; however, the method does not provide a way to report the results as a degree of departure from an ideal reading, and instead can only indicate whether a reading is "abnormal". U.S. Pat. No. 6,280,395 added to this method for determining muscle dysfunction the ability to better normalize the data by employing a more accurate reading of the thickness of the adipose tissue and other general characteristics that might introduce variability into the readings, as well as the ability to quantify how abnormal a subject's electromyographical reading is as compared to a "normal" population.
  • Joint muscle activity has been evaluated using electromyography in combination with some type of method or device to track the surface motion of the joint. In one study, visual landmarks were used to help the subject more consistently reproduce a tested motion so as to standardize the joint motion and eliminate variability (Lariviere, C. 2000). However, visual landmarking methods do not yield as "standardized" a motion as can be achieved with motion that is mechanically controlled, and measurements of the motion of internal joint structures based on surface motion measurements are too variable to be of significant clinical utility.
  • Another study used electromyography in conjunction with a goniometer, a device that measures the surface motion of external body parts, so as to link the muscle activity signals with precise surface motion measurements (Kaigle et al. (1998) Journal of Spinal Disorders 11(2): 163-174). This method, however, does not take into consideration the motion of internal joint structures, such that a determination as to the specific cause of joint dysfunction cannot be made.
  • Electromyographic measurements taken during weight-bearing joint motion, with simultaneous recording of the motion of the body part using goniometers and also with simultaneous recordings of the motion of internal joint structures through the tracking of surgically-implanted metal markers, have been used to correlate muscle activity with the motion of joints and internal joint structures (Kaigle, supra). However, this approach studied joint motion that was uncontrolled and required an invasive surgical procedure to place the metal markers, and thus was neither useful nor feasible for clinical diagnostic application.
  • Electromyography has also been used in conjunction with a device that provides transient force perturbation so as to observe whether there is a difference between subjects with low back pain and those without low back pain to determine how their muscles respond to such a force. (Stokes, Fox et al. 2006) The objective was to determine whether there is an altered muscle activation pattern when using a ramped effort. This approach however does not address the issue of which discrete muscle group or groups might account for the difference between activation patterns in subjects with joint dysfunctions and those without. Furthermore, this method does not take into consideration the internal structural joint motions and thus provides an incomplete set of information upon which to draw diagnostic conclusions.
  • SUMMARY OF THE INVENTION
  • An imaging system is provided that comprises a tracking system and an imaging apparatus communicating information through real-time or near real-time feedback loops, and that applies continuous adjustments to the imaging environment during an imaging session based upon the information imported from the tracking system. The feedback can be configured to dynamically adjust the range of motion to correspond to an achieved patient motion instead of a motion of the patient movement device.
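  • The following is a minimal sketch, in Python, of how feedback could rescale a commanded range of motion toward the motion a patient actually achieved, rather than the nominal travel of the movement device. The function name, arguments, and the simple proportional update are illustrative assumptions and not the disclosed implementation.

def adjust_commanded_range(target_range_deg, achieved_range_deg, gain=0.5):
    """Return an updated commanded range based on the achieved patient motion.

    Hypothetical proportional rule: the next command is pulled partway toward
    what the tracked joint actually reached, so the feedback reflects patient
    motion rather than device motion.
    """
    shortfall = target_range_deg - achieved_range_deg
    return max(0.0, target_range_deg - gain * shortfall)

# Example: the platform was commanded to 40 degrees but the tracked joint
# reached only 30 degrees; the next pass is commanded to 35 degrees.
next_command = adjust_commanded_range(40.0, 30.0)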
  • The integrated imaging system integrates hardware and software components and incorporates a tracking system for producing precise diagnostic information and image information for the purposes of producing an optimal imaging session, and an imaging apparatus with a central control unit that communicates with each component and continuously adjusts so as to produce a most favorable imaging environment.
  • The integrated imaging system can be adapted and configured to import information about the testing session and adapt functional imaging settings based on the imported information. Those skilled in the art will appreciate that the system described herein can be applied to, or incorporated into, any imaging device available now or in the future.
  • The imaging system integrates a series of feedback loops that share information with respect to patient positioning and imaging quality and frequency. The information can be transmitted through either a direct wire-based electronic connection between the two or more components, or through a wireless connection. The information can be the type that is derived from computer programming or from operator or patient input, or from a combination of computer programmed information plus operator and/or patient input.
  • Methods, systems and devices register and track imaging information real-time or near real-time and provide a feedback mechanism which impacts further imaging. As a result of the feedback, patient exposure during imaging may be reduced and image capture may be enhanced.
  • An aspect of the disclosure is directed to a machine-readable medium that provides instructions which, when executed by a set of processors, cause the processors to provide instructions to at least one of a motion device and an imaging device, comprising: receiving information from at least one of the motion device and the imaging device during an imaging session; analyzing the received information; and instructing at least one of the motion device and the imaging device to change the imaging environment during the imaging session. In at least some aspects, the steps of receiving, analyzing and instructing are repeated a plurality of times during the imaging session. In other aspects, the step of instructing is performed real-time or near real-time. Real-time operation can, for example, be performed such that the analysis processes the data quickly enough that no data are excluded from the analysis. Additionally, real-time operation can be configured to receive data, process the data and respond within a time frame set by outside events, and in such a manner that no delay is perceived by an operator or patient. Near real-time operation might include, for example, a momentary lag time (seconds to minutes) within an imaging session while the information is processed and instructions are generated. The instruction can change an aspect of an imaging field and/or change a movement of a motion device. Suitable imaging devices include, but are not limited to, an X-ray tube and image intensifier with dosage control, and a magnetic resonance scanner. Additionally, a suitable motion device can be configured to further comprise a laterally moveable platform, such as a movable platform situated on a support which lies on an upper surface of the platform base. The machine-readable medium can further comprise a processing system. In at least some aspects the motion device is adapted to communicate motion information to the imaging device during use. Moreover, continuous adjustments can be made to an imaging environment, including, for example, changes to a range of motion of the motion device based on a selected target motion for a patient or based on a gross motion of a patient.
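  • The receive/analyze/instruct cycle described above could be sketched as follows, under assumed device interfaces; MotionDevice and ImagingDevice methods and the analyze() rule are hypothetical stand-ins, not the disclosed system.

import time

def run_imaging_session(motion_device, imaging_device, session_seconds=30.0):
    """Repeat the receive/analyze/instruct steps for the duration of a session."""
    end_time = time.monotonic() + session_seconds
    while time.monotonic() < end_time:
        # Receive information from the motion device and the imaging device.
        motion_info = motion_device.read_state()        # e.g. current platform angle
        image_info = imaging_device.read_frame_stats()  # e.g. dose and contrast

        # Analyze the received information.
        instruction = analyze(motion_info, image_info)

        # Instruct either device to change the imaging environment.
        if "platform_angle_deg" in instruction:
            motion_device.command_angle(instruction["platform_angle_deg"])
        if "dose" in instruction:
            imaging_device.set_dose(instruction["dose"])

def analyze(motion_info, image_info):
    """Hypothetical rule: reduce dose whenever image contrast is already high."""
    instruction = {}
    if image_info.get("contrast", 0.0) > 0.8:
        instruction["dose"] = image_info.get("dose", 1.0) * 0.9
    return instruction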
  • Another aspect of the disclosure is directed to an apparatus for use in a shared computer network capable of carrying real-time data streams, the apparatus comprising: means for transmitting data packets to a data destination in at least one real-time or near real-time data stream over the shared computer network, wherein each of the data packets contains instructions for controlling at least one of an imaging device and a motion control device during an imaging session. Instructions can be streamed over the shared computer network a plurality of times during the imaging session. Additionally, instructions can be configured to change an aspect of an imaging field and/or a movement of a motion device. Suitable imaging devices include, but are not limited to, an X-ray tube and image intensifier with dosage control and a magnetic resonance scanner. Additionally, the motion device can further comprise a laterally moveable platform, such as a movable platform situated on a support which lies on an upper surface of the platform base. A control arm can further be provided for driving movement of a moveable platform. Moreover, a processing system can be provided. The motion device can further be adapted to communicate motion information to the imaging device during use. Furthermore, continuous adjustments can be made to an imaging environment. A range of motion of the motion device can further be based on a selected target motion for a patient, or on a gross motion of a patient.
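  • One purely illustrative way to stream such instruction packets to a device over a shared network is with standard datagram sockets; the JSON payload, address, port, and UDP transport below are assumptions, not the claimed means.

import json
import socket

def stream_instruction(instruction, host="192.0.2.10", port=5005):
    """Send one instruction packet to a device on the shared network."""
    packet = json.dumps(instruction).encode("utf-8")
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(packet, (host, port))

# Example: one packet instructs the motion control device, the next instructs
# the imaging device, both during the same imaging session.
stream_instruction({"device": "motion", "platform_angle_deg": 12.5})
stream_instruction({"device": "imaging", "dose": 0.8})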
  • Still another aspect of the disclosure is directed to a system for measuring skeletal joint motion in a subject comprising: a) a motion device adapted and configured to continuously move a joint of the subject, the motion device comprising: a platform base, and a motion platform further comprising a static platform connected to an upper surface of the platform base, and a movable platform connected to at least one of the static platform or an upper surface of the platform base, wherein the static platform is adjacent the movable platform, and wherein movement of the movable platform is achieved in operation by a motor in communication with the moveable platform; b) an imaging device in communication with the motion device adapted and configured to obtain imaging data; and c) a computing system adapted and configured to analyze the obtained imaging data to generate an instruction and then communicate the instruction to at least one of the motion device and the imaging device. Suitable imaging devices include, but are not limited to, an X-ray tube and image intensifier with dosage control and a magnetic resonance scanner. Additionally, the platform can, for example, be a laterally moveable platform, such as a platform situated on a support which lies on the upper surface of the platform base. Additionally, a control arm can be provided for driving movement of the moveable platform. Moreover, a processing system can be provided. The motion device can be adapted to communicate motion information to the imaging device during use. The system can also provide continuous adjustments to the imaging environment. Suitable instructions include, for example, changes to an aspect of an imaging field and/or changes to a movement of the motion device. The range of motion of the motion device can be based on a selected target motion for a patient, or on a gross motion of a patient.
  • Another aspect of the disclosure is directed to a method for imaging skeletal structures in vivo comprising: i) positioning a subject on a motion device adapted and configured to move a joint during a use session; ii) imaging the subject positioned on the motion device during the use session with an imaging device; iii) collecting image data; iv) analyzing the collected image data; and v) communicating an instruction based on the analyzed image data to at least one of the motion device and the imaging device prior to acquiring a subsequent image. The step of communicating an instruction can, for example, include changing an aspect of an imaging field and/or changing a movement of a motion device. As will be appreciated by those skilled in the art, the step of communicating an instruction can include, for example, transmitting a new instruction which changes one or more settings, transmitting an instruction which maintains the most recent one or more settings, transmitting an instruction that repeats an earlier one or more instructions, or providing no change to the current instructions. Where no change instructions are provided, the system can be adapted and configured to automatically repeat the most recent instructions after a lag of a set amount of time from providing the image data for analysis, as illustrated in the sketch below. Additionally, suitable imaging devices include, but are not limited to, an X-ray tube and image intensifier with dosage control and a magnetic resonance scanner. Additionally, the motion device can further comprise a movable platform situated on a support which lies on an upper surface of a platform base. In some cases a calibration step is carried out prior to at least one of the method steps i) to v). Additionally, the relative motion of lumbar vertebrae L3 to L3, L3 to L4 and L4 to L5 can be tracked simultaneously or separately. Moreover, the method can be used for a diagnosis of a pseudoarthrosis in the subject, the method comprising analyzing the relative motion of skeletal structures in the subject. An additional step can be provided for presenting an output in graphical form.
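  • The different instruction outcomes described above (a new setting, maintaining the current settings, repeating an earlier instruction, or no change followed by an automatic repeat after a set lag) could be resolved as sketched below; the sentinel values, argument names, and lag are illustrative assumptions only.

import time

def resolve_instruction(instruction, current_settings, last_instruction,
                        last_image_time, repeat_lag_seconds=2.0):
    """Resolve one feedback instruction into the settings to apply next."""
    if instruction == "maintain":
        return current_settings            # keep the most recent settings
    if instruction == "repeat":
        return last_instruction            # repeat an earlier instruction
    if instruction is None:
        # No change instruction was provided: automatically repeat the most
        # recent instruction after a set lag from providing the image data.
        if time.monotonic() - last_image_time >= repeat_lag_seconds:
            return last_instruction
        return current_settings
    return instruction                     # a new instruction changes settings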
  • Yet another aspect of the disclosure includes a process for capturing data and controlling skeletal joint motion of a subject comprising: (a) providing an apparatus adapted and configured to selectively cause and control joint motion of the subject, having a base positioned in a first base plane, a fixable platform adapted and configured to engage the base at an attachment mechanism, the fixable platform having a first position in a first fixable platform plane and being fixably adjustable to a second position, a dynamic platform having a first position in a first dynamic platform plane, adjustable to a second position and selectively rotatable about an axis, and a coupling member adapted and configured to connect the fixable platform to the dynamic platform or the base; (b) positioning the subject in a first position such that a first body part of the subject is at least partially positioned adjacent the fixable platform, and a second body part of the subject is at least partially positioned adjacent the dynamic platform; (c) capturing, with a medical diagnostic device, first diagnostic data from the subject and the apparatus; (d) transmitting the first diagnostic data from the subject to a machine-readable medium; (e) analyzing the first diagnostic data; (f) generating an instruction from the analyzed first diagnostic data; (g) transmitting the instruction to the apparatus; (h) repositioning the apparatus such that the subject is placed in a second position different from the first position; and (i) capturing, with the medical diagnostic device, second diagnostic data from the subject and the apparatus in the second position. The data capturing steps of the process can further comprise using a medical diagnostic device selected from the group consisting of X-ray scanner, X-ray tube with image intensifier tube, magnetic resonance scanner, infrared camera, computed tomography scanner, ultrasound scanner, electromyography sensor unit, digital camera and camera. Moreover, steps (b) through (i) can be repeated a plurality of times during a single imaging session. Instructions can be provided that change an aspect of an imaging field and/or change a movement of a motion device.
  • INCORPORATION BY REFERENCE
  • All publications, patents, and patent applications mentioned in this specification are herein incorporated by reference to the same extent as if each individual publication, patent, or patent application was specifically and individually indicated to be incorporated by reference.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The novel features of the invention are set forth with particularity in the appended claims. A better understanding of the features and advantages of the present invention will be obtained by reference to the following detailed description that sets forth illustrative embodiments, in which the principles of the invention are utilized, and the accompanying drawings of which:
  • FIG. 1A is a diagram showing a representative example of a logic device through which an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 1B is a block diagram of an exemplary computing environment through which an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 1C is an illustrative architectural diagram showing some structure that can be employed by devices through which an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 2 is an exemplary diagram of a server in an implementation suitable for use in a system where an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 3 is an exemplary diagram of a master system in an implementation suitable for use in a system where an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIG. 4 is a block diagram showing the cooperation of exemplary components of a system suitable for use in a system where an imaging system can provide real-time or near real-time feedback information with respect to patient position to apply adjustments to the imaging environment to optimize imaging and data acquisition;
  • FIGS. 5A and 5B show side and top view block diagrams, respectively, of the preferred embodiment of the horizontally configured motion control device, consisting of two sub-systems and attachment mechanisms, in a "default" configuration, according to one embodiment of the present invention; FIGS. 5C-E illustrate a device from different views;
  • FIGS. 6A and 6B show side view block diagrams of the horizontally configured motion control device and related parts of the preferred embodiment in the "front-up" (FIG. 6A) and "front-down" (FIG. 6B) configurations suitable for use with the system disclosed;
  • FIGS. 7A and 7B show side and front view block diagrams, respectively, of a vertically configured motion control device in a "default" configuration suitable for use with the system disclosed; FIGS. 7C-E illustrate a device from different views;
  • FIGS. 8A, 8B, and 8C show side view diagrams of a vertically configured motion control device in "default", "top out" and "top in" configurations, respectively, according to one embodiment of the present invention;
  • FIG. 9A is a simplified block diagram of the components comprising the integrated imaging system where the imaging apparatus and the tracking apparatus are integrated into the same apparatus and the central processing unit is a part of the apparatus;
  • FIG. 9B is a simplified block diagram of the components comprising the integrated imaging system where the imaging apparatus and the tracking apparatus are integrated into the same apparatus and the central processing unit is a separate unit that communicates either wirelessly or through a direct wired connection with the integrated imaging system;
  • FIG. 10 is a simplified block diagram of the components comprising the integrated imaging system where the imaging apparatus and the tracking apparatus are two separate units and communicate through a central processing unit that communicates with each of the imaging and tracking apparatus through a wireless or a direct wired connection; and
  • FIG. 11 is a flow chart of the process by which the integrated imaging system operates.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Disclosed is an integrated imaging system that incorporates real-time tracking algorithms and feedback loops for producing precise diagnostic information, and an imaging device that is adaptable in response to the integrated imaging system and feedback loops to produce an optimal imaging session.
  • I. Computing Systems
  • The systems and methods described herein rely on a variety of computer systems, networks and/or digital devices for operation. In order to fully appreciate how the system operates an understanding of suitable computing systems is useful. The systems and methods disclosed herein are enabled as a result of application via a suitable computing system.
  • FIG. 1A is a block diagram showing a representative example logic device through which a browser can be accessed to implement the present invention. A computer system (or digital device) 100, which may be understood as a logic apparatus adapted and configured to read instructions from media 114 and/or network port 106, is connectable to a server 110, and has a fixed media 116. The computer system 100 can also be connected to the Internet or an intranet. The system includes central processing unit (CPU) 102, disk drives 104, optional input devices, illustrated as keyboard 118 and/or mouse 120 and optional monitor 108. Data communication can be achieved through, for example, communication medium 109 to a server 110 at a local or a remote location. The communication medium 109 can include any suitable means of transmitting and/or receiving data. For example, the communication medium can be a network connection, a wireless connection or an internet connection. It is envisioned that data relating to the present invention can be transmitted over such networks or connections. The computer system can be adapted to communicate with a participant and/or a device used by a participant. The computer system is adaptable to communicate with other computers over the Internet, or with computers via a server.
  • FIG. 1B depicts another exemplary computing system 100. The computing system 100 is capable of executing a variety of computing applications 138, including a computing applet, a computing program, or other instructions for operating on computing system 100 to perform at least one function, operation, and/or procedure. Computing system 100 is controllable by computer readable storage media for tangibly storing computer readable instructions, which may be in the form of software. The computer readable storage media adapted to tangibly store computer readable instructions can contain instructions for computing system 100 for storing and accessing the computer readable storage media to read the instructions stored thereon. Such software may be executed within CPU 102 to cause the computing system 100 to perform desired functions. In many known computer servers, workstations and personal computers, CPU 102 is implemented by a micro-electronic chip CPU called a microprocessor. Optionally, a co-processor, distinct from the main CPU 102, can be provided that performs additional functions or assists the CPU 102. The CPU 102 may be connected to the co-processor through an interconnect. One common type of co-processor is the floating-point co-processor, also called a numeric or math co-processor, which is designed to perform numeric calculations faster and better than the general-purpose CPU 102.
  • As will be appreciated by those skilled in the art, a computer readable medium stores computer data, which data can include computer program code that is executable by a computer, in machine readable form. By way of example, and not limitation, a computer readable medium may comprise computer readable storage media, for tangible or fixed storage of data, or communication media for transient interpretation of code-containing signals. Computer readable storage media, as used herein, refers to physical or tangible storage (as opposed to signals) and includes without limitation volatile and non-volatile, removable and non-removable storage media implemented in any method or technology for the tangible storage of information such as computer-readable instructions, data structures, program modules or other data. Computer readable storage media includes, but is not limited to, RAM, ROM, EPROM, EEPROM, flash memory or other solid state memory technology, CD-ROM, DVD, or other optical storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other physical or material medium which can be used to tangibly store the desired information or data or instructions and which can be accessed by a computer or processor.
  • In operation, the CPU 102 fetches, decodes, and executes instructions, and transfers information to and from other resources via the computer's main data-transfer path, system bus 140. Such a system bus connects the components in the computing system 100 and defines the medium for data exchange. Memory devices coupled to the system bus 140 include random access memory (RAM) 124 and read only memory (ROM) 126. Such memories include circuitry that allows information to be stored and retrieved. The ROMs 126 generally contain stored data that cannot be modified. Data stored in the RAM 124 can be read or changed by CPU 102 or other hardware devices. Access to the RAM 124 and/or ROM 126 may be controlled by memory controller 122. The memory controller 122 may provide an address translation function that translates virtual addresses into physical addresses as instructions are executed.
  • In addition, the computing system 100 can contain peripherals controller 128 responsible for communicating instructions from the CPU 102 to peripherals, such as, printer 142, keyboard 118, mouse 120, and data storage drive 143. Display 108, which is controlled by a display controller 163, is used to display visual output generated by the computing system 100. Such visual output may include text, graphics, animated graphics, and video. The display controller 134 includes electronic components required to generate a video signal that is sent to display 108. Further, the computing system 100 can contain network adaptor 136 which may be used to connect the computing system 100 to an external communications network 132.
  • II. Networks and Internet Protocol
  • As is well understood by those skilled in the art, the Internet is a worldwide network of computer networks. Today, the Internet is a public and self-sustaining network that is available to many millions of users. The Internet uses a set of communication protocols called TCP/IP (i.e., Transmission Control Protocol/Internet Protocol) to connect hosts. The Internet has a communications infrastructure known as the Internet backbone. Access to the Internet backbone is largely controlled by Internet Service Providers (ISPs) that resell access to corporations and individuals.
  • The Internet Protocol (IP) enables data to be sent from one device (e.g., a phone, a Personal Digital Assistant (PDA), a computer, etc.) to another device on a network. There are a variety of versions of IP today, including, e.g., IPv4, IPv6, etc. Other IPs are no doubt available and will continue to become available in the future, any of which can be used without departing from the scope of the invention. Each host device on the network has at least one IP address that is its own unique identifier. IP is a connectionless protocol, and the connection between end points during a communication is not continuous. When a user sends or receives data or messages, the data or messages are divided into components known as packets. Every packet is treated as an independent unit of data and routed to its final destination, but not necessarily via the same path.
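  • Because IP is connectionless, each packet travels as an independent datagram. A minimal illustration using UDP, which exposes this packet-by-packet behavior directly, is shown below; the address and port are placeholders.

import socket

# Each sendto() call below produces an independent packet (datagram); the
# packets may take different paths to the destination and are not part of a
# continuous connection between the end points.
with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
    destination = ("192.0.2.20", 9000)   # placeholder address and port
    for payload in (b"packet-1", b"packet-2", b"packet-3"):
        sock.sendto(payload, destination)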
  • The Open System Interconnection (OSI) model was established to standardize transmission between points over the Internet or other networks. The OSI model separates the communications processes between two points in a network into seven stacked layers, with each layer adding its own set of functions. Each device handles a message so that there is a downward flow through each layer at a sending end point and an upward flow through the layers at a receiving end point. The programming and/or hardware that provides the seven layers of function is typically a combination of device operating systems, application software, TCP/IP and/or other transport and network protocols, and other software and hardware.
  • Typically, the top four layers are used when a message passes from or to a user and the bottom three layers are used when a message passes through a device (e.g., an IP host device). An IP host is any device on the network that is capable of transmitting and receiving IP packets, such as a server, a router or a workstation. Messages destined for some other host are not passed up to the upper layers but are forwarded to the other host. The layers of the OSI model are listed below. Layer 7 (i.e., the application layer) is a layer at which, e.g., communication partners are identified, quality of service is identified, user authentication and privacy are considered, constraints on data syntax are identified, etc. Layer 6 (i.e., the presentation layer) is a layer that, e.g., converts incoming and outgoing data from one presentation format to another, etc. Layer 5 (i.e., the session layer) is a layer that, e.g., sets up, coordinates, and terminates conversations, exchanges and dialogs between the applications, etc. Layer-4 (i.e., the transport layer) is a layer that, e.g., manages end-to-end control and error-checking, etc. Layer-3 (i.e., the network layer) is a layer that, e.g., handles routing and forwarding, etc. Layer-2 (i.e., the data-link layer) is a layer that, e.g., provides synchronization for the physical level, does bit-stuffing and furnishes transmission protocol knowledge and management, etc. The Institute of Electrical and Electronics Engineers (IEEE) sub-divides the data-link layer into two further sub-layers, the MAC (Media Access Control) layer that controls the data transfer to and from the physical layer and the LLC (Logical Link Control) layer that interfaces with the network layer and interprets commands and performs error recovery. Layer 1 (i.e., the physical layer) is a layer that, e.g., conveys the bit stream through the network at the physical level. The IEEE sub-divides the physical layer into the PLCP (Physical Layer Convergence Procedure) sub-layer and the PMD (Physical Medium Dependent) sub-layer.
  • III. Wireless Networks
  • Wireless networks can incorporate a variety of types of mobile devices, such as, e.g., cellular and wireless telephones, PCs (personal computers), laptop computers, wearable computers, cordless phones, pagers, headsets, printers, PDAs, etc. For example, mobile devices may include digital systems to secure fast wireless transmissions of voice and/or data. Typical mobile devices include some or all of the following components: a transceiver (for example a transmitter and a receiver, including a single chip transceiver with an integrated transmitter, receiver and, if desired, other functions); an antenna; a processor; a display; one or more audio transducers (for example, a speaker or a microphone as in devices for audio communications); electromagnetic data storage (such as ROM, RAM, digital data storage, etc., such as in devices where data processing is provided); memory; flash memory; and/or a full chip set or integrated circuit; and interfaces (such as universal serial bus (USB), coder-decoder (CODEC), universal asynchronous receiver-transmitter (UART), pulse-code modulation (PCM), etc.). Other components can be provided without departing from the scope of the invention.
  • Wireless LANs (WLANs) in which a mobile user can connect to a local area network (LAN) through a wireless connection may be employed for wireless communications. Wireless communications can include communications that propagate via electromagnetic waves, such as light, infrared, radio, and microwave. There are a variety of WLAN standards that currently exist, such as Bluetooth®, IEEE 802.11, and the obsolete HomeRF.
  • By way of example, Bluetooth products may be used to provide links between mobile computers, mobile phones, portable handheld devices, personal digital assistants (PDAs), and other mobile devices and connectivity to the Internet. Bluetooth is a computing and telecommunications industry specification that details how mobile devices can easily interconnect with each other and with non-mobile devices using a short-range wireless connection. Bluetooth creates a digital wireless protocol to address end-user problems arising from the proliferation of various mobile devices that need to keep data synchronized and consistent from one device to another, thereby allowing equipment from different vendors to work seamlessly together.
  • An IEEE standard, IEEE 802.11, specifies technologies for wireless LANs and devices. Using 802.11, wireless networking may be accomplished with each single base station supporting several devices. In some examples, devices may come pre-equipped with wireless hardware or a user may install a separate piece of hardware, such as a card, that may include an antenna. By way of example, devices used in 802.11 typically include three notable elements, whether or not the device is an access point (AP), a mobile station (STA), a bridge, a Personal Computer Memory Card International Association (PCMCIA) card (or PC card) or another device: a radio transceiver; an antenna; and a MAC (Media Access Control) layer that controls packet flow between points in a network.
  • In addition, Multiple Interface Devices (MIDs) may be utilized in some wireless networks. MIDs may contain two independent network interfaces, such as a Bluetooth interface and an 802.11 interface, thus allowing the MID to participate on two separate networks as well as to interface with Bluetooth devices. The MID may have an IP address and a common IP (network) name associated with the IP address.
  • Wireless network devices may include, but are not limited to, Bluetooth devices, WiMAX (Worldwide Interoperability for Microwave Access) devices, Multiple Interface Devices (MIDs), 802.11x devices (IEEE 802.11 devices including 802.11a, 802.11b and 802.11g devices), HomeRF (Home Radio Frequency) devices, Wi-Fi (Wireless Fidelity) devices, GPRS (General Packet Radio Service) devices, 3G cellular devices, 2.5G cellular devices, GSM (Global System for Mobile Communications) devices, EDGE (Enhanced Data for GSM Evolution) devices, TDMA type (Time Division Multiple Access) devices, or CDMA type (Code Division Multiple Access) devices, including CDMA2000. Each network device may contain addresses of varying types including, but not limited to, an IP address, a Bluetooth Device Address, a Bluetooth Common Name, a Bluetooth IP address, a Bluetooth IP Common Name, an 802.11 IP Address, an 802.11 IP Common Name, or an IEEE MAC address.
  • Wireless networks can also involve methods and protocols found in Mobile IP (Internet Protocol) systems, in PCS systems, and in other mobile network systems. Mobile IP is a standard communications protocol created by the Internet Engineering Task Force (IETF). With Mobile IP, mobile device users can move across networks while maintaining their once-assigned IP address. See Request for Comments (RFC) 3344. NB: RFCs are formal documents of the Internet Engineering Task Force (IETF). Mobile IP enhances Internet Protocol (IP) and adds a mechanism to forward Internet traffic to mobile devices when they connect outside their home network. Mobile IP assigns each mobile node a home address on its home network and a care-of address (CoA) that identifies the current location of the device within a network and its subnets. When a device is moved to a different network, it receives a new care-of address. A mobility agent on the home network can associate each home address with its care-of address. The mobile node can send the home agent a binding update each time it changes its care-of address using Internet Control Message Protocol (ICMP).
  • In basic IP routing (e.g., outside mobile IP), routing mechanisms rely on the assumptions that each network node always has a constant attachment point to the Internet and that each node's IP address identifies the network link it is attached to. In this document, the terminology “node” includes a connection point, which can include a redistribution point or an end point for data transmissions, and which can recognize, process and/or forward communications to other nodes. For example, Internet routers can look at an IP address prefix or the like identifying a device's network. Then, at a network level, routers can look at a set of bits identifying a particular subnet. Then, at a subnet level, routers can look at a set of bits identifying a particular device. With typical mobile IP communications, if a user disconnects a mobile device from the Internet and tries to reconnect it at a new subnet, then the device has to be reconfigured with a new IP address, a proper netmask and a default router. Otherwise, routing protocols would not be able to deliver the packets properly.
  • FIG. 1C depicts components that can be employed in system configurations enabling the systems and technical effect of this invention, including wireless access points to which client devices communicate. In this regard, FIG. 1C shows a wireless network 150 connected to a wireless local area network (WLAN) 152. The WLAN 152 includes an access point (AP) 154 and a number of user stations 156, 156′. For example, the network 150 can include the Internet or a corporate data processing network. The access point 154 can be a wireless router, and the user stations 156, 156′ can be portable computers, personal desk-top computers, PDAs, portable voice-over-IP telephones and/or other devices. The access point 154 has a network interface 158 linked to the network 150, and a wireless transceiver 160 in communication with the user stations 156, 156′. For example, the wireless transceiver 160 can include an antenna 162 for radio or microwave frequency communication with the user stations 156, 156′. The access point 154 also has a processor 164, a program memory 166, and a random access memory 168. The user station 156 has a wireless transceiver 170 including an antenna 172 for communication with the access point station 154. In a similar fashion, the user station 156′ has a wireless transceiver 170′ and an antenna 172 for communication with the access point 154. By way of example, in some embodiments an authenticator could be employed within such an access point (AP) and/or a supplicant or peer could be employed within a mobile node or user station. A display 108 and keyboard 118 or other input devices can also be provided with the user stations.
  • IV. Media Independent Handover Services
  • In IEEE P802.21/D.01.09, September 2006, entitled Draft IEEE Standard for Local and Metropolitan Area Networks: Media Independent Handover Services, among other things, the document specifies 802 media access-independent mechanisms that optimize handovers between 802 systems and cellular systems. The IEEE 802.21 standard defines extensible media access independent mechanisms that enable the optimization of handovers between heterogeneous 802 systems and may facilitate handovers between 802 systems and cellular systems. “The scope of the IEEE 802.21 (Media Independent Handover) standard is to develop a specification that provides link layer intelligence and other related network information to upper layers to optimize handovers between heterogeneous media. This includes links specified by 3GPP, 3GPP2 and both wired and wireless media in the IEEE 802 family of standards. Note, in this document, unless otherwise noted, “media” refers to method/mode of accessing a telecommunication system (e.g. cable, radio, satellite, etc.), as opposed to sensory aspects of communication (e.g. audio, video, etc.).” See 1.1 of I.E.E.E. P802.21/D.01.09, September 2006, entitled Draft IEEE Standard for Local and Metropolitan Area Networks: Media Independent Handover Services, the entire contents of which document is incorporated herein into and as part of this patent application. Other IEEE, or other such standards on protocols can be relied on as appropriate or desirable.
  • FIG. 2 is an exemplary diagram of a server 210 in an implementation consistent with the principles of the disclosure to achieve the desired technical effect and transformation. Server 210 may include a bus 240, a processor 202, a local memory 244, one or more optional input units 246, one or more optional output units 248, a communication interface 232, and a memory interface 222. Bus 240 may include one or more conductors that permit communication among the components of server 210.
  • Processor 202 may include any type of conventional processor or microprocessor that interprets and executes instructions. Local memory 244 may include a random access memory (RAM) or another type of dynamic storage device that stores information and instructions for execution by processor 202 and/or a read only memory (ROM) or another type of static storage device that stores static information and instructions for use by processor 202.
  • Input unit 246 may include one or more conventional mechanisms that permit an operator to input information to server 210, such as a keyboard 118, a mouse 120 (shown in FIG. 1), a pen, voice recognition and/or biometric mechanisms, etc. Output unit 248 may include one or more conventional mechanisms that output information to the operator, such as a display 108, a printer 142 (shown in FIG. 1), a speaker, etc. Communication interface 232 may include any transceiver-like mechanism that enables server 210 to communicate with other devices and/or systems. For example, communication interface 232 may include mechanisms for communicating with master and clients.
  • Memory interface 222 may include a memory controller 122. Memory interface 222 may connect to one or more memory devices, such as one or more local disks 274, and control the reading and writing of chunk data to/from the local disks 274. Memory interface 222 may access chunk data using a chunk handle and a byte range within that chunk.
  • FIG. 3 is an exemplary diagram of a master system 376 suitable for use in an implementation consistent with the principles of the disclosure to achieve the desired technical effect and transformation. Master system 376 may include a bus 340, a processor 302, a main memory 344, a ROM 326, a storage device 378, one or more input devices 346, one or more output devices 348, and a communication interface 332. Bus 340 may include one or more conductors that permit communication among the components of master system 376.
  • Processor 302 may include any type of conventional processor or microprocessor that interprets and executes instructions. Main memory 344 may include a RAM or another type of dynamic storage device that stores information and instructions for execution by processor 302. ROM 326 may include a conventional ROM device or another type of static storage device that stores static information and instructions for use by processor 302. Storage device 378 may include a magnetic and/or optical recording medium and its corresponding drive. For example, storage device 378 may include one or more local disks that provide persistent storage.
  • Input devices 346 used to achieve the desired technical effect and transformation may include one or more conventional mechanisms that permit an operator to input information to the master system 376, such as a keyboard 118, a mouse 120 (shown in FIG. 1), a pen, voice recognition and/or biometric mechanisms, etc. Output devices 348 may include one or more conventional mechanisms that output information to the operator, including a display 108, a printer 142 (shown in FIG. 1), a speaker, etc. Communication interface 332 may include any transceiver-like mechanism that enables master system 376 to communicate with other devices and/or systems. For example, communication interface 332 may include mechanisms for communicating with servers and clients as shown above.
  • Master system 376 used to achieve the desired technical effect and transformation may maintain file system metadata within one or more computer readable mediums, such as main memory 344 and/or storage device 378.
  • The computer implemented system provides a storage and delivery base which allows users to exchange services and information openly on the Internet, and is used to achieve the desired technical effect and transformation. A user will be enabled to operate as both a consumer and producer of any and all digital content or information through one or more master system servers.
  • A user executes a browser to view digital content items and can connect to the front end server via a network, which is typically the Internet, but can also be any network, including but not limited to any combination of a LAN, a MAN, a WAN, a mobile, wired or wireless network, a private network, or a virtual private network. As will be understood, very large numbers (e.g., millions) of users are supported and can be in communication with the website at any time. The users may use a variety of different computing devices. Examples of user devices include, but are not limited to, personal computers, digital assistants, personal digital assistants, cellular phones, mobile phones, smart phones or laptop computers.
  • The browser can include any application that allows users to access web pages on the World Wide Web. Suitable applications include, but are not limited to, Microsoft Internet Explorer®, Netscape Navigator®, Mozilla® Firefox, Apple® Safari or any application adapted to allow access to web pages on the World Wide Web. The browser can also include a video player (e.g., Flash™ from Adobe Systems, Inc.), or any other player adapted for the video file formats used in the video hosting website. Alternatively, videos can be accessed by a standalone program separate from the browser. A user can access a video from the website by, for example, browsing a catalog of digital content, conducting searches on keywords, reviewing aggregate lists from other users or the system administrator (e.g., collections of videos forming channels), or viewing digital content associated with particular user groups (e.g., communities).
  • V. Computer Network Environment
  • Computing system 100, described above, can be deployed as part of a computer network used to achieve the desired technical effect and transformation. In general, the above description for computing environments applies to both server computers and client computers deployed in a network environment. FIG. 4 illustrates an exemplary networked computing environment 400, with a server in communication with client computers via a communications network 450. As shown in FIG. 4, server 410 may be interconnected via a communications network 450 (which may be either of, or a combination of, a fixed-wire or wireless LAN, WAN, intranet, extranet, peer-to-peer network, virtual private network, the Internet, or other communications network) with a number of client computing environments such as tablet personal computer 402, mobile telephone 404, telephone 406, personal computer 402, and personal digital assistant 408. In a network environment in which the communications network 450 is the Internet, for example, server 410 can be a dedicated computing environment server operable to process and communicate data to and from client computing environments via any of a number of known protocols, such as hypertext transfer protocol (HTTP), file transfer protocol (FTP), simple object access protocol (SOAP), or wireless application protocol (WAP), as in the example below. Other wireless protocols can be used without departing from the scope of the disclosure, including, for example, Wireless Markup Language (WML), DoCoMo i-mode (used, for example, in Japan) and XHTML Basic. Additionally, networked computing environment 400 can utilize various data security protocols such as secured socket layer (SSL) or pretty good privacy (PGP). Each client computing environment can be equipped with operating system 438 operable to support one or more computing applications, such as a web browser (not shown), or other graphical user interface (not shown), or a mobile desktop environment (not shown) to gain access to server computing environment 400.
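  • A minimal example of a client computing environment requesting data from such a server over HTTP, using only the Python standard library, is given below; the host name and resource path are placeholders, and a real deployment would use the address and protocol exposed by the server computing environment.

from http.client import HTTPSConnection

# Placeholder host and path; shown only to illustrate a client request to the
# server computing environment over one of the known protocols (HTTP).
connection = HTTPSConnection("server.example.com")
connection.request("GET", "/imaging-session/123/status")
response = connection.getresponse()
print(response.status, response.read().decode("utf-8"))
connection.close()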
  • In operation, a user (not shown) may interact with a computing application running on a client computing environment to obtain desired data and/or computing applications. The data and/or computing applications may be stored on server computing environment 400 and communicated to cooperating users through client computing environments over exemplary communications network 450. The computing applications, described in more detail below, are used to achieve the desired technical effect and transformation set forth. A participating user may request access to specific data and applications housed in whole or in part on server computing environment 400. These data may be communicated between client computing environments and server computing environments for processing and storage. Server computing environment 400 may host computing applications, processes and applets for the generation, authentication, encryption, and communication of data and applications, and may cooperate with other server computing environments (not shown), third party service providers (not shown), network attached storage (NAS) and storage area networks (SAN) to realize application/data transactions.
  • The communication network is adaptable and configurable to be in communication with one or more input devices 446 and/or one or more output devices 448 as discussed above. In general, input devices are those devices or components that provide information to the system, and output devices are those devices or components that provide information from the system. As will be appreciated by those skilled in the art, a single device can, at times, be capable of operating as both an input device and an output device. For purposes of appreciating the context of the disclosure, suitable input devices are, for example, those devices that input information into the system, such as imaging devices and/or patient motion control devices as discussed herein. Suitable output devices are, for example, those devices that receive information and/or data from one or more input devices in a computing environment (such as shown in FIG. 4), process the received information and/or data, and generate a return real-time or near real-time signal to the input devices to achieve a technical effect of controlling the behavior or performance of the input devices to achieve a desired result.
  • VI. Media Independent Information Service
  • The Media Independent Information Service (MIIS) provides a framework and corresponding mechanisms by which an MIHF entity may discover and obtain network information existing within a geographical area to facilitate handovers. Additionally or alternatively, neighboring network information discovered and obtained by this framework and mechanisms can also be used in conjunction with user and network operator policies for optimum initial network selection and access (attachment), or network re-selection in idle mode.
  • MIIS primarily provides a set of information elements (IEs), the information structure and its representation, and a query/response type of mechanism for information transfer. The information can be present in some information server from which, e.g., an MIHF in the Mobile Node (MN) can access it.
  • Depending on the type of mobility, support for different types of information elements may be necessary for performing handovers. MIIS provides the capability for obtaining information about lower layers such as neighbor maps and other link layer parameters, as well as information about available higher layer services such as Internet connectivity.
  • MIIS provides a generic mechanism to allow a service provider and a mobile user to exchange information on different handover candidate access networks. The handover candidate information can include different access technologies such as IEEE 802 networks, 3GPP networks and 3GPP2 networks. The MIIS also allows this collective information to be accessed from any single network. For example, by using an IEEE 802.11 access network, it can be possible to get information not only about all other IEEE 802 based networks in a particular region but also about 3GPP and 3GPP2 networks. Similarly, using, e.g., a 3GPP2 interface, it can be possible to get access to information about all IEEE 802 and 3GPP networks in a given region. This capability allows the MN to use its currently active access network and inquire about other available access networks in a geographical region. Thus, a MN is freed from the burden of powering up each of its individual radios and establishing network connectivity for the purpose of retrieving heterogeneous network information. MIIS enables this functionality across all available access networks by providing a uniform way to retrieve heterogeneous network information in any geographical area.
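  • As a rough illustration of the query/response pattern described above, the following Python sketch models a hypothetical information server holding a few information elements (IEs) and a mobile node querying it over its currently active network; the IE fields, class names, and query interface are assumptions made for illustration and are not the IEEE 802.21 wire format.

```python
from dataclasses import dataclass

@dataclass
class InformationElement:
    """Simplified information element (IE); the fields here are illustrative only."""
    network_type: str       # e.g. "IEEE 802.11", "3GPP", "3GPP2"
    operator: str
    area: str               # coarse geographic identifier
    supports_internet: bool

class InformationServer:
    """Hypothetical MIIS-style store that an MIHF in the Mobile Node could query."""
    def __init__(self, elements):
        self._elements = list(elements)

    def query(self, area, network_type=None):
        """Return the IEs for a geographic area, optionally filtered by technology."""
        return [ie for ie in self._elements
                if ie.area == area
                and (network_type is None or ie.network_type == network_type)]

if __name__ == "__main__":
    server = InformationServer([
        InformationElement("IEEE 802.11", "OperatorA", "area-1", True),
        InformationElement("3GPP", "OperatorB", "area-1", True),
        InformationElement("3GPP2", "OperatorC", "area-2", False),
    ])
    # A mobile node can ask about every candidate network in its area without
    # powering up each of its individual radios.
    for ie in server.query("area-1"):
        print(ie)
```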
  • VII. Devices
  • The motion control device can be represented by a large box that contains various subsystems. Suitable motion control devices can be either passive or active. As will be appreciated by those of skill in the art, the motion control device can also be a horizontally configured motion control device, a vertically configured motion control device, or a butterfly configured device. Motion control devices suitable for use with the systems include any of the motion control devices described herein as well as any other device suitable for controlling the motion of a target patient anatomy.
  • The diagnostic imaging hardware contains a field of imaging, which is a physical space in which objects imaged by the hardware must be located during the imaging process to produce images. The field of imaging can contain a posture assistance device such as a table, bed, chair, or other device intended to bear all or some of the subject's weight and to provide physical support to a specific type of posture. Alternatively, the field of imaging can contain no such devices if the subject can be situated directly onto the floor and/or the motion control device and does not require the use of an additional device to bear weight and/or support specific postures, according to one embodiment of the present invention. The motion control device, or sub-systems therein, occupies part or all of the field of imaging and is physically connected and supported either by resting on the floor itself, or by being physically and immovably attached to the imaging equipment or a posture-assistance device within the field of imaging. All parts of the horizontally configured motion control device that are located within the field of imaging are constructed of materials that are either radiolucent in the case of use with videofluoroscopic and moving CT imaging systems, or alternatively compatible with MRI in the case of a moving MRI imaging system, and therefore these parts of the motion control device do not obscure or produce artifacts on the diagnostic images. The motion control device may also have the capacity to have pillows, cushions, and/or restraining devices attached to it at points where these pillows, cushions, and/or restraining devices aid in improving the comfort of the subject and/or in producing the correct posture and/or motion required for the test. The motion control device as a unit is attachable and detachable by the operator within the field of imaging, according to one embodiment of the present invention.
  • A base is provided for the purpose of physically and immovably fixing and stabilizing the motion control device within the field of imaging to either the floor, the imaging equipment, and/or a posture-assistance device while the images and other measurements are being collected, and also for the purpose of providing an immoveable fixed structure on which to attach other sub-systems of the motion control device. The base connects via attachment mechanisms at the points of contact between the base and either the floor, the imaging equipment, and/or a posture-assistance device.
  • As the motion control device physically attaches to and therefore may bear its weight onto the base, and as the motion control device can be configured to also bear the entire weight of the subject, and with the subject moving during the testing process and therefore producing both static and dynamic forces, the base needs the structural integrity and gripping force required to remain static, stable, and fixed in the presence of such loads and forces. The structural integrity is afforded by the use of rigid and strong materials such as plastics when radiolucent materials are desirable and in situations where compatibility with dynamic MRI systems is required, according to one embodiment of the present invention. Said gripping force is afforded by the use of strong fixation mechanisms at the points of contact, and may be accomplished by either: (1) the weight of the motion control device itself, and the friction caused thereby and enhanced by the use of high-friction materials such as rubber at the points of contact, to fix and stabilize the motion control device; (2) screws, clamps, bolts, fasteners, straps, ties, cuffs, nuts, pins, or any other rigid or flexible fixation mechanism that provides immoveable fixation at the points of contact; and/or (3) some combination thereof.
  • The base can be a highly configurable sub-system, adapted and configured to have several configurations and versions to accommodate the different types of postures; different types, sizes, and configurations of posture-assistance devices; different sizes and geometries of imaging equipment and imaging fields; different materials at the points of contact between the base and either the floor, the imaging equipment, and/or a posture-assistance device; and different geometries and sizes of these points of contact.
  • As applied to a butterfly motion control device, the diagnostic imaging hardware contains a field of imaging, which is a physical space in which objects imaged by the hardware must be located during the imaging process to produce images. The field of imaging can contain a posture assistance device such as a table, bed, chair, or other device intended to bear all or some of the subject's weight and to provide physical support to a specific type of posture. Alternatively, the field of imaging can contain no such devices if the subject can be situated directly onto the floor and/or the motion control device and does not require the use of an additional device to bear weight and/or support specific postures. The "butterfly" motion control device, or sub-systems therein, occupies part or all of the field of imaging and is physically connected and supported either by resting on the floor itself, or by being physically and immovably attached to the imaging equipment or to one of the above-mentioned posture-assistance devices within the field of imaging. All parts of the "butterfly" motion control device that are located within the field of imaging are constructed of materials that are either radiolucent in the case of use with videofluoroscopic and moving CT imaging systems, or alternatively compatible with MRI in the case of a moving MRI imaging system, and therefore these parts of the "butterfly" motion control device do not obscure or produce artifacts on the diagnostic images. The "butterfly" motion control device also has the capacity to have pillows, cushions, and/or restraining devices attached to it at points where these pillows, cushions, and/or restraining devices aid in improving the comfort of the subject and/or in producing the correct posture and/or motion required for the test. The "butterfly" motion control device is attachable and detachable by the operator within the field of imaging. As will be appreciated by those skilled in the art, other devices adapted and configured to control movement of a target patient anatomy can be used without departing from the scope of the disclosure.
  • Turning now to FIGS. 5A and 5B, a configuration of a horizontally configured motion control device 25 is illustrated. The base 31 serves as the base for the horizontally configured motion control device 25. The device 25 can be adapted and configured such that all other sub-systems attach or engage the base in some way. The base 31 can be optionally adapted and configured to detachably attach to either the floor, the imaging equipment, and/or a posture-assistance device 53 via the detachable anchoring device 55. The operator can then remove the motion control device 25 from the field of imaging. Moving up from this base 31, the next two physical sub-systems are the static platform 33 and the motion platform 35. The static platform 33 and the motion platform 35 are attached to each other by a suitable mechanism such as a hinging mechanism 73. When the device is in the "default" position, shown in FIGS. 5A and 5B, the device is locked such that the flat surfaces of both the motion platform 35 and static platform 33 reside within the same plane, but that still allows for the free rotation of the motion platform 35 within a plane (e.g., plane a-c) of its subject-facing surface about a fixed axis (b) of rotation. Other configurations or embodiments are possible that allow the horizontal motion platform to move in a plane that is at an angle to the horizontal static platform. These "non-default" configurations are described in detail in subsequent drawings.
  • The static platform 33 and motion platform 35 attach to the base 31 differently. See FIGS. 5A and 5B for a graphical description of how these sub-systems can be adapted to attach to each other. In this device, the base 31 attaches to either the floor, imaging equipment, and/or posture assistance devices 53 via the detachable anchoring device 55 and also connects to the static platform 33, which is held firm by a rigid immobilized static platform/member attachment mechanism 49. The base 31 and the motion platform 35 are attached by way of the motion platform attachment mechanism 51 that, along with the hinging mechanism 73, allows for free rotation of the motion platform 35 within the plane of its flat subject-facing surface, while simultaneously allowing for the adjustment of the angle that this plane makes with the subject-facing surface of the static platform 33, such that these two planes intersect along the line of the hinge, which occupies the linear space defined by the edges of these two platforms that face and are adjacent to each other. In the "default" configuration represented in FIGS. 5A and 5B, this angle is set to 180 degrees. In other "non-default" configurations, this angle can be adjusted to angles other than 180 degrees. The radio-opaque protractor 74 is shown in FIG. 5A. FIGS. 5C-E illustrate a configuration of a suitable device.
  • FIGS. 6A and 6B illustrate the functionality of the motion platform attachment mechanism 51 and the hinging mechanism 73. FIG. 6A depicts the side view block diagram of attachment mechanisms and parts of the horizontally configured motion control device 25 in a "front up" configuration, where the hinging mechanism 73 connects the static platform 33 with the motion platform 35 along the edges of these platforms that face each other in such a way as to allow these two platforms to rotate about an axis c of the hinge. In this configuration, the connection between the base 31 and the static platform 33 is held firm by the rigid immobilized static platform/member attachment mechanism 49. However, the motion platform attachment mechanism 51 between the base 31 and the motion platform 35 functions differently. The motion platform attachment mechanism 51 is adapted and configured to lengthen within a plane (e.g., plane a-c) along an axis and to change the angle of attachment to both the base 31 and the motion platform 35 such that the end of the motion platform 35 opposing the end adjacent to the static platform 33 can move up or down (along the b axis) so that the plane of the motion platform 35 is at an angle to the plane of the static platform 33 and these two planes intersect along the line created by their common edge, which is a space occupied by the hinging mechanism 73. A radiopaque protractor 74 (shown in FIG. 7B) enables an assessment of movement of the spine during the imaging process.
  • FIG. 6B represents a side view block diagram of attachment mechanisms and parts of a horizontally configured motion control device 25 in a "front down" configuration. In this configuration, the hinging mechanism 73 functions in the same way, allowing the static platform 33 and motion platform 35 to rotate about the axis c of the hinge such that the motion platform changes position from lying within a plane (e.g., the c-a plane) to rotating about the c axis. The rigid immobilized static platform/member attachment mechanism 49 in this configuration can be lengthened or shortened, but is fixed at a right angle to the platform base 31 and the static platform 33. The motion platform attachment mechanism 51 can be lengthened or shortened such that the angle of attachment to the motion platform 35 and the platform base 31 is no longer a right angle, and is instead any other angle dictated by the geometric configuration of the device indicated by the prescriber.
  • As reflected in FIGS. 7A and 7B, the frame 31 connects to the base 53 of vertically configured motion control device 27 at a rigid base to frame connection mechanism 69. The frame 31 is the frame to which all other sub-systems attach in some way. Moving out from this frame 31, the next two physical sub-systems are the static member 33 and the motion member 35. The frame 31 attaches to the static member 33 by way of a rigid immobilized static platform/member attachment mechanism 49 like the one described for FIGS. 5A and 5B, with the added capability of providing cantilevered support for the weight of the static member 33 and any of the attached subject body parts. The frame 31 attaches to the motion member 35 by way of a motion member attachment mechanism 85 that allows free rotation around a fixed axis within the same plane as that of the subject-facing surface of the static member, and provides cantilevered support for the weight of the motion member 35 and the subject body parts that could be connected to it. The static member 33 and motion member 35 are attached to each other by the vertically configured motion control device hinging mechanism 73 that, when in the "default" position represented in FIGS. 7A and 7B, is locked such that the flat surfaces of both the static member 33 and the motion member 35 reside within the same plane, but still allows for the free rotation of the motion member 35 around a fixed axis within that plane. A radio-opaque protractor 74, adaptable for use with any device disclosed or contemplated to be part of the system, is shown in FIG. 7B. FIGS. 7C-E illustrate a configuration of the device.
  • FIGS. 8A, 8B, and 8C represent the side view block diagram of the vertically configured motion control device 27 in the "default", "top out" and "top in" configurations, respectively. The "default" configuration given in FIG. 8A is as described in the previous paragraph. In FIG. 8B, the "top out" configuration, the attachment mechanism 85 connects the static member 33 to the motion member 35 and can lengthen or shorten along the b axis and/or change the angle of attachment to frame 31 and motion member 35 such that the top of the motion member 35 can move away from the frame 31 so that the plane of the motion member 35 is at an angle to the plane of the static member 33 and these two planes intersect along the line created by their common edge, the space of which is occupied by the motion control device hinging mechanism 73. Furthermore, the motion member attachment mechanism 85 allows for the free rotation of the motion member 35 around a fixed axis within that plane while providing cantilevered support for the weight of the motion member and any of the subject's body parts that are connected to it.
  • In FIG. 8C, the "top in" configuration, the motion member attachment mechanism 85 illustrates its ability to lengthen and shorten along the b axis and change the angle of attachment to the connecting frame 31 and motion member 35. Additionally, in this configuration, the static platform/member attachment mechanism 49 can lengthen along the b axis, pushing the static member 33 away from the frame 31 while keeping the static member 33 in a non-changing orientation with respect to the frame 31.
  • VIII. Software Programs Implementable in the Computing and Network Environments to Achieve a Desired Technical Effect or Transformation
  • FIG. 9 is a simplified block diagram of the integrated imaging system. There is an imaging component and a tracking component, both of which communicate with each other through the central imaging control unit through a series of feedback loops. The imaging component is any suitable imaging device, such as those disclosed above. The tracking component can be any suitable device, such as those disclosed above, that facilitates tracking and movement of a target patient anatomy.
  • It is contemplated that the imaging and tracking components are part of the same machine, but it is possible that the imaging and tracking components are separate units connected to each other through either a wireless connection or a direct wire connection, or through an external imaging control unit.
  • The components of the tracking system and the imaging apparatus communicate information through the imaging control unit, and the information is used to direct adjustments to the imaging device in real time. The purpose of this invention is to provide the best diagnostic imaging information to the physician and patient and to reduce radiation exposure to the patient and/or physician during the testing session.
  • The first component of the integrated imaging system is the imaging component. While the following description refers to fluoroscopic imaging in general, it is contemplated that the advantages also apply to many types of diagnostic imaging systems. Further information regarding fluoroscopic devices is provided, for example, in U.S. Pat. No. 6,424,731 for Method of Positioning a Radiographic Device; and U.S. Pat. No. 5,873,826 entitled Fluoroscopy method and X-ray CT apparatus.
  • Referring to FIG. 9, the basic components of the imaging system are shown. The imaging apparatus includes a central image control unit that controls, among other things, the intensity of the imaging source, the duration of the scan, the optimal frequency of images (frames per second), the position of the imaging source, and the optimal collimation. There is also an image storage unit that acquires the information from the imaging session and stores the image or image sequence for review. There is also an image viewing station for real time viewing of the image or image sequence. The image viewing station also acts as the tracking apparatus monitor where an operator would input information with respect to the area or areas of interest within the field of view of the image.
  • The components of the diagnostic tracking system are a user input component and a computer hardware and software component. While it is contemplated that the tracking display unit is also the image viewing station, those skilled in the art will appreciate that it can also be a separate viewing station, independent of the image viewing station, which communicates with the central image control unit either through a secure wireless connection or through a direct wire connection.
  • The user input component provides the user with the capability of defining one or more areas of interest within the image based on the diagnostic test that is prescribed. In real time, the computer hardware and software component calculates the location of the area of interest and records and communicates this information for several purposes.
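  • As a minimal sketch of how the computer hardware and software component might calculate the location of a user-defined area of interest on each frame, the function below performs a brute-force template search in a small neighborhood around the last known position; the function name, parameters, and search strategy are assumptions for illustration rather than the disclosed implementation.

```python
import numpy as np

def track_roi(frame, template, last_xy, search=20):
    """Return the (x, y) top-left corner of the template's best match near last_xy.

    frame    : 2-D grayscale image as a numpy array
    template : 2-D patch cut from the spot image where the operator defined the ROI
    last_xy  : (x, y) top-left corner of the ROI in the previous frame
    search   : half-width of the search window in pixels
    """
    th, tw = template.shape
    x0, y0 = last_xy
    best_score, best_xy = np.inf, last_xy
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            x, y = x0 + dx, y0 + dy
            if x < 0 or y < 0 or y + th > frame.shape[0] or x + tw > frame.shape[1]:
                continue
            patch = frame[y:y + th, x:x + tw].astype(float)
            score = np.sum((patch - template.astype(float)) ** 2)
            if score < best_score:
                best_score, best_xy = score, (x, y)
    return best_xy
```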
  • One purpose of tracking the location of the area of interest is to acquire an imaging session that provides optimal diagnostic information and minimal exposure to radiation. This is accomplished through feedback loops to the central image control unit. The information derived from the tracking apparatus directs changes with respect to the imaging session.
  • For example, one such feedback loop connects real-time, or near real-time, tracking information with the positioning of the x-ray source so as to keep the area of interest within the field of view throughout the imaging session. This is accomplished through either a wireless connection or a direct wire connection that transmits information with respect to the position of the area of interest relative to the central area of the field of view of the image. The information is received in real time by the central image control unit, processed, and the output is a continuous adjustment to the location of the imaging source with respect to the patient so as to keep the area of interest centered within the field of view throughout the image testing session.
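  • A minimal sketch of this centering loop, assuming the tracked ROI centre and the field-of-view centre are available in pixels and that the source positioner accepts incremental moves in millimetres (the scale factor, gain, and interface are illustrative assumptions):

```python
def centering_command(roi_center_px, fov_center_px, mm_per_px, gain=0.5):
    """Compute an incremental (dx, dy) move, in millimetres, for the imaging source.

    A proportional gain below 1.0 damps the correction so that the source does not
    overshoot while the anatomy is still moving.
    """
    dx_px = roi_center_px[0] - fov_center_px[0]
    dy_px = roi_center_px[1] - fov_center_px[1]
    return (gain * dx_px * mm_per_px, gain * dy_px * mm_per_px)

# Example: the ROI has drifted 40 px right and 10 px down of the field centre (256, 256)
move = centering_command((296, 266), (256, 256), mm_per_px=0.4)
print(move)  # (8.0, 2.0): shift the source/detector 8 mm and 2 mm to re-centre the ROI
```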
  • Another feedback loop would use the tracking information to automatically and continuously, or semi-automatically if desired by a user, adjust the collimators so as to direct the radiation to the area of interest and at the same time decrease radiation exposure to the patient. The tracking apparatus transmits information with respect to the location of the area of interest within the field of view of the image to the image control unit through a wireless or a direct wired connection. The image control unit processes the information and automatically and/or continuously adjusts the image settings with respect to the image apparatus collimators. Those skilled in the art will appreciate that this system can be adapted and configured for use with different types of collimators that are already available, and with those that are not yet developed.
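  • One way to picture the collimator adjustment, assuming blade positions can be expressed in image coordinates, is to fit the aperture to the tracked ROI bounding box plus a safety margin; the coordinate convention and margin below are illustrative assumptions, not a specification of any particular collimator.

```python
def collimator_aperture(roi_bbox, frame_shape, margin_px=15):
    """Return (left, top, right, bottom) blade positions, clamped to the frame.

    roi_bbox    : (x, y, width, height) of the tracked area of interest, in pixels
    frame_shape : (rows, cols) of the image
    margin_px   : extra border so ordinary frame-to-frame motion does not clip the anatomy
    """
    x, y, w, h = roi_bbox
    rows, cols = frame_shape
    left = max(0, x - margin_px)
    top = max(0, y - margin_px)
    right = min(cols, x + w + margin_px)
    bottom = min(rows, y + h + margin_px)
    return left, top, right, bottom
```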
  • Another feedback loop would use the tracking information to adjust the intensity of the radiation so as to produce an image that is sufficiently defined for the purposes of the diagnostic testing requirements. With respect to this feedback loop, the tracking software transmits information either wirelessly or through a direct wired connection to the image control unit. The image control unit then processes the information and automatically and/or continuously adjusts the image settings with respect to the intensity of the radiation source. One example of the image source settings that might be adjusted based on the imported information could be the fluoroscopic kV and mA. These types of adjustments might change the contrast levels in the image or reduce noise associated with radiation imaging.
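  • A hedged sketch of such a technique adjustment: compare the mean brightness inside the ROI with a target and nudge mA first, moving kV only at the mA limits; the step sizes, ranges, and target below are placeholders for illustration, not clinical values.

```python
def adjust_technique(roi_pixels, kv, ma, target=0.5, tol=0.05,
                     ma_step=0.2, kv_step=1.0,
                     ma_range=(0.5, 5.0), kv_range=(50.0, 120.0)):
    """Return updated (kv, ma) given ROI pixel values normalised to [0, 1].

    roi_pixels is assumed to be a numpy array (or any object exposing .mean()).
    """
    mean = float(roi_pixels.mean())
    if mean < target - tol:            # image too dark: raise output
        ma = min(ma + ma_step, ma_range[1])
        if ma == ma_range[1]:
            kv = min(kv + kv_step, kv_range[1])
    elif mean > target + tol:          # image too bright: lower output
        ma = max(ma - ma_step, ma_range[0])
        if ma == ma_range[0]:
            kv = max(kv - kv_step, kv_range[0])
    return kv, ma
```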
  • Another feedback loop would transmit the tracking information to the image control unit so as to provide information with respect to the optimal distance of the image intensifier from the patient. The tracking information would be transmitted either wirelessly or through a direct wire connection. The imported information would be processed by the image control unit and would direct continuous adjustments to the image intensifier in real time for the purpose of creating the optimal imaging session for the prescribed diagnostic testing session.
  • Another feedback loop would transmit the tracking information to the image control unit so as to provide information with respect to the optimal fluoroscopic exposure rate at the image intensifier. The tracking information would be transmitted either wirelessly or through a direct wire connection. The imported information would be processed by the image control unit and used to adjust the fluoroscopic exposure rate at the image intensifier continuously and in real time. The rate of exposure with respect to frames per second might depend on the rate at which the area or areas of interest are moving during the testing session.
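  • A simple, assumption-laden sketch of tying the exposure rate to how fast the tracked area of interest is moving; the mapping and the 4-15 frames-per-second limits are illustrative only.

```python
def choose_frame_rate(displacement_px, dt_s, slow_fps=4.0, fast_fps=15.0, px_per_s_fast=80.0):
    """Map the ROI's speed (pixels per second) onto a frame rate between two limits."""
    speed = abs(displacement_px) / dt_s if dt_s > 0 else 0.0
    fraction = min(speed / px_per_s_fast, 1.0)
    return slow_fps + fraction * (fast_fps - slow_fps)

print(choose_frame_rate(2, 0.25))   # nearly static anatomy -> close to 4 fps
print(choose_frame_rate(25, 0.25))  # fast motion -> pushed toward 15 fps
```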
  • For all of these situations, more accurate information with respect to image source location, intensity and collimation will reduce radiation exposure to both the patient and to the operator. It is contemplated that there are other aspects of the imaging session that can be continuously adjusted in real time during the imaging session so as to improve the quality of the images, create the optimal imaging environment, produce the best diagnostic imaging information and minimize the radiation exposure to the patient.
  • As will be appreciated by those skilled in the art, for all of these feedback loops, automatically and continuously communicating instructions based on the feedback loops and the images can include, for example, transmitting a new instruction which changes one or more settings, transmitting an instruction which maintains the most recent one or more settings, transmitting an instruction that repeats an earlier one or more instructions, or providing no change to the current instructions. Where no-change instructions are provided, the system and/or one or more feedback loops can be adapted and configured to automatically repeat the most recent instructions after a lag of a set amount of time from providing the image data for analysis. Thus, for example, if a feedback loop is processed and a period of n seconds passes, the prior setting will remain in place.
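  • The "no change" behaviour described above can be sketched as a small wrapper that re-issues the most recent instruction whenever the feedback loop has produced nothing new for n seconds; the class name and the callable used to transmit instructions are hypothetical stand-ins.

```python
import time

class InstructionRepeater:
    """Re-sends the last instruction when no new one arrives within lag_s seconds."""

    def __init__(self, send, lag_s=2.0):
        self._send = send          # callable that transmits an instruction to a device
        self._lag_s = lag_s
        self._last = None
        self._last_time = None

    def issue(self, instruction=None):
        """Call with a new instruction, or with None when the loop produced no change."""
        now = time.monotonic()
        if instruction is not None:
            self._last, self._last_time = instruction, now
            self._send(instruction)
        elif self._last is not None and now - self._last_time >= self._lag_s:
            self._last_time = now  # lag elapsed: repeat the prior setting
            self._send(self._last)
```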
  • IX. EXAMPLES
  • A process by which this disclosure can be implemented is shown in FIG. 11. The process begins with the patient being positioned in the starting position for the prescribed imaging test. A spot image is acquired. The operator or physician determines whether the spot image captures the field of view necessary for the prescribed imaging test. If the patient is not in the correct position, the operator or physician will reposition the patient and repeat the process. Once the area of interest for the prescribed imaging test is within the field of view of the image, the operator or physician will manually define the area of interest. For example, this could be defining one or more vertebrae of the spine for a flexion-extension testing session. Once the areas of interest are defined, the imaging session begins. During the imaging session, the tracking apparatus tracks the area or areas of interest within the image in real time. The information with respect to image quality and position of the area or areas of interest is communicated from the tracking apparatus to the central imaging control unit. The central imaging control unit processes the information in real time, and automatically and continuously adjusts the imaging apparatus so as to maintain optimal imaging settings for the prescribed testing session. At the completion of the imaging testing session, the results are displayed both as a video of the imaging session and as quantitative diagnostic information. Finally, the information is stored and can be recovered for future reference.
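  • Pulling the steps of FIG. 11 together, a high-level orchestration of the session could look like the skeleton below; every interface (imaging, tracking, control, operator, store) is a hypothetical stand-in for the hardware and software described in this disclosure, not its actual API.

```python
def run_imaging_session(imaging, tracking, control, operator, store):
    """Skeleton of the FIG. 11 workflow; each argument is an assumed interface object."""
    # Position the patient and repeat spot images until the field of view is correct.
    while True:
        spot = imaging.acquire_spot_image()
        if operator.field_of_view_ok(spot):
            break
        operator.reposition_patient()

    roi = operator.define_area_of_interest(spot)    # e.g. one or more vertebrae
    frames = []
    for frame in imaging.stream():                  # the imaging session begins
        location = tracking.locate(frame, roi)      # real-time tracking of the ROI
        control.apply_feedback(frame, location)     # position, collimation, technique
        frames.append(frame)
        if control.session_complete(location):
            break

    results = tracking.quantitative_results()
    operator.display(frames, results)               # video plus quantitative output
    store.save(frames, results)                     # archived for future reference
```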
  • It is contemplated that this apparatus can incorporate the use of many different external devices that might be used in conjunction with a diagnostic imaging session. For example, if an apparatus that controls the motion of the patient during the testing session is used, the information from that device will also be communicated to the central imaging control unit and implement the same feedback loops as described above. The motion apparatus can be used in conjunction with the imaging system as a means of standardizing or supporting patient motion during the imaging session. The motion apparatus comprises a patient motion control unit that interacts with and moves a patient undergoing an imaging session in a controlled, predetermined motion path. It is contemplated that the motion apparatus communicates information to the imaging device with respect to the position of the table and/or the patient on the table during imaging. It is also contemplated that the motion apparatus can adjust based on the position of the imaging device. The motion table may adjust, for example, to maintain the same image field of view throughout the motion testing session. It is possible for each component of the motion apparatus to communicate or receive information through a central processing unit.
  • While the previous description describes a motion control device used in conjunction with the integrated imaging system, those skilled in the art will appreciate that this is only one example; there are many types of external apparatuses that can be used during an imaging session, and it is contemplated that each such device could be incorporated into this invention.
  • Functional tests of a target subject anatomy, such as the spine, can be performed with an imaging procedure that minimizes the radiation dose by using real-time or near real-time feedback. A patient would be positioned on the articulating patient handling device and prepared for an imaging study. The imaging study would involve the capturing of images of the lumbar spine with a standard hospital fluoroscope, which is capable of capturing moving x-ray type video images of the lumbar spine. The fluoroscope would then begin recording images as the patient handling device effects a controlled movement of the subject during the imaging session. During a traditional imaging session, the field of imaging for standard fluoroscopes is typically a 9 or a 12 inch circle. A lumbar vertebra of interest occupies only a small subset of this imaging field, on the order of 15-30% of the imaging field, yet the entire imaging field is irradiated and imaged. By providing a real-time or near real-time tracking system, the exact position and trajectory of the target anatomy can be determined and then fed back into a collimator. The collimator would then be placed on the X-ray generator, and would be able to adjust the shape and trajectory of the collimation of the X-ray beam according to data received from the tracking system to prevent regions other than those of interest (i.e., the lumbar vertebral bodies) from being irradiated and imaged. The data capture, analysis and feedback loop would then reduce the overall radiation dose to the patient in proportion to the fraction of the beam that is collimated.
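  • The dose arithmetic in this example is simple enough to make explicit; in the sketch below the 15-30% occupancy figure comes from the passage above and everything else is generic geometry rather than measured data.

```python
def dose_reduction_fraction(roi_area_fraction):
    """Fraction of the previously irradiated field that collimation now spares.

    If the vertebrae of interest occupy 15-30% of the imaging field, collimating the
    beam to them alone spares the remaining 70-85% of the field.
    """
    return 1.0 - roi_area_fraction

print(dose_reduction_fraction(0.15))  # 0.85 -> 85% of the field is no longer irradiated
print(dose_reduction_fraction(0.30))  # 0.70 -> 70% of the field is no longer irradiated
```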
  • Yet another example provides an improved functional test of a target anatomy that allows smaller image intensifiers to be used. Currently, as discussed above, standard hospital fluoroscopes have a 9 or a 12 inch image intensifier, and therefore a corresponding 9 or 12 inch field of view. When imaging the lumbar spine in motion, it is often the case that, when a subject is moving their torso to effect a lumbar bend, superior lumbar vertebrae such as L1 or L2 move out of the 9 or 12 inch field of view as the patient approaches a maximum lumbar bending angle. Real-time or near real-time adjustments to the anatomy contained within the field of view would make it possible to prevent or minimize the target anatomy from exiting the field of view during imaging through a movement sequence.
  • Two different mechanisms can be provided to prevent or eliminate movement of the target anatomy out of the field of view. By providing a real-time tracking capability that allows a computer implemented system to precisely monitor the position of the vertebral bodies during imaging, the present invention provides for a feedback loop from the real-time tracking system to either the positioning system for the fluoroscope or the positioning system on which the articulating patient handling device rests. By making adjustments to either the position of the image intensifier and/or to the articulating patient handling device assembly, it would be possible to maintain all anatomy of interest within the field of view at all points during the movement and prevent any anatomy from exiting the field of view. The effect of this is that less expensive fluoroscopy systems with smaller image intensifiers would be suitable for conducting imaging of, for example, the lumbar spine during controlled lumbar bending.
  • A problem associated with functional tests of the spine is variability. Clinical studies have shown that there can be a wide degree of variability in measurements of in vivo lumbar vertebral motion. One approach to reducing the variability is to standardize the bending angle to which subjects bend during imaging. However, even when standardizing bending angle, there can still be variability with respect to the total amount of lumbar bending that is occurring during a particular imaging session. The total amount of lumbar bending that is occurring can be measured by measuring the angulation between L1, the uppermost (or most superior) lumbar vertebra, and S1, the lowermost (or most inferior) lumbar vertebra. This overall L1-S1 angulation is measured by comparing two X-ray or fluoroscopic images of a subject taken from two different positions. For example, a subject might perform a gross lumbar bend and achieve 60 degrees of gross lumbar bending, but the L1-S1 angulation might only be 45 degrees. The difference is attributable to mechanical slack in other anatomy involved in the motion, such as the hips, thorax, arms, shoulders, etc. By standardizing the overall L1-S1 angulation of a subject instead of the angle of the motion control device, and by taking measurements of the motion of individual vertebral bodies at standardized L1-S1 angles as opposed to standardized gross bending angles, it is possible to further reduce intervertebral angulation measurement variability. Therefore, an advantage afforded by the present invention would be to have subjects bend to a standardized L1-S1 angle, letting the overall gross bending angle vary, as opposed to bending subjects to a standardized gross bending angle and letting the L1-S1 angle vary. The real-time or near real-time feedback loop from the tracking system to the articulation control system of the patient handling device enables the device to discontinue imaging when the target patient bending angle has been achieved. Such a feedback loop would allow a diagnostician to select a range of motion and then for the subject to be guided through a standard bend that would continue until a selected or specified L1-S1 angle was achieved. The real-time tracking system would provide the means of signaling to the articulation control system that the subject has achieved a specific L1-S1 angle, and therefore that bending should be stopped and reversed in the opposite direction to return the subject to a neutral position. Such a system would provide the capability to produce lower-variability intervertebral angulation measurements.
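  • A hedged sketch of the stopping rule described above: the overall L1-S1 angulation is taken as the difference between the tracked orientations of L1 and S1, and the articulation control system is told to reverse once a prescribed angle is reached. The function names, the orientation convention, and the 45-degree target are illustrative assumptions.

```python
def l1_s1_angle(l1_orientation_deg, s1_orientation_deg):
    """Overall lumbar angulation as the difference between the L1 and S1 orientations."""
    return l1_orientation_deg - s1_orientation_deg

def articulation_command(l1_deg, s1_deg, target_deg):
    """Return 'continue' until the standardized L1-S1 angle is reached, then 'reverse'."""
    if abs(l1_s1_angle(l1_deg, s1_deg)) >= abs(target_deg):
        return "reverse"       # stop bending and return the subject to neutral
    return "continue"

# Example: bend until 45 degrees of L1-S1 angulation, whatever the gross bending angle.
print(articulation_command(l1_deg=30.0, s1_deg=-10.0, target_deg=45.0))  # continue (40 deg)
print(articulation_command(l1_deg=33.0, s1_deg=-13.0, target_deg=45.0))  # reverse  (46 deg)
```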
  • While preferred embodiments of the present invention have been shown and described herein, it will be obvious to those skilled in the art that such embodiments are provided by way of example only. Numerous variations, changes, and substitutions will now occur to those skilled in the art without departing from the invention. It should be understood that various alternatives to the embodiments of the invention described herein may be employed in practicing the invention. It is intended that the following claims define the scope of the invention and that methods and structures within the scope of these claims and their equivalents be covered thereby.

Claims (14)

What is claimed is:
1. An apparatus for use in a shared computer network being able to carry real-time data streams, the apparatus comprising:
a non-transitory machine-readable medium that provides instructions over the shared computer network that, when executed by a set of processors, causes the processors to provide instructions to at least one of a motion device and an imaging device which are connected to the shared computer network comprising:
receiving information from at least one of the motion device and the imaging device during an imaging session over the shared computer network;
analyzing the received information during the imaging session;
determining whether a target anatomy is within a field of view during the imaging session;
evaluating a position of the imaging device, a position of the motion device, a radiographic imaging technique, and a geometric configuration of a collimator;
determining whether a change to one or more of a position of the imaging device, a position of the motion device, a radiographic imaging technique, and a geometric configuration of a collimator will bring the target anatomy within the field of view; and
instructing at least one of the motion device and imaging device over the shared computer network to change an imaging environment automatically and continuously during the imaging session to maintain the target anatomy within a field of view by one or more of changing the position of the imaging device, changing the position of the motion device, changing the radiographic imaging technique, and changing the geometric configuration of the collimator.
2. The apparatus of claim 1 wherein the instructions are streamed over the shared computer network a plurality of times during the imaging session.
3. The apparatus of claim 1, wherein the imaging device is an X-ray tube and image intensifier with dosage control.
4. The apparatus of claim 1, wherein the imaging device is a magnetic resonance scanner.
5. The apparatus of claim 1, wherein the motion device further comprises a laterally moveable platform.
6. The apparatus of claim 5, wherein the movable platform is situated on a support which lies on an upper surface of the platform base.
7. The apparatus of claim 6, wherein the imaging device is an X-ray tube and image intensifier with dosage control.
8. The apparatus of claim 5, wherein the imaging device is a magnetic resonance scanner.
9. The apparatus of claim 1, wherein the motion device further comprises a control arm for driving movement of a moveable platform.
10. The apparatus of claim 9 further comprising a processing system.
11. The apparatus of claim 1 wherein the motion device is adapted to communicate motion information to the imaging device during use.
12. The apparatus of claim 1 wherein continuous adjustments are made to an imaging environment.
13. The apparatus of claim 1 wherein a range of motion of the motion device is based on a selected target motion for a patient.
14. The apparatus of claim 1 wherein a range of motion of the device is based on a gross motion of a patient.
US15/409,856 2009-09-25 2017-01-19 Networked imaging system with real-time feedback loop Abandoned US20170128026A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/409,856 US20170128026A1 (en) 2009-09-25 2017-01-19 Networked imaging system with real-time feedback loop

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US24598409P 2009-09-25 2009-09-25
PCT/US2010/050210 WO2011038236A2 (en) 2009-09-25 2010-09-24 Systems and devices for an integrated imaging system with real-time feedback loops and methods therefor
US201213497386A 2012-07-09 2012-07-09
US14/828,077 US9277879B2 (en) 2009-09-25 2015-08-17 Systems and devices for an integrated imaging system with real-time feedback loops and methods therefor
US15/007,508 US9554752B2 (en) 2009-09-25 2016-01-27 Skeletal measuring means
US15/409,856 US20170128026A1 (en) 2009-09-25 2017-01-19 Networked imaging system with real-time feedback loop

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/007,508 Division US9554752B2 (en) 2009-09-25 2016-01-27 Skeletal measuring means

Publications (1)

Publication Number Publication Date
US20170128026A1 true US20170128026A1 (en) 2017-05-11

Family

ID=43796506

Family Applications (4)

Application Number Title Priority Date Filing Date
US13/497,386 Active 2031-11-24 US9138163B2 (en) 2009-09-25 2010-09-24 Systems and devices for an integrated imaging system with real-time feedback loop and methods therefor
US14/828,077 Active US9277879B2 (en) 2009-09-25 2015-08-17 Systems and devices for an integrated imaging system with real-time feedback loops and methods therefor
US15/007,508 Active US9554752B2 (en) 2009-09-25 2016-01-27 Skeletal measuring means
US15/409,856 Abandoned US20170128026A1 (en) 2009-09-25 2017-01-19 Networked imaging system with real-time feedback loop

Family Applications Before (3)

Application Number Title Priority Date Filing Date
US13/497,386 Active 2031-11-24 US9138163B2 (en) 2009-09-25 2010-09-24 Systems and devices for an integrated imaging system with real-time feedback loop and methods therefor
US14/828,077 Active US9277879B2 (en) 2009-09-25 2015-08-17 Systems and devices for an integrated imaging system with real-time feedback loops and methods therefor
US15/007,508 Active US9554752B2 (en) 2009-09-25 2016-01-27 Skeletal measuring means

Country Status (2)

Country Link
US (4) US9138163B2 (en)
WO (1) WO2011038236A2 (en)

Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8676293B2 (en) 2006-04-13 2014-03-18 Aecc Enterprises Ltd. Devices, systems and methods for measuring and evaluating the motion and function of joint structures and associated muscles, determining suitability for orthopedic intervention, and evaluating efficacy of orthopedic intervention
US20090099481A1 (en) 2007-10-10 2009-04-16 Adam Deitz Devices, Systems and Methods for Measuring and Evaluating the Motion and Function of Joints and Associated Muscles
US9138163B2 (en) * 2009-09-25 2015-09-22 Ortho Kinematics, Inc. Systems and devices for an integrated imaging system with real-time feedback loop and methods therefor
AU2011344107A1 (en) 2010-12-13 2013-06-27 Ortho Kinematics, Inc. Methods, systems and devices for clinical data reporting and surgical navigation
US9449380B2 (en) * 2012-03-20 2016-09-20 Siemens Medical Solutions Usa, Inc. Medical image quality monitoring and improvement system
US20140081659A1 (en) 2012-09-17 2014-03-20 Depuy Orthopaedics, Inc. Systems and methods for surgical and interventional planning, support, post-operative follow-up, and functional recovery tracking
WO2014123556A1 (en) * 2013-02-05 2014-08-14 Sound Technology Inc. Ultrasound device
US10758210B2 (en) * 2014-08-22 2020-09-01 Oncura Partners Diagnostics, Llc Ultrasound remote monitoring, operating and training system
US20160354161A1 (en) 2015-06-05 2016-12-08 Ortho Kinematics, Inc. Methods for data processing for intra-operative navigation systems
US10026187B2 (en) * 2016-01-12 2018-07-17 Hand Held Products, Inc. Using image data to calculate an object's weight
JP6727423B2 (en) 2016-09-15 2020-07-22 マイクロ・シー,エルエルシー Improved imaging system and method
US11707203B2 (en) 2016-10-11 2023-07-25 Wenzel Spine, Inc. Systems for generating image-based measurements during diagnosis
CN106580361B (en) * 2016-12-08 2018-03-23 王国良 A kind of portable orthopaedics detection means based on AR VR technologies 4D imagings
CN107320128A (en) * 2017-07-28 2017-11-07 陆爱清 Piece instrument is taken the photograph in backbone power position
EP3829444A4 (en) * 2018-08-01 2022-05-18 OXOS Medical, Inc. Improved imaging systems and methods

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5220614A (en) * 1991-02-22 1993-06-15 Professional Coin Grading Service, Inc. Automated coin grading system
US20020057828A1 (en) * 2000-11-06 2002-05-16 Fuji Photo Film Co., Ltd. Apparatus for automatically setting measurement reference element and measuring geometric feature of image
US20020085681A1 (en) * 2000-12-28 2002-07-04 Jensen Vernon Thomas Method and apparatus for obtaining and displaying computed tomography images using a fluoroscopy imaging system
US20030230723A1 (en) * 2002-06-12 2003-12-18 Koninklijke Philips Electronics N.V. Gamma camera workflow automation
US20040003981A1 (en) * 2002-07-05 2004-01-08 Sunplus Technology Co., Ltd. Apparatus and method for recognizing currency
US6819410B2 (en) * 2002-01-16 2004-11-16 National Rejectors, Inc. Gmbh Process for identifying an embossed image of a coin in an automatic coin tester
US20050203373A1 (en) * 2004-01-29 2005-09-15 Jan Boese Method and medical imaging system for compensating for patient motion
US20050259794A1 (en) * 2002-07-09 2005-11-24 Alan Breen Method for imaging the relative motion of skeletal segments
US20060032726A1 (en) * 2004-08-10 2006-02-16 Vook Dietrich W Optical inspection system for reconstructing three-dimensional images of coins and for sorting coins
US20080125649A1 (en) * 2006-09-18 2008-05-29 Andreas Meyer Automatic object tracking in a region of interest
US7418076B2 (en) * 2005-11-16 2008-08-26 General Electric Company System and method for cross table tomosynthesis imaging for trauma applications
US20090099481A1 (en) * 2007-10-10 2009-04-16 Adam Deitz Devices, Systems and Methods for Measuring and Evaluating the Motion and Function of Joints and Associated Muscles
US20100260316A1 (en) * 2009-04-13 2010-10-14 Jay Stein Integrated Breast X-Ray and Molecular Imaging System
US7916281B2 (en) * 2008-04-18 2011-03-29 Coinsecure, Inc. Apparatus for producing optical signatures from coinage
US7980378B2 (en) * 2006-03-23 2011-07-19 Cummins-Allison Corporation Systems, apparatus, and methods for currency processing control and redemption
US20120301009A1 (en) * 2010-09-15 2012-11-29 Identicoin, Inc. Coin Identification Method and Apparatus
US8838199B2 (en) * 2002-04-04 2014-09-16 Medtronic Navigation, Inc. Method and apparatus for virtual digital subtraction angiography
US9138163B2 (en) * 2009-09-25 2015-09-22 Ortho Kinematics, Inc. Systems and devices for an integrated imaging system with real-time feedback loop and methods therefor

Family Cites Families (149)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3678190A (en) 1966-12-21 1972-07-18 Bunker Ramo Automatic photo comparision system
US4210317A (en) 1979-05-01 1980-07-01 Dorothy Sherry Apparatus for supporting and positioning the arm and shoulder
US4404590A (en) 1981-08-06 1983-09-13 The Jackson Laboratory Video blink comparator
US4611581A (en) 1983-12-16 1986-09-16 Acromed Corporation Apparatus for straightening spinal columns
JPH0621769B2 (en) 1985-12-13 1994-03-23 大日本スクリ−ン製造株式会社 Pattern defect detection method and device
US4805602A (en) 1986-11-03 1989-02-21 Danninger Medical Technology Transpedicular screw and rod system
US5019081A (en) 1986-12-10 1991-05-28 Watanabe Robert S Laminectomy surgical process
US4922909A (en) 1987-07-17 1990-05-08 Little James H Video monitoring and reapposition monitoring apparatus and methods
US5058602A (en) 1988-09-30 1991-10-22 Brody Stanley R Paraspinal electromyography scanning
US5047952A (en) 1988-10-14 1991-09-10 The Board Of Trustee Of The Leland Stanford Junior University Communication system for deaf, deaf-blind, or non-vocal individuals using instrumented glove
US5099859A (en) 1988-12-06 1992-03-31 Bell Gene D Method and apparatus for comparative analysis of videofluoroscopic joint motion
SE463600B (en) 1989-05-11 1990-12-17 Ken Petersen METHOD AND APPARATUS FOR LED CONTROLLED TRAINING OF VARIOUS MOTOR DEVICES
US5000165A (en) 1989-05-15 1991-03-19 Watanabe Robert S Lumbar spine rod fixation system
US5203346A (en) 1990-03-30 1993-04-20 Whiplash Analysis, Inc. Non-invasive method for determining kinematic movement of the cervical spine
US5360431A (en) 1990-04-26 1994-11-01 Cross Medical Products Transpedicular screw system and method of use
US5129900B1 (en) 1990-07-24 1998-12-29 Acromed Corp Spinal column retaining method and apparatus
US5090042A (en) 1990-12-24 1992-02-18 Bejjani Fadi J Videofluoroscopy system for in vivo motion analysis
US5320640A (en) 1991-01-14 1994-06-14 United Apothecary, Inc. Continuous passive motion cervical spine therapy device
DE69231942T2 (en) 1991-11-22 2002-04-04 Eastman Kodak Co., Rochester Method and device for controlling the rapid display of multiple images from a digital image database
US6697659B1 (en) 1991-12-04 2004-02-24 Bonutti 2003 Trust-A Method of imaging a joint in a body of patient
US6044289A (en) 1991-12-04 2000-03-28 Bonutti; Peter M. Apparatus and method for controlling bending of a joint of a patient during imaging
US5349956A (en) 1991-12-04 1994-09-27 Apogee Medical Products, Inc. Apparatus and method for use in medical imaging
US5384862A (en) 1992-05-29 1995-01-24 Cimpiter Corporation Radiographic image evaluation apparatus and method
AU670311B2 (en) 1992-07-06 1996-07-11 Immersion Corporation Determination of kinematically constrained multi-articulated structures
US5545165A (en) 1992-10-09 1996-08-13 Biedermann Motech Gmbh Anchoring member
FI101037B (en) 1992-10-02 1998-04-15 Teuvo Sihvonen Procedure for measuring the function of the conductors and of these connected muscles
US5445152A (en) 1992-11-23 1995-08-29 Resonex Holding Company Kinematic device for producing precise incremental flexing of the knee
US5707643A (en) 1993-02-26 1998-01-13 Santen Pharmaceutical Co., Ltd. Biodegradable scleral plug
US5724970A (en) * 1993-04-06 1998-03-10 Fonar Corporation Multipositional MRI for kinematic studies of movable joints
US5316018A (en) 1993-04-12 1994-05-31 O'brien Todd D Dynamic ambulatory method and apparatus for assessment of joints of the human body
US5590271A (en) 1993-05-21 1996-12-31 Digital Equipment Corporation Interactive visualization environment with improved visual programming interface
US6077262A (en) 1993-06-04 2000-06-20 Synthes (U.S.A.) Posterior spinal implant
US5417213A (en) 1993-06-07 1995-05-23 Prince; Martin R. Magnetic resonance arteriography with dynamic intravenous contrast agents
US5427116A (en) 1993-07-13 1995-06-27 William Vanarthos Device for maintaining a desired load on a joint during observation under magnetic resonance imaging
US5548326A (en) 1993-10-06 1996-08-20 Cognex Corporation Efficient image registration
US5400800A (en) 1993-10-13 1995-03-28 Baltimore Therapeutic Equipment Co. Device for measuring lumbar spinal movement
US5443505A (en) 1993-11-15 1995-08-22 Oculex Pharmaceuticals, Inc. Biocompatible ocular implants
US5505208A (en) 1993-12-10 1996-04-09 Toomin Research Group Method for determining muscle dysfunction
US5483960A (en) 1994-01-03 1996-01-16 Hologic, Inc. Morphometric X-ray absorptiometry (MXA)
US6241734B1 (en) 1998-08-14 2001-06-05 Kyphon, Inc. Systems and methods for placing materials into bone
US20030032963A1 (en) 2001-10-24 2003-02-13 Kyphon Inc. Devices and methods using an expandable body with internal restraint for compressing cancellous bone
US5419649A (en) 1994-02-10 1995-05-30 Simpson Strong-Tie Co., Inc. Intermediate rail to post connection
US5715334A (en) 1994-03-08 1998-02-03 The University Of Connecticut Digital pixel-accurate intensity processing method for image information enhancement
US5748703A (en) 1994-03-22 1998-05-05 Cosman; Eric R. Dynamic collimator for a linear accelerator
JPH07284020A (en) 1994-04-13 1995-10-27 Hitachi Ltd Bone density measuring method
US5582186A (en) 1994-05-04 1996-12-10 Wiegand; Raymond A. Spinal analysis system
US5640200A (en) 1994-08-31 1997-06-17 Cognex Corporation Golden template comparison using efficient image registration
US5582189A (en) 1994-10-24 1996-12-10 Pannozzo; Anthony N. Method for diagnosing the subluxation of a skeletal articulation
US5564248A (en) 1994-11-10 1996-10-15 United Steel Products Company Construction hanger and method of making the same
US6269565B1 (en) 1994-11-28 2001-08-07 Smartlight Ltd. Display device
US5669911A (en) 1995-04-13 1997-09-23 Fastenetix, L.L.C. Polyaxial pedicle screw
US5603580A (en) 1995-05-30 1997-02-18 Simpson Strong-Tie Company, Inc. Positive angle fastener device
US7110587B1 (en) 1995-05-31 2006-09-19 Ge Medical Systems Israel Ltd. Registration of nuclear medicine images
US5575792A (en) 1995-07-14 1996-11-19 Fastenetix, L.L.C. Extending hook and polyaxial coupling element device for use with top loading rod fixation devices
US5643263A (en) 1995-08-14 1997-07-01 Simonson; Peter Melott Spinal implant connection assembly
US5683392A (en) 1995-10-17 1997-11-04 Wright Medical Technology, Inc. Multi-planar locking mechanism for bone fixation
US5688274A (en) 1995-10-23 1997-11-18 Fastenetix Llc. Spinal implant device having a single central rod and claw hooks
US5772592A (en) 1996-01-08 1998-06-30 Cheng; Shu Lin Method for diagnosing and monitoring osteoporosis
US5909218A (en) 1996-04-25 1999-06-01 Matsushita Electric Industrial Co., Ltd. Transmitter-receiver of three-dimensional skeleton structure motions and method thereof
FR2748387B1 (en) 1996-05-13 1998-10-30 Stryker France Sa BONE FIXATION DEVICE, IN PARTICULAR TO THE SACRUM, IN OSTEOSYNTHESIS OF THE SPINE
US5741255A (en) 1996-06-05 1998-04-21 Acromed Corporation Spinal column retaining apparatus
US5838759A (en) 1996-07-03 1998-11-17 Advanced Research And Applications Corporation Single beam photoneutron probe and X-ray imaging system for contraband detection and identification
US6075905A (en) 1996-07-17 2000-06-13 Sarnoff Corporation Method and apparatus for mosaic image construction
US6019759A (en) 1996-07-29 2000-02-01 Rogozinski; Chaim Multi-Directional fasteners or attachment devices for spinal implant elements
US5879350A (en) 1996-09-24 1999-03-09 Sdgi Holdings, Inc. Multi-axial bone screw assembly
US5797911A (en) 1996-09-24 1998-08-25 Sdgi Holdings, Inc. Multi-axial bone screw assembly
US5964760A (en) 1996-10-18 1999-10-12 Spinal Innovations Spinal implant fixation assembly
US5863293A (en) 1996-10-18 1999-01-26 Spinal Innovations Spinal implant fixation assembly
US6416515B1 (en) 1996-10-24 2002-07-09 Spinal Concepts, Inc. Spinal fixation system
US5784431A (en) 1996-10-29 1998-07-21 University Of Pittsburgh Of The Commonwealth System Of Higher Education Apparatus for matching X-ray images with reference images
EP0883817B1 (en) 1996-12-18 2005-06-29 Koninklijke Philips Electronics N.V. MR method for the imaging of jointed movable parts
US5792077A (en) 1997-03-31 1998-08-11 Bel-Art Products, Inc. Feedback goniometer for measuring flexibility of member movement
US6110130A (en) 1997-04-21 2000-08-29 Virtual Technologies, Inc. Exoskeleton device for directly measuring fingertip position and inferring finger joint angle
WO1998047426A1 (en) 1997-04-21 1998-10-29 Virtual Technologies, Inc. Goniometer-based body-tracking device and method
US6248105B1 (en) 1997-05-17 2001-06-19 Synthes (U.S.A.) Device for connecting a longitudinal support with a pedicle screw
DE29710484U1 (en) 1997-06-16 1998-10-15 Howmedica GmbH, 24232 Schönkirchen Receiving part for a holding component of a spinal implant
US5891145A (en) 1997-07-14 1999-04-06 Sdgi Holdings, Inc. Multi-axial screw
US5954674A (en) 1997-10-13 1999-09-21 Kinex Iha Corporation Apparatus for gathering biomechanical parameters
US5891060A (en) 1997-10-13 1999-04-06 Kinex Iha Corp. Method for evaluating a human joint
AUPO981997A0 (en) 1997-10-15 1997-11-06 The Lions Eye Institute of Western Australia Incorporated Stereo optic disc analyser
US6049740A (en) 1998-03-02 2000-04-11 Cyberoptics Corporation Printed circuit board testing system with page scanner
US6010503A (en) 1998-04-03 2000-01-04 Spinal Innovations, Llc Locking mechanism
US6565565B1 (en) 1998-06-17 2003-05-20 Howmedica Osteonics Corp. Device for securing spinal rods
US6090111A (en) 1998-06-17 2000-07-18 Surgical Dynamics, Inc. Device for securing spinal rods
US6298259B1 (en) 1998-10-16 2001-10-02 Univ Minnesota Combined magnetic resonance imaging and magnetic stereotaxis surgical apparatus and processes
US6427022B1 (en) 1998-11-10 2002-07-30 Western Research Company, Inc. Image comparator system and method for detecting changes in skin lesions
US6421420B1 (en) 1998-12-01 2002-07-16 American Science & Engineering, Inc. Method and apparatus for generating sequential beams of penetrating radiation
US6434264B1 (en) 1998-12-11 2002-08-13 Lucent Technologies Inc. Vision comparison inspection system
SE520847C2 (en) * 1999-02-10 2003-09-02 Scan Coin Ind Ab Coin-separating device, coin-handling apparatus including such device and a method for separating coins
US6155993A (en) 1999-03-31 2000-12-05 Queen's University At Kingston Kinesiological instrument for limb movements
US6351547B1 (en) 1999-04-28 2002-02-26 General Electric Company Method and apparatus for formatting digital images to conform to communications standard
JP3609285B2 (en) * 1999-05-19 2005-01-12 Laurel Bank Machines Co., Ltd. Coin discrimination device
DE19936286C2 (en) 1999-08-02 2002-01-17 Lutz Biedermann Bone screw
US6469717B1 (en) 1999-10-27 2002-10-22 Dejarnette Research Systems, Inc. Computerized apparatus and method for displaying X-rays and the like for radiological analysis including image shift
DE60023540T2 (en) 1999-11-01 2006-06-08 Arthrovision, Inc., Montreal Investigation of disease development through the use of nuclear resonance tomography
US6907280B2 (en) 1999-12-02 2005-06-14 The General Hospital Corporation Method and apparatus for objectively measuring pain, pain treatment and other related techniques
US7027650B2 (en) 1999-12-10 2006-04-11 Christian Williame Dynamic computing imagery, especially for visceral osteopathy and for articular kinetics
WO2001045551A1 (en) 1999-12-22 2001-06-28 The Trustees Of The University Of Pennsylvania Judging changes in images of the eye
US6280395B1 (en) 2000-01-19 2001-08-28 Mpr Health Systems, Inc. System and method for determining muscle dysfunction
JP2003520658A (en) 2000-01-27 2003-07-08 Koninklijke Philips Electronics N.V. Method and system for extracting spinal geometric data
AU4262601A (en) 2000-03-31 2001-10-15 British Telecommunications Public Limited Company Image processing
DE60117524T2 (en) 2000-04-05 2006-08-17 Kyphon Inc., Sunnyvale Devices for the treatment of broken and/or diseased bone
DE10027337C1 (en) 2000-06-02 2001-08-30 Rolf Schindler Positioning aid for tomographic functional examination of the lumbar spine
US6524315B1 (en) 2000-08-08 2003-02-25 Depuy Acromed, Inc. Orthopaedic rod/plate locking mechanism
US6608916B1 (en) 2000-08-14 2003-08-19 Siemens Corporate Research, Inc. Automatic detection of spine axis and spine boundary in digital radiography
US6608917B1 (en) 2000-08-14 2003-08-19 Siemens Corporate Research, Inc. Detection of vertebra endplates in digital radiography
CA2420038C (en) 2000-08-30 2010-11-09 Johns Hopkins University Devices for intraocular drug delivery
AU2001290887B2 (en) 2000-09-14 2006-06-08 The Board Of Trustees Of The Leland Stanford Junior University Assessing condition of a joint and cartilage loss
JP4234311B2 (en) 2000-09-19 2009-03-04 Fujifilm Corporation Image alignment method
DE10048847A1 (en) 2000-10-02 2002-04-18 Basf Coatings Ag Dual-cure multi-component system, e.g. for painting cars, comprises a polyisocyanate component and a mixture containing compounds with isocyanate-reactive groups and light-activatable bonds
JP2002125958A (en) 2000-10-25 2002-05-08 Fuji Photo Film Co Ltd Measurement processor for geometric measurement of an image
SE517276C2 (en) 2000-11-13 2002-05-21 Dynamed Intressenter Ab Compressible foot plate for recording pressure
US6499484B1 (en) 2000-12-21 2002-12-31 Koninklijke Philips Electronics, N.V. Crucial ligaments loading device
CA2432225C (en) 2001-01-03 2008-01-15 Michael J. Brubaker Sustained release drug delivery devices with prefabricated permeable plugs
US6996261B2 (en) * 2001-01-30 2006-02-07 Decharms R Christopher Methods for physiological monitoring, training, exercise and regulation
BR0116855B1 (en) 2001-02-07 2012-06-12 Process for establishing a virtual three-dimensional representation of a bone or bone fragment from X-ray images
US6451021B1 (en) 2001-02-15 2002-09-17 Third Millennium Engineering, Llc Polyaxial pedicle screw having a rotating locking element
DE10108965B4 (en) 2001-02-17 2006-02-23 DePuy Spine Sàrl Bone screw
DE10114013B4 (en) 2001-03-22 2005-06-23 Siemens AG Magnetic resonance system
US6934574B1 (en) 2001-06-21 2005-08-23 Fonar Corporation MRI scanner and method for modular patient handling
US7127090B2 (en) 2001-07-30 2006-10-24 Accuimage Diagnostics Corp Methods and systems for combining a plurality of radiographic images
US8724865B2 (en) 2001-11-07 2014-05-13 Medical Metrics, Inc. Method, computer software, and system for tracking, stabilizing, and reporting motion between vertebrae
US20030086596A1 (en) 2001-11-07 2003-05-08 Medical Metrics, Inc. Method, computer software, and system for tracking, stabilizing, and reporting motion between vertebrae
US6890312B1 (en) 2001-12-03 2005-05-10 William B. Priester Joint angle indication system
JP3639826B2 (en) 2002-04-03 2005-04-20 Canon Inc. Radiation imaging apparatus, program, computer-readable storage medium, and radiation imaging system
JP3697233B2 (en) 2002-04-03 2005-09-21 Canon Inc. Radiation image processing method and radiation image processing apparatus
US6963768B2 (en) 2002-05-16 2005-11-08 General Electric Company Whole body MRI scanning with moving table and interactive control
JP4002165B2 (en) 2002-11-12 2007-10-31 GE Medical Systems Global Technology Company, LLC Table system
JP3859071B2 (en) 2002-11-25 2006-12-20 GE Medical Systems Global Technology Company, LLC Parallel link table and tomographic imaging apparatus
US6889695B2 (en) * 2003-01-08 2005-05-10 Cyberheart, Inc. Method for non-invasive heart treatment
DE102004020783A1 (en) 2004-04-27 2005-11-24 Ilan Elias Diagnostic device
EP1643905A2 (en) 2003-07-10 2006-04-12 Neurocom International, Inc Apparatus and method for characterizing contributions of forces associated with a body part of a subject
US20050107681A1 (en) 2003-07-23 2005-05-19 Griffiths David M. Wireless patient monitoring device for magnetic resonance imaging
JP4555293B2 (en) 2003-09-03 2010-09-29 Kyphon SARL Device and associated method for creating a cavity in an internal body region
DE10353110B4 (en) 2003-11-12 2006-02-16 Delta Engineering GmbH Actuator platform for guiding medical instruments in minimally invasive interventions
US20050148948A1 (en) 2003-12-19 2005-07-07 Caputa Steven G. Sutureless ophthalmic drug delivery system and method
US8195273B2 (en) 2004-02-02 2012-06-05 Esaote S.P.A. Magnetic resonance imaging apparatus
DE102004055234B4 (en) 2004-11-16 2014-01-02 Fraunhofer-Gesellschaft zur Förderung der angewandten Forschung e.V. Apparatus and method for determining at least one characteristic point of a joint to be orthopedically measured
US20060149136A1 (en) 2004-12-22 2006-07-06 Kyphon Inc. Elongating balloon device and method for soft tissue expansion
US7152261B2 (en) 2005-02-22 2006-12-26 Jackson Roger P Modular multi-articulated patient support system
US7206251B1 (en) 2005-03-08 2007-04-17 Altera Corporation Dual port PLD embedded memory block to support read-before-write in one clock cycle
EP1865850B1 (en) * 2005-03-29 2012-12-05 Koninklijke Philips Electronics N.V. Method and apparatus for the observation of a catheter in a vessel system
AU2006247498A1 (en) 2005-05-18 2006-11-23 Sonoma Orthopedic Products, Inc. Minimally invasive actuable bone fixation devices, systems and methods of use
DE102005030646B4 (en) * 2005-06-30 2008-02-07 Siemens AG A method of contour visualization of at least one region of interest in 2D fluoroscopic images
US20070067034A1 (en) 2005-08-31 2007-03-22 Chirico Paul E Implantable devices and methods for treating micro-architecture deterioration of bone tissue
US8676293B2 (en) 2006-04-13 2014-03-18 Aecc Enterprises Ltd. Devices, systems and methods for measuring and evaluating the motion and function of joint structures and associated muscles, determining suitability for orthopedic intervention, and evaluating efficacy of orthopedic intervention
CN101594824B (en) * 2006-06-28 2012-01-11 Koninklijke Philips Electronics N.V. Optimal rotational trajectory determination for RA based on pre-determined optimal view map
US8107695B2 (en) * 2007-06-27 2012-01-31 General Electric Company Methods and systems for assessing patient movement in diagnostic imaging
AU2011344107A1 (en) 2010-12-13 2013-06-27 Ortho Kinematics, Inc. Methods, systems and devices for clinical data reporting and surgical navigation

Patent Citations (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5220614A (en) * 1991-02-22 1993-06-15 Professional Coin Grading Service, Inc. Automated coin grading system
US20020057828A1 (en) * 2000-11-06 2002-05-16 Fuji Photo Film Co., Ltd. Apparatus for automatically setting measurement reference element and measuring geometric feature of image
US20020085681A1 (en) * 2000-12-28 2002-07-04 Jensen Vernon Thomas Method and apparatus for obtaining and displaying computed tomography images using a fluoroscopy imaging system
US6819410B2 (en) * 2002-01-16 2004-11-16 National Rejectors, Inc. Gmbh Process for identifying an embossed image of a coin in an automatic coin tester
US8838199B2 (en) * 2002-04-04 2014-09-16 Medtronic Navigation, Inc. Method and apparatus for virtual digital subtraction angiography
US20030230723A1 (en) * 2002-06-12 2003-12-18 Koninklijke Philips Electronics N.V. Gamma camera workflow automation
US20040003981A1 (en) * 2002-07-05 2004-01-08 Sunplus Technology Co., Ltd. Apparatus and method for recognizing currency
US20050259794A1 (en) * 2002-07-09 2005-11-24 Alan Breen Method for imaging the relative motion of skeletal segments
US20050203373A1 (en) * 2004-01-29 2005-09-15 Jan Boese Method and medical imaging system for compensating for patient motion
US20060032726A1 (en) * 2004-08-10 2006-02-16 Vook Dietrich W Optical inspection system for reconstructing three-dimensional images of coins and for sorting coins
US7418076B2 (en) * 2005-11-16 2008-08-26 General Electric Company System and method for cross table tomosynthesis imaging for trauma applications
US7980378B2 (en) * 2006-03-23 2011-07-19 Cummins-Allison Corporation Systems, apparatus, and methods for currency processing control and redemption
US20110270695A1 (en) * 2006-03-23 2011-11-03 Cummins-Allison Corporation System, Apparatus, and Methods for Currency Processing Control and Redemption
US20080125649A1 (en) * 2006-09-18 2008-05-29 Andreas Meyer Automatic object tracking in a region of interest
US20090099481A1 (en) * 2007-10-10 2009-04-16 Adam Deitz Devices, Systems and Methods for Measuring and Evaluating the Motion and Function of Joints and Associated Muscles
US20120130285A1 (en) * 2007-10-10 2012-05-24 Aecc Enterprises Limited Devices, systems, and methods for measuring and evaluating the motion and function of joints and associated muscles
US8777878B2 (en) * 2007-10-10 2014-07-15 Aecc Enterprises Limited Devices, systems, and methods for measuring and evaluating the motion and function of joints and associated muscles
US7916281B2 (en) * 2008-04-18 2011-03-29 Coinsecure, Inc. Apparatus for producing optical signatures from coinage
US20100260316A1 (en) * 2009-04-13 2010-10-14 Jay Stein Integrated Breast X-Ray and Molecular Imaging System
US9138163B2 (en) * 2009-09-25 2015-09-22 Ortho Kinematics, Inc. Systems and devices for an integrated imaging system with real-time feedback loop and methods therefor
US9277879B2 (en) * 2009-09-25 2016-03-08 Ortho Kinematics, Inc. Systems and devices for an integrated imaging system with real-time feedback loops and methods therefor
US9554752B2 (en) * 2009-09-25 2017-01-31 Ortho Kinematics, Inc. Skeletal measuring means
US8615123B2 (en) * 2010-09-15 2013-12-24 Identicoin, Inc. Coin identification method and apparatus
US20120301009A1 (en) * 2010-09-15 2012-11-29 Identicoin, Inc. Coin Identification Method and Apparatus

Also Published As

Publication number Publication date
US20160135753A1 (en) 2016-05-19
US20150351667A1 (en) 2015-12-10
US20120321168A1 (en) 2012-12-20
US9554752B2 (en) 2017-01-31
WO2011038236A3 (en) 2011-06-16
US9277879B2 (en) 2016-03-08
WO2011038236A2 (en) 2011-03-31
US9138163B2 (en) 2015-09-22

Similar Documents

Publication Publication Date Title
US9554752B2 (en) Skeletal measuring means
US9491415B2 (en) Methods, systems and devices for spinal surgery position optimization
AU2007238017B2 (en) Method for imaging the motion of joints
Schopper et al. Higher stability and more predictive fixation with the Femoral Neck System versus Hansson Pins in femoral neck fractures Pauwels II
Bucke et al. Validity of the digital inclinometer and iPhone when measuring thoracic spine rotation
US8777878B2 (en) Devices, systems, and methods for measuring and evaluating the motion and function of joints and associated muscles
Anderst et al. Cervical motion segment contributions to head motion during flexion/extension, lateral bending, and axial rotation
US20200205900A1 (en) Dynamic 3d motion capture for surgical implant orientation
Breen et al. Measurement of intervertebral motion using quantitative fluoroscopy: report of an international forum and proposal for use in the assessment of degenerative disc disease in the lumbar spine
Cao et al. In vivo kinematics of functional ankle instability patients and lateral ankle sprain copers during stair descent
Chleboun et al. Measurement of segmental lumbar spine flexion and extension using ultrasound imaging
Chang et al. The segmental distribution of cervical range of motion: a comparison of ACDF versus TDR-C
Yoshida et al. Three‐dimensional shoulder kinematics: Upright four‐dimensional computed tomography in comparison with an optical three‐dimensional motion capture system
Prinold et al. The influence of extreme speeds on scapula kinematics and the importance of controlling the plane of elevation
Lou et al. Validation of a novel handheld 3D ultrasound system for imaging scoliosis–phantom study
Inui et al. The influence of three-dimensional scapular kinematics on arm elevation angle in healthy subjects
Desroches Assessing the Use of Ultrasound to Quantify Spine Kinematics

Legal Events

Date Code Title Description
AS Assignment

Owner name: ORTHO KINEMATICS, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:DEITZ, ADAM;REEL/FRAME:041016/0358

Effective date: 20101206

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION

AS Assignment

Owner name: STATERA SPINE, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ORTHO KINEMATICS, INC.;REEL/FRAME:048486/0596

Effective date: 20181101

AS Assignment

Owner name: WENZEL SPINE, INC., TEXAS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:STATERA SPINE, INC.;REEL/FRAME:053868/0471

Effective date: 20200731