WO2013136093A2 - Image data storage and sharing - Google Patents
- Publication number
- WO2013136093A2 (PCT/GB2013/050671)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- computer
- implemented method
- image data
- data structure
- pixels
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/77—Retouching; Inpainting; Scratch removal
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/10—Segmentation; Edge detection
- G06T7/12—Edge-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/62—Text, e.g. of license plates, overlay texts or captions on TV images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00838—Preventing unauthorised reproduction
- H04N1/00856—Preventive measures
- H04N1/00864—Modifying the reproduction, e.g. outputting a modified copy of a scanned original
- H04N1/00872—Modifying the reproduction, e.g. outputting a modified copy of a scanned original by image quality reduction, e.g. distortion or blacking out
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10132—Ultrasound image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30004—Biomedical image processing
-
- G—PHYSICS
- G16—INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR SPECIFIC APPLICATION FIELDS
- G16H—HEALTHCARE INFORMATICS, i.e. INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR THE HANDLING OR PROCESSING OF MEDICAL OR HEALTHCARE DATA
- G16H30/00—ICT specially adapted for the handling or processing of medical images
- G16H30/20—ICT specially adapted for the handling or processing of medical images for handling medical images, e.g. DICOM, HL7 or PACS
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/00838—Preventing unauthorised reproduction
- H04N1/0084—Determining the necessity for prevention
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2201/00—Indexing scheme relating to scanning, transmission or reproduction of documents or the like, and to details thereof
- H04N2201/0077—Types of the still picture apparatus
- H04N2201/0079—Medical imaging device
Definitions
- This invention relates to a method of storing image data comprising images of a subject generated by a medical imaging device and to a method of sharing image data comprising images of a subject generated by a medical imaging device. It also relates to a method of detecting text features in image data.
- Whilst the invention relates to any medical imaging device, such as X-ray, computerised tomography (CT) and magnetic resonance imaging (MRI) scanners, it finds particular use with ultrasound scanning equipment.
- Ultrasound scanning is used for a wide variety of medical imaging purposes, in particular for obtaining images of foetuses during gestation to monitor their prenatal development.
- With legacy ultrasound equipment, and indeed with modern portable equipment, the images are transitory, simply being displayed on a monitor screen whilst the ultrasound probe is in contact with the patient.
- Many scanners allow a record of a static image to be made on a thermal printer and some allow a video to be made on a DVD or stored in a video file format on a memory stick.
- Image data can also be extracted from some ultrasound scanners digitally, for example using the Digital Imaging and Communications in Medicine (DICOM) standard.
- Images from an ultrasound scan are normally labelled with text identifying the patient, including for example, their name, date of birth, patient identification number, and other similar items. It is a breach of data security protocols in many countries for images bearing such information to be stored and/or distributed to third parties without appropriate consent being obtained from the patient beforehand. This limits the possibilities for using such images for clinical and educational purposes as mentioned above without laborious editing of the images to remove the identifying text.
- a computer-implemented method of storing image data comprising images of a subject generated by a medical imaging device, the method comprising: (a) capturing the image data; (b) receiving subject identification metadata; (c) analysing at least one selected element of the image data to detect features identifying the subject and modifying the or each selected element of the image data by removing or obscuring any such detected features; and (d) storing a subject record comprising the or each modified selected element of the image data and the subject identification metadata.
- the features identifying the subject are usually text features, such as graphical text or a visual representation of text or a caption including text.
- the image data may be captured either directly from the medical imaging device or indirectly via an intermediate device.
- step (c) is carried out automatically using a computer device.
- the medical imaging device is an ultrasound scanner
- the subject record typically further comprises a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device.
- the subject identification metadata preferably comprises one or more of the subject's name, the subject's e-mail address and a unique identification number allocated to the subject.
- the or each selected element of the image data may comprise at least one video object selected by a user.
- this video object may be a video file captured from the medical imaging device in one or more of a variety of formats, such as DV or MPEG-4.
- the or each selected element of the image data may comprise at least one still image object selected by a user.
- this still image object may be an image file captured from the medical imaging device in one or more of a variety of formats, such as JPEG.
- step (d) further comprises transmitting the subject record to a remote server.
- the subject record can then be stored by the server.
- the server is accessible on the Internet so that the subject can access their subject record for sharing purposes.
- Step (d) may comprise storing the subject identification metadata including the unique subject record identification number in a database record along with one or more uniform resource locators indicating the location of the image data in a separate file system.
- the method further comprises constructing a manifest data object, which specifies the or each modified selected element of the image data included in the subject record and including the manifest in the subject record.
- the manifest is typically a list, for example stored in a file, of the video and/or still image objects that comprise the image data.
- the method normally further comprises validating the received subject record, prior to storing the subject record, by confirming that the subject record comprises all the image data specified in the manifest and/or the validity of the image data and/or the validity of a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device.
- step (c) comprises: (i) forming an edge mask comprising only edge pixels in the image data; (ii) for each row of the edge mask, forming a data structure representing candidate text elements consisting of the start and end positions in the row of contiguous horizontal edges; and (iii) for each data structure representing candidate text elements, calculating confidence values for the data structure and for each portion of the data structure on either side of the largest gap between adjacent contiguous horizontal edges; and either replacing the data structure with two data structures, each consisting of one of the portions of the data structure on either side of the largest gap, and then repeating step (c), if the confidence values do not meet a predetermined set of confidence criteria; or adding the data structure to a data structure representing detected text elements.
- the edge mask is preferably formed in step (i) by applying an edge detection algorithm, such as Canny edge detector, to the image data.
- the method may further comprise performing an adaptive thresholding algorithm on the edge mask prior to step (ii) such that the edge pixels in the edge mask have a first binary value, all other pixels having a second binary value.
- the start position of a contiguous horizontal edge may be detected in step (ii) by detecting the transition along a row between pixels having the second binary value to pixels having the first binary value and/or by detecting a pixel having the first binary value in the left-hand most position along the row.
- the end position of a contiguous horizontal edge may be detected in step (ii) by detecting the transition along a row between pixels having the first binary value to pixels having the second binary value and/or by detecting a pixel having the first binary value in the right-hand most position along the row.
- the method preferably further comprises detecting regions of connected pixels having the first binary value and removing detected regions from the edge mask that do not meet predefined size criteria.
- the size criteria include the height and aspect ratio of a minimum containing rectangle around a detected region. This not only speeds up the processing but improves the quality of the results of text element detection as irrelevant data is not included in the processing, reducing the likelihood of false detection.
- the confidence values do not meet the predetermined set of confidence criteria if either:
- the confidence value for the data structure is less than a first threshold and the confidence value for either portion of the data structure on either side of the largest gap exceeds the confidence value for the data structure; or all of the confidence values are below a second threshold.
- the method further comprises forming an image mask from the data structure representing detected text elements by setting pixels in the image mask to a value depending on the confidence value for each data structure representing candidate text elements added to the data structure representing detected text elements.
- the pixels in the image mask may be set to the values depending on the confidence value for each data structure representing candidate text elements by multiplying 255 by the confidence value.
- the confidence value typically ranges from a value of 0 to 1 and therefore this procedure embeds the confidence value in the image mask.
- the method further comprises removing connections between rows of pixels in the image mask that fall below a threshold value and/or removing gaps between rows of pixels in the image mask that fall below a threshold value.
- the method may further comprise performing a thresholding algorithm on the image mask.
- the method may further comprise performing a morphological dilation algorithm on the thresholded image mask.
- the method further comprises removing or obscuring text elements in the image data by modifying pixels in the image data relating to detected text elements according to the data structure representing detected text elements.
- the pixels are preferably modified by an inpainting algorithm.
- the method further comprises sharing a selected part of the image data in the stored subject record by: (e) receiving login details from a user; (f) authenticating the login details to confirm that the user is the subject identified by the subject identification metadata; and (g) receiving a sharing command from the user, the sharing command indicating a selected one of a plurality of sharing services over which the selected part of the image data is to be shared with a third party, and sharing the selected part of the image data over the selected sharing service.
- the selected part of the image data may be all of the image data.
- the method may further comprise transmitting the selected part of image data to one or more e-mail addresses specified in the subject record.
- the sharing services may comprise one or more social media services and/or e-mail.
- the skilled person will be aware of such social media services (e.g. Youtube, Twitter and Facebook) and how to share the image data over these services using the application programming interfaces (APIs) provided by the operators of these services for that purpose.
- the login details received from the user will typically be in the form of a username and password. These are preferably provided to the user by e-mail.
- the user's e-mail address may be obtained when the image data is captured.
- the username and password may be provided to the user in a short message service (SMS) message, for which purpose a mobile phone number belonging to the user may be obtained when the image data is captured.
- the login details may be encoded into a barcode.
- the barcode can be on a sticker that may be stuck to the user's medical notes.
- the barcode may be scanned to link the captured image data to the user.
- a computer-implemented method of sharing image data comprising images of a subject generated by a medical imaging device, the method comprising:
- This method allows a straightforward way for a subject to share image data generated by a medical imaging device, such as an ultrasound scanner, with friends and family. It is vastly more efficient than the distribution of hard copies on paper or DVD mentioned above. Furthermore, since the image data has been previously modified to remove any features identifying the subject there is no breach of data security protocols with this method, even where the subject record is received at a third party server.
- the steps of the method are carried out on a computer device.
- the features identifying the subject are usually text features, such as graphical text or a visual representation of text or a caption including text.
- the medical imaging device is an ultrasound scanner.
- the subject record typically further comprises a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device.
- the subject identification metadata preferably comprises one or more of the subject's name, the subject's e-mail address and a unique identification number allocated to the subject.
- the image data may comprise at least one video object selected by a user.
- this video object may be a video file captured from the medical imaging device in one or more of a variety of formats, such as DV or MPEG-4.
- the image data may comprise at least one still image object selected by a user.
- this still image object may be an image file captured from the medical imaging device in one or more of a variety of formats, such as JPEG.
- the method further comprises validating the received subject record, prior to storing the subject record, by confirming that the subject record comprises all the image data specified in a manifest and/or the validity of the image data and/or the validity of a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device. If validation fails, the subject record is usually placed in a queue for manual investigation.
- the manifest is usually part of the subject record and is typically a list, for example stored in a file, of the video and/or still image objects that comprise the image data.
- step (b) comprises storing a unique subject record identification number in a database record along with the subject identification data and one or more uniform resource locators indicating the location of the image data in a separate file system.
- the separate file system could be a cloud-based file system, such as Amazon's S3 service. It could alternatively be a local file system.
- the method further comprises transmitting the image data from the subject record to one or more e-mail addresses specified in the subject record.
- the sharing services typically comprise one or more social media services and/or e-mail.
- the skilled person will be aware of such social media services (e.g. Youtube, Twitter and Facebook) and how to share the image data over these services using the application programming interfaces (APIs) provided by the operators of these services for that purpose.
- the login details received from the user will typically be in the form of a username and password. These are preferably provided to the user by e-mail.
- the user's e-mail address may be obtained when the image data is captured.
- the username and password may be provided to the user in a short message service (SMS) message, for which purpose a mobile phone number belonging to the user may be obtained when the image data is captured.
- the login details may be encoded into a barcode.
- the barcode can be on a sticker that may be stuck to the user's medical notes.
- the barcode may be scanned to link the captured image data to the user.
- the user may simply be provided with their username and password in printed form.
- a method of sharing image data representing images of a subject generated by a medical imaging device comprising a combination of a method according to the first aspect of the invention followed by a method according to the second aspect of the invention.
- a system comprising one or more capture devices adapted to perform a method according to the first aspect of the invention, each of which is coupled to a respective medical imaging device in use.
- a system comprising one or more capture devices as defined by claim 24.
- a system comprising one or more capture devices adapted to perform a method according to the first aspect of the invention, each of which is coupled to a respective medical imaging device, in use, and a remote storage device adapted to perform a method according to the second aspect of the invention, the remote storage device and the or each capture device together forming a network.
- a system comprising one or more capture devices, each of which is coupled to a respective medical imaging device, in use, and a remote storage device, the remote storage device and the or each capture device together forming a network and together being adapted to perform a method according to the first aspect of the invention.
- a system comprising one or more capture devices, each of which is coupled to a respective medical imaging device, in use, and a remote storage device, the remote storage device and the or each capture device together forming a network as defined by claim 25.
- a method of detecting text elements in image data comprising: (a) forming an edge mask comprising only edge pixels in the image data; (b) for each row of the edge mask, forming a data structure representing candidate text elements consisting of the start and end positions in the row of contiguous horizontal edges; and (c) for each data structure representing candidate text elements, calculating confidence values for the data structure and for each portion of the data structure on either side of the largest gap between adjacent contiguous horizontal edges; and either replacing the data structure with two data structures, each consisting of one of the portions of the data structure on either side of the largest gap, and then repeating step (c), if the confidence values do not meet a predetermined set of confidence criteria; or adding the data structure to a data structure representing detected text elements.
- This method provides a straightforward way of detecting text elements in image data, which finds a multitude of uses in image processing situations.
- One such use is to find text elements in image data comprising images of a subject generated by medical imaging devices, such as an ultrasound scanner, that might identify the subject.
- the text features are, for example, graphical text or a visual representation of text or a caption including text.
- each data structure representing candidate text elements is placed in a stack in step (b). Then, in step (c)(i), it is possible to replace the data structure with two data structures consisting of the portions of the data structure on either side of the largest gap by popping the data structure representing candidate text elements and pushing the two data structures consisting of the portions of the data structure on either side of the largest gap onto the stack. In this way, the two data structures consisting of the portions of the data structure on either side of the largest gap are in the right place on the stack (i.e. at the top) for calculation of their confidence values when step (c) is repeated.
- the edge mask is typically formed in step (a) by applying an edge detection algorithm, such as Canny edge detector, to the image data.
- the method further comprises performing an adaptive thresholding algorithm on the edge mask prior to step (b) such that the edge pixels in the edge mask have a first binary value, all other pixels having a second binary value.
- the first binary value represents white pixels and the second binary value represents black pixels.
- the largest gap between adjacent contiguous horizontal edges is, in this case, the largest expanse of black pixels on a row between white pixels.
- the start position of a contiguous horizontal edge may be detected in step (b) by detecting the transition along a row between pixels having the second binary value to pixels having the first binary value and/or by detecting a pixel having the first binary value in the left-hand most position along the row.
- the end position of a contiguous horizontal edge may be detected in step (b) by detecting the transition along a row between pixels having the first binary value to pixels having the second binary value and/or by detecting a pixel having the first binary value in the right-hand most position along the row.
- the method preferably further comprises detecting regions of connected pixels having the first binary value and removing detected regions from the edge mask that do not meet predefined size criteria.
- the size criteria include the height and aspect ratio of a minimum containing rectangle around a detected region. This not only speeds up the processing but improves the quality of the results of text element detection as irrelevant data is not included in the processing, reducing the likelihood of false detection.
- the confidence values do not meet the predetermined set of confidence criteria if either:
- the confidence value for the data structure is less than a first threshold and the confidence value for either portion of the data structure on either side of the largest gap exceeds the confidence value for the data structure; or all of the confidence values are below a second threshold.
- the method further comprises forming an image mask from the data structure representing detected text elements by setting pixels in the image mask to a value depending on the confidence value for each data structure representing candidate text elements added to the data structure representing detected text elements.
- the pixels in the image mask may be set to the values depending on the confidence value for each data structure representing candidate text elements by multiplying 255 by the confidence value.
- the confidence value typically ranges from a value of 0 to 1 and therefore this procedure embeds the confidence value in the image mask.
- the method further comprises removing connections between rows of pixels in the image mask that fall below a threshold value and/or removing gaps between rows of pixels in the image mask that fall below a threshold value.
- the method further comprises performing a thresholding algorithm on the image mask.
- This results in an image mask in which all pixels with a confidence value higher than the threshold are present (e.g. by making them white) whereas those with a confidence value lower than the threshold are not present (e.g. by making them black). It is straightforward then to identify the text elements in the original image data using this mask.
- the method further comprises performing a morphological dilation algorithm on the thresholded image mask.
- the method further comprises removing or obscuring text elements in the image data by modifying pixels in the image data relating to detected text elements according to the data structure representing detected text elements.
- the pixels are typically modified by an inpainting algorithm.
- Figure 1 shows a block diagram of a system for carrying out the invention
- Figure 2 shows a flow diagram of an implementation of the invention
- Figure 3 shows a flow diagram of a text removal technique
- Figure 4 shows a detailed flow chart of one module used in the text removal technique for identifying text elements in image data.
- an ultrasound scanner 1 is shown.
- the ultrasound scanner 1 is connected to a video capture device 2.
- This comprises a computer device with a network connection for connection to the Internet 3.
- the video capture device 2 captures image data from the ultrasound scanner 1 based on user input received from a touch screen input device 5 as will be explained below.
- the image data is captured in digital format over a network from the scanner, for example using the DICOM medical imaging information exchange format.
- the image data is captured by way of an analogue-to-digital video converter, such as the ADVC55 from Grass Valley USA, LLC. This receives analogue video directly from the ultrasound scanner 1, for example in S-Video or composite video formats, and converts it to digital form, for example DV format, for further processing by the computer device.
- a second ultrasound scanner 6 is also shown in Figure 1 along with a respective video capture device 7 and touch screen input device 8. These are identical to the ultrasound scanner 1, video capture device 2 and touch screen input device 5. They may be situated in the same hospital or clinic as ultrasound scanner 1 and its appended video capture device 2 or in another, totally unrelated hospital or clinic. They are shown merely to illustrate that the invention is scalable for use with an unlimited number of ultrasound scanners. The only difference is that each video capture device 2 is programmed with a unique node identification number when it is installed. This serves the purpose of being able to track the source of captured image data to a particular ultrasound scanner 1.
- Also shown in Figure 1 are a laptop 9 and a server 10, the function of which will be explained below.
- Figure 2 shows a flow diagram of the method performed by the video capture device 2 (or 7). All of the interaction with a clinician or other user is performed using the touch screen input device 5 (or 8).
- the method starts in step 20 when a clinician logs in. This is done in one of the conventional ways, for example using a username and password. Assuming that the login is successful, the clinician enters the patient identification details into the touch screen input device 5.
- the patient identification details may be the patient's name or a number assigned to them, for example a patient number allocated by the hospital to that particular patient.
- the patient identification details may be entered manually using a keyboard displayed on the touch screen or by scanning a barcode printed on the patient's notes.
- a warning message may also be displayed in step 21 to remind the clinician to advise the patient that their personal data will leave the control of the hospital or clinic during the process.
- the patient may also be required to confirm their acceptance of this by entering a secret password they have previously been allocated for this purpose.
- the video capture device 2 captures all the video images output by the ultrasound scanner 1. As mentioned above, this is captured from the ultrasound scanner 1 either digitally, for example using the DICOM protocol, or using an analogue-to-digital video converter.
- the resulting image data is displayed in step 22 on the touch screen input device 5 to the clinician and/or patient using video player software running on the computer device within the video capture device 2.
- the clinician and/or patient can then, in step 22, select either the whole captured video sequence or portions of it or both. Each selected portion may be either a section of video or a still image.
- the clinician may then enter, in step 23, one or more e-mail addresses to which the selected portions of the image data or a notification that the selected portions are available for sharing should be sent in due course.
- These e-mail addresses will typically be the patient's e-mail address and the clinician's e-mail address. They may also be a predefined group of e-mail addresses identified by a group identifier.
- the captured image data is video image data in DV format.
- the software running on the computer device within video capture device 2 extracts, in step 24, a JPEG file for each selected still image portion and a DV file for each selected video sequence.
- Next, a metadata file is constructed. This is a text file indicating the name and e-mail address of the patient, the node identification number allocated to the ultrasound scanner 1, the clinician's identification number (e.g. their username entered above), the start and end time of the scan, a manifest of all the files for the still images and video sequences selected, and any e-mail addresses selected in step 23.
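The description does not fix a format for this text metadata file. The following is a minimal sketch, in Python, of how such a file might be assembled; the JSON layout and all field names are illustrative assumptions rather than part of the patent.

```python
import json
from datetime import datetime, timezone

def build_metadata_file(path, patient_name, patient_email, node_id, clinician_id,
                        scan_start, scan_end, selected_files, share_emails):
    """Write a text metadata file describing one capture session.

    All field names are illustrative; the description only requires that the
    file records the patient details, node and clinician identifiers, the scan
    times, a manifest of the selected files and any sharing e-mail addresses.
    """
    record = {
        "patient_name": patient_name,
        "patient_email": patient_email,
        "node_id": node_id,              # uniquely identifies the capture device/scanner
        "clinician_id": clinician_id,    # e.g. the clinician's login username
        "scan_start": scan_start.isoformat(),
        "scan_end": scan_end.isoformat(),
        "manifest": selected_files,      # JPEG / DV files selected in step 22
        "share_emails": share_emails,    # addresses entered in step 23
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(record, f, indent=2)
    return record

# Example usage with made-up values:
# build_metadata_file("session.meta", "REDACTED", "patient@example.com", "NODE-0001",
#                     "clin42", datetime.now(timezone.utc), datetime.now(timezone.utc),
#                     ["scan_001.jpg", "scan_clip_001.dv"], ["patient@example.com"])
```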
- Each of the DV and JPEG files is then subjected, in step 26, to an image processing method for removing or obscuring any text features present in the image data that could be used to identify the patient. This will be explained in detail below with reference to Figure 3.
- the image data is, as a result of this method, anonymised so that it can be sent from the hospital where the ultrasound scanner 1 is located without breaching data security protocols.
- each of the DV files is converted to MPEG-4 and the resulting bundle of MPEG-4 files, JPEG files and text metadata file is zipped, for example using Lempel-Ziv or similar method.
- the conversion to MPEG-4 and zipping are carried out to compress the data.
- the zipped bundle of files is then transmitted over the Internet 3 to a remote server 10 in step 28.
- This is done using a file replication process, such as rsync, over a virtual private network (VPN), which is encrypted to protect the data in transit.
- the remote server 10 receives the transmitted bundle of files and validates these.
- the validation process involves checking that each of the JPEG and MPEG-4 files specified in the manifest is actually present in the transmitted bundle of files and that it is not corrupted. It also involves checking that a valid node identification number has been included in the text metadata file (e.g. that the node identification number is one that has been allocated and is still in use).
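A minimal sketch of this validation step is shown below, assuming the text metadata file has already been parsed into a dictionary; the use of file size as a proxy for corruption and the function and field names are assumptions.

```python
import os

def validate_bundle(bundle_dir, metadata, allocated_node_ids):
    """Check that every file named in the manifest exists and is non-empty, and
    that the node identification number is one that has been allocated. A
    production system would also verify file integrity (e.g. with checksums)."""
    if metadata.get("node_id") not in allocated_node_ids:
        return False, "unknown node identification number"
    for name in metadata.get("manifest", []):
        path = os.path.join(bundle_dir, name)
        if not os.path.isfile(path) or os.path.getsize(path) == 0:
            return False, f"missing or corrupt file: {name}"
    return True, "ok"

# ok, reason = validate_bundle("/tmp/bundle", record, {"NODE-0001", "NODE-0002"})
# if not ok:
#     pass  # place the record on a queue for manual investigation
```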
- Each of the MPEG-4 files is then converted to a variety of different formats to suit the different types of devices to which the video sequences might need to be shared.
- the files are converted to appropriate formats to ensure that the video sequences are viewable on Flash and HTML5 browsers, and on iPhone and Android smartphones.
- a database is then updated by inserting a new record in step 32.
- This record includes a unique subject record identification number allocated by the server 10 along with the subject identification data and a uniform resource locator (URL) indicating the location of each of the JPEG files and converted video files in a separate file system.
- the separate file system is Amazon's S3 cloud-based system, although any other file system could be used.
- the unique subject record identification number is used rather than the patient's name or other identifying information so that if the system is compromised, there is no indication that the video or still image files correspond to any particular patient.
- the JPEG and video files are stored in the locations referred to by the URLs in the database record.
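The following sketch illustrates one way such a database record might be written, with SQLite standing in for whatever database the server actually uses; the table layout, the use of a UUID as the unique subject record identification number and the function name are assumptions.

```python
import sqlite3
import uuid

def store_subject_record(db_path, subject_metadata, file_urls):
    """Insert a database record keyed by a unique subject record identification
    number rather than the patient's name, alongside URLs pointing at the image
    files held in a separate file system (e.g. an S3 bucket)."""
    record_id = str(uuid.uuid4())
    conn = sqlite3.connect(db_path)
    conn.execute("""CREATE TABLE IF NOT EXISTS subject_records
                    (record_id TEXT PRIMARY KEY, metadata TEXT, urls TEXT)""")
    conn.execute("INSERT INTO subject_records VALUES (?, ?, ?)",
                 (record_id, repr(subject_metadata), "\n".join(file_urls)))
    conn.commit()
    conn.close()
    return record_id
```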
- In step 33, e-mails are sent to the patient and/or clinician if their e-mail addresses were included in the text metadata file. This e-mail will normally simply indicate that the images and video sequences are now available for sharing.
- Next, a user logs in to start the sharing process. This will typically be a patient or clinician and they will log in using a username and password. A clinician would be granted access to any uploaded records that he or she is associated with, whereas a patient would only be granted access to their own particular records. The system displays the accessible records and allows the user to select one of these.
- In step 35, the user selects an image file or video file and a sharing option for it.
- If the user is a clinician, they will only be allowed to download it or e-mail it to other authorised clinicians.
- If the user is a patient, they will be allowed to share it via e-mail or a social networking service, such as Youtube, Twitter or Facebook.
- the server will interface with the selected service using the APIs provided for uploading data to these services. In this way, the patient can easily share the image and/or video files with their friends and family.
- Figure 3 shows the image processing method used to detect and remove text elements in the image data.
- the method starts in step 40 by loading a file of image data for processing.
- the file is analysed to determine whether it is a video object, a still image object or an unknown format. If it is an unknown format, processing finishes. If it is a still image object then processing continues in a still image processing branch commencing with step 42, whereas if it is a video object then processing continues in a video processing branch commencing with step 43.
- the two processing branches make use of similar techniques. Indeed, the modules used in the still image processing branch are a subset of those used in the video processing branch. However, the video image processing branch is more complicated, operating over two passes, to cater for the more complicated nature of a video sequence.
- In step 42, a pair of predefined masks is loaded. These masks may be created by a user to define areas of image data that they know will always contain text and that they know will always be free of text. A positive image mask indicates areas (in white) where it is assumed that there will always be text, whereas a negative mask indicates areas (again in white) where it is assumed that there will never be text. The use of these masks is optional and they can simply be left empty (i.e. black) if not required.
- In step 44, a module is used to detect text features in the still image.
- This module returns an image mask indicating areas (in white) where text elements have been detected in the image data.
- In step 45, the image mask returned in step 44 is modified using the predefined image masks loaded in step 42 by combining the image mask of step 44 with the positive image mask and the complement of the negative image mask. This ensures that all areas where a user has indicated that there are always text elements are included on the resultant image mask and that the resultant image mask does not include areas where a user has indicated that there are never text elements.
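A sketch of this mask combination, assuming 8-bit single-channel masks in which white (255) marks text areas, might look as follows (the function name is illustrative):

```python
import cv2

def apply_predefined_masks(detected_mask, positive_mask, negative_mask):
    """Combine the detected-text mask with the user-defined masks: areas in the
    positive mask are always included, areas in the negative mask are always
    excluded. All three masks are assumed to be single-channel 8-bit images."""
    combined = cv2.bitwise_or(detected_mask, positive_mask)
    combined = cv2.bitwise_and(combined, cv2.bitwise_not(negative_mask))
    return combined
```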
- the text elements in the original still image data corresponding to the areas of text indicated in the resultant image mask are then removed in step 46 using an inpainting algorithm, for example from the OpenCV library.
- This inpainting procedure obscures the detected text using pixels from areas around the detected text. This makes the obscuring relatively unnoticeable.
- the modified image data is then saved to replace the original still image data.
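The inpainting step might be realised with OpenCV's inpaint function, which the description points to; the particular algorithm flag and radius used below are assumptions.

```python
import cv2

def remove_text(image, text_mask):
    """Obscure detected text by inpainting from the surrounding pixels. The
    Telea algorithm and a 3-pixel radius are assumptions, not requirements;
    text_mask must be an 8-bit single-channel mask with white marking text."""
    return cv2.inpaint(image, text_mask, 3, cv2.INPAINT_TELEA)

# cleaned = remove_text(cv2.imread("frame.png"), combined_mask)
# cv2.imwrite("frame.png", cleaned)   # the modified data replaces the original
```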
- In step 43, the predefined masks are loaded. This is identical to step 42 in the still image processing branch and need not be described further.
- In step 47, a set of history accumulators is initialised for use later. Their purpose will be explained below.
- In step 48, the next frame in the sequence is loaded.
- In step 49, text features in the frame are detected using the same module as in step 44 of the still image processing branch. The detailed operation of this module will be described below with reference to Figure 4.
- In step 50, the resultant image mask returned by step 49 is added to the history accumulators initialised in step 47.
- In step 51, the history accumulators are analysed. This looks for small anomalies in the image masks between frames.
- It detects where the masks for single frames or small groups of frames in the video sequence indicate the existence of text elements when frames either side do not. It also detects where the masks for single frames or small groups of frames in the video sequence indicate the absence of text elements when frames either side indicate they are present. These anomalies are removed by modifying the masks either to remove the spurious indication of text elements or to include the text elements where they are spuriously absent.
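The description does not fix the exact analysis performed on the history accumulators. One plausible realisation, sketched below, is a temporal median filter over a short window of frames, which suppresses masks that appear or disappear for only one or two frames; the window length is an assumption.

```python
import numpy as np
from scipy.ndimage import median_filter

def smooth_masks_over_time(mask_sequence, window=5):
    """Suppress single-frame anomalies by taking a temporal median over a short
    window of frames. This is only one plausible reading of the history
    accumulator analysis, not the patent's prescribed procedure."""
    stack = np.stack(mask_sequence).astype(np.uint8)      # shape (T, H, W)
    smoothed = median_filter(stack, size=(window, 1, 1))  # median along time only
    return [smoothed[i] for i in range(smoothed.shape[0])]
```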
- In step 53, the next frame in the sequence is loaded.
- In step 54, the corresponding image mask, modified in accordance with the history accumulators in step 51, is loaded and then modified using the predefined masks in step 55.
- the predefined masks are used in precisely the same way as in step 45.
- In step 56, the inpainting procedure (discussed above with reference to step 46) is used on the frame loaded in step 53 to remove detected text elements in accordance with the image masks modified as appropriate in step 55.
- the modified video sequence is then saved to replace the original.
- the image data (which may represent a still image or a frame in a video sequence) is processed by an edge detection algorithm, such as Canny edge detector and then adaptively thresholded and transformed to a binary image.
- the binary image contains only black and white pixels.
- the white pixels are the pixels of further interest.
- the aspect ratio is considered to meet the criteria if it exceeds a ratio of 2.5 (measured as the ratio of width to height).
- the height criterion is considered met if the ratio of the height of the connected region of pixels to the image height is within a range of 0.011 to 0.04.
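A sketch of the edge-mask formation and size filtering described above might look as follows; the Canny thresholds are assumptions, and the interpretation of the aspect-ratio and height criteria follows the figures quoted in the description.

```python
import cv2
import numpy as np

def text_edge_mask(gray):
    """Form an edge mask and discard connected regions unlikely to be text.
    Canny already yields a binary mask; a non-binary edge detector would need
    the adaptive thresholding step mentioned in the description. The numeric
    thresholds (aspect ratio > 2.5, height between 1.1% and 4% of the image
    height) are taken from the description; their exact interpretation here is
    an assumption."""
    edges = cv2.Canny(gray, 100, 200)            # white (255) = edge pixels
    img_h = gray.shape[0]
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges)
    keep = np.zeros_like(edges)
    for i in range(1, n):                        # label 0 is the background
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        if h == 0:
            continue
        if (w / h) > 2.5 and 0.011 <= (h / img_h) <= 0.04:
            keep[labels == i] = 255
    return keep
```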
- a horizontal transform is applied to the remaining regions to pick out the high-frequency alternation between background and foreground pixels, which is a significant feature of text elements in image data.
- the transform is performed separately for each row of pixels in the image data.
- the transitions along the row from black to white pixels are detected.
- the white pixels adjacent to these transitions are marked as separators.
- white pixels in the left-hand most position and white pixels in the right-hand most position are detected and marked as separators.
- each separator marks the beginning or end of a contiguous horizontal region along the row of pixels.
- a data structure is formed for each row as an array indicating the position (i.e. the column position) along the row of each of the separators.
- the data structure represents (by their location) candidate text elements consisting of the start and end positions in the row of contiguous horizontal edges.
- Each data structure is placed on a stack for further processing.
- Two metrics relating to these data structures, known as "words", are calculated.
- the first is the length, which is equal to the number of separators minus 1.
- the second is the maximum gap, which is the maximum number of black pixels between two adjacent separators.
- the maximum gap represents the largest gap between adjacent contiguous horizontal edges in a row.
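The per-row horizontal transform and the resulting "word" data structure might be implemented along these lines; the dictionary layout is an assumption reused by the later sketches.

```python
import numpy as np

def row_word(row):
    """Build the "word" data structure for one row of the binary edge mask: an
    array of separator column positions (the starts and ends of white runs),
    plus the two metrics used later: length (number of separators minus 1) and
    the maximum gap in black pixels between adjacent white runs."""
    white = row > 0
    if not white.any():
        return None
    d = np.diff(white.astype(np.int8))
    starts = np.where(d == 1)[0] + 1             # black -> white transitions
    ends = np.where(d == -1)[0]                  # white -> black transitions
    if white[0]:
        starts = np.r_[0, starts]                # left-hand most white pixel
    if white[-1]:
        ends = np.r_[ends, len(row) - 1]         # right-hand most white pixel
    separators = np.sort(np.r_[starts, ends])
    gaps = starts[1:] - ends[:-1] - 1            # black pixels between runs
    max_gap = int(gaps.max()) if gaps.size else 0
    return {"separators": separators,
            "length": len(separators) - 1,
            "max_gap": max_gap}
```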
- Confidence values for each "word" (i.e. data structure formed in step 62) and for each portion of each "word" either side of its maximum gap are then calculated in step 63.
- Trapezoidal fuzzy numbers are used for this to determine the likelihood that each "word" (or portion) is part of a real word depending on the length and maximum gap calculated above.
- the trapezoidal fuzzy numbers are calculated from these two metrics, which are used because the length correlates with the number of letters in real words typically found on ultrasound images and the maximum gap corresponds to the maximum distance found between letters in a real word.
- the confidence value of the "word" (or portion) is calculated using fuzzy set theory as the minimum value between two confidence criteria.
- the membership function m_Y is determined on the set of real numbers and has the form of a trapezium, the shape of which is taken from assumptions about the "word" object's length on the image.
- Its membership function m_Z is determined on the set of real numbers and has a trapezoidal form, the shape of which is taken from assumptions about the gaps in "word" objects.
- the value m_X(x) is the confidence value for x, where x is the "word" object.
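A sketch of the trapezoidal fuzzy confidence calculation is given below; the shape parameters of the two membership functions are illustrative assumptions, since the description only states that they derive from assumptions about word length and letter spacing.

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal membership function: 0 below a, rising to 1 between b and c,
    falling back to 0 at d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def word_confidence(length, max_gap,
                    length_shape=(1, 3, 12, 20),   # assumed shape for m_Y (word length)
                    gap_shape=(-1, 0, 3, 8)):      # assumed shape for m_Z (maximum gap)
    """Confidence of a "word" as the minimum of the two trapezoidal memberships,
    following the fuzzy-set formulation described above."""
    return min(trapezoid(length, *length_shape), trapezoid(max_gap, *gap_shape))
```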
- In step 64, an assessment is made as to whether the required confidence criteria are met. If they are not met, then the data structure is replaced on the stack by the portions of the data structure on either side of its maximum gap. In other words, the data structure ("word") is split. Processing then proceeds back to step 63, where the confidence values are calculated again, this time on the first portion of the split "word" and the portions on either side of its maximum gap. This loop continues until the confidence criteria are met, at which point the data structure from the stack is added in step 66 to an output array.
- the confidence criteria are considered not to be met if either the confidence value for the data structure is less than a first threshold and the confidence value for either portion of the data structure on either side of the largest gap exceeds the confidence value for the data structure; or if all of the confidence values are below a second threshold.
- Suitable values for the thresholds in general text detection processing, for example suitable for use in detecting text on ultrasound scan image data, are 0.75 for the first threshold (T1) and 0.25 for the second threshold (T2), with T2 < T1, T2 indicating a confidence value that is too small.
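The stack-based splitting loop might then be sketched as follows, reusing the word dictionaries and confidence function from the earlier sketches; split_fn, which returns the two portions of a word either side of its maximum gap, is a hypothetical helper, and the rule for discarding a "word" that can no longer be split is an added assumption to guarantee termination.

```python
def detect_words(initial_words, confidence_fn, split_fn, t1=0.75, t2=0.25):
    """Stack-based refinement of candidate "words" (cf. steps 63 to 66). Each
    word is either accepted into the output array or split at its largest gap
    and re-examined. confidence_fn(word) returns a value in [0, 1]."""
    stack = list(initial_words)
    output = []
    while stack:
        word = stack.pop()
        conf = confidence_fn(word)
        if word["length"] <= 1:
            # a single contiguous run cannot be split further (added guard)
            if conf >= t2:
                output.append((word, conf))
            continue
        left, right = split_fn(word)
        conf_l, conf_r = confidence_fn(left), confidence_fn(right)
        split_helps = conf < t1 and max(conf_l, conf_r) > conf
        all_too_small = max(conf, conf_l, conf_r) < t2
        if split_helps or all_too_small:
            stack.append(left)    # criteria not met: replace the word with its
            stack.append(right)   # two portions and re-examine them
        else:
            output.append((word, conf))
    return output
```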
- the output array is then used to build an image mask in step 67. This is done by copying all pixels from the first separator to the last one to the corresponding places on the mask.
- the pixel values are set to be equal to 255 multiplied by the "word's" confidence. Initially, the mask is totally black. In other words, all its pixels have an initial value of zero, which is unchanged if not modified by copying pixels to the mask from the output array.
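Building the confidence mask from the output array might look like this; the tuple layout carrying the row index alongside each accepted "word" is an assumption.

```python
import numpy as np

def build_confidence_mask(shape, accepted):
    """Build the (initially all-black) image mask by writing 255 x confidence
    into each row span covered by an accepted "word". `accepted` is a list of
    (row_index, word, confidence) tuples using the word dictionaries sketched
    earlier."""
    mask = np.zeros(shape, dtype=np.uint8)
    for row_idx, word, conf in accepted:
        first, last = word["separators"][0], word["separators"][-1]
        mask[row_idx, first:last + 1] = int(255 * conf)
    return mask
```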
- the image mask is then processed to remove vertical gaps and vertical connections between pixels in the mask with a non-zero confidence (i.e. non-black pixels) that fall below a threshold.
- trapezoidal fuzzy numbers are used to determine whether the vertical gaps and connections fall below the threshold based on the assumption that text height typically lies within a certain range of values.
- the confidence value of a group of pixels is recalculated after a vertical gap or connection is removed. If the confidence value increases then the removal is deemed correct. Otherwise, it is reversed.
- the goal of this stage is to remove false vertical connections and fill false vertical gaps on the mask.
- the first step is to transpose the mask to place the columns of the mask in rows. This is not an essential step, and is only performed to make subsequent calculations faster. The following sequence of operations is executed for each row of the transposed mask separately.
- a "column” object is formed containing the contiguous sequence of pixels from the row of the transposed mask along with the confidence values.
- the length of the "column” object is defined as the total number of pixels that are contained in it.
- Let U_c be the universal set of "column" objects.
- Let col_len: U_c → R be the function which, for each c from U_c, returns its length.
- Let X_c = <U_c, m_Xc> be the fuzzy set of "column" objects that satisfy a text height criterion to some degree determined by the membership function m_Xc.
- Let x_c = <R, m_xc> be the trapezoidal fuzzy number whose membership function is defined from assumptions made about the height of text areas on the image to search.
- a merge operation is then used on neighbouring "column" objects. If two "column" objects are neighbours on the same row of the transposed mask, the merge operation between them returns a new "column" object that satisfies the following requirements. First, the resultant "column" object contains all the pixels from both "columns" being merged and the pixels which lie between them. Second, the confidence values of the pixels between the "columns" being merged are assigned the minimal confidence value among all the pixels of the "columns" being merged.
- a row has the following sequence of pixels designated by their confidence values:
- the pixels that previously belonged to the gap between the "columns" are assigned the minimum confidence value among the pixels of the "columns" being merged.
- each pixel's confidence is recalculated as the minimum value of the current pixel's confidence and the confidence values of the "column" which it belongs to. For example, if for the "column" object:
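A sketch of the merge operation and the accept-or-revert rule is given below, reusing the trapezoid() helper from the earlier fuzzy-number sketch; the text-height shape parameters and the way the before/after confidences are compared are assumptions.

```python
def merge_columns(row, col_a, col_b, height_shape=(2, 5, 25, 40)):
    """Merge two neighbouring "column" objects on one row of the transposed
    mask. col_a and col_b are (start, end) index pairs; row is a 1-D numpy
    array of per-pixel confidences (0-255). Gap pixels take the minimum
    confidence found in the two columns, and the merge is kept only if the
    text-height membership of the merged column increases."""
    (a0, a1), (b0, b1) = sorted([col_a, col_b])
    merged = row.copy()
    fill = min(row[a0:a1 + 1].min(), row[b0:b1 + 1].min())
    merged[a1 + 1:b0] = fill                         # fill the gap between the columns
    before = max(trapezoid(a1 - a0 + 1, *height_shape),
                 trapezoid(b1 - b0 + 1, *height_shape))
    after = trapezoid(b1 - a0 + 1, *height_shape)    # length of the merged column
    return merged if after > before else row         # otherwise the merge is reversed
```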
- the non-binary image mask of step 67 is then thresholded to turn it into a binary image mask in step 68.
- a morphological dilate operation is then performed and the resultant image mask returned by the module.
- the resultant image mask can then be used to determine which pixels in the original image data should be obscured by the inpainting process referred to above. In this way, text elements can be detected and obscured to anonymise image data.
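The final thresholding and dilation might be sketched as follows; the threshold value and the structuring-element size are assumptions.

```python
import cv2

def finalise_mask(confidence_mask, threshold=128, kernel_size=3):
    """Turn the non-binary confidence mask into the binary mask returned by the
    module: pixels above the threshold become white, all others black, and a
    morphological dilation slightly grows the detected areas so that the
    subsequent inpainting fully covers the text."""
    _, binary = cv2.threshold(confidence_mask, threshold, 255, cv2.THRESH_BINARY)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (kernel_size, kernel_size))
    return cv2.dilate(binary, kernel, iterations=1)
```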
Abstract
A computer-implemented method of storing image data comprising images of a subject generated by a medical imaging device is disclosed. The method comprises: a) capturing the image data; b) receiving subject identification metadata; c) analysing at least one selected element of the image data to detect features identifying the subject and modifying the or each selected element of the image data by removing or obscuring any such detected features; and d) storing a subject record comprising the or each modified selected element of the image data and the subject identification metadata.
Description
IMAGE DATA STORAGE AND SHARING
This invention relates to a method of storing image data comprising images of a subject generated by a medical imaging device and to a method of sharing image data comprising images of a subject generated by a medical imaging device. It also relates to a method of detecting text features in image data.
Whilst the invention relates to any medical imaging device such as X-ray, computerised tomography (CT), and magnetic resonance imaging (MRI) scanners, it finds particular use with ultrasound scanning equipment.
Ultrasound scanning is used for a wide variety of medical imaging purposes, in particular for obtaining images of foetuses during gestation to monitor their prenatal development. With legacy ultrasound equipment, and indeed with modern portable equipment, the images are transitory, simply being displayed on a monitor screen whilst the ultrasound probe is in contact with the patient. Many scanners allow a record of a static image to be made on a thermal printer and some allow a video to be made on a DVD or stored in a video file format on a memory stick. Image data can also be extracted from some ultrasound scanners digitally, for example using the Digital Imaging and Communications in Medicine (DICOM) standard.
It is desirable for the images generated by an ultrasound scanner to be stored and shared for a variety of reasons. Firstly, clinicians may wish to make use of the images for diagnostic purposes after the scan has been taken, to share them with colleagues (possibly in other clinics) to obtain a second opinion, and for teaching students. Another popular use is by the patients themselves, who are often keen to share the images from an ultrasound scan of an unborn child with family and friends.
There are problems with the current approaches to this, however, from a data security perspective. Images from an ultrasound scan are normally labelled with text identifying the patient, including for example, their name, date of birth, patient identification number, and other similar items. It is a breach of data security protocols in many countries for images bearing such information to be stored and/or distributed to third parties without appropriate consent being obtained from the patient beforehand. This limits the possibilities for using such images for clinical and educational purposes as mentioned above without laborious editing of the images to remove the identifying text.
From the patient's perspective, whilst they are free to distribute the images as they wish, the use of hard copy media is simply inefficient to share with family and friends who may live significant distances from and seldom see the patient.
In accordance with a first aspect of the invention, there is provided a computer-implemented method of storing image data comprising images of a subject generated by a medical imaging device, the method comprising:
a) capturing the image data;
b) receiving subject identification metadata;
c) analysing at least one selected element of the image data to detect features identifying the subject and modifying the or each selected element of the image data by removing or obscuring any such detected features; and
d) storing a subject record comprising the or each modified selected element of the image data and the subject identification metadata.
By storing a record that comprises image data from which features identifying the subject have been removed, it is possible to distribute the image data for educational and clinical purposes without breaching data security protocols. The above-mentioned problems are thereby overcome.
The features identifying the subject are usually text features, such as graphical text or a visual representation of text or a caption including text.
The image data may be captured either directly from the medical imaging device or indirectly via an intermediate device.
The steps of the method are carried out by one or more computer devices. For example, step (c) is carried out automatically using a computer device.
In a preferred embodiment, the medical imaging device is an ultrasound scanner.
The subject record typically further comprises a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device.
The subject identification metadata preferably comprises one or more of the subject's name, the subject's e-mail address and a unique identification number allocated to the subject.
The or each selected element of the image data may comprise at least one video object selected by a user. For example, this video object may be a video file captured from the medical imaging device in one or more of a variety of formats, such as DV or MPEG-4.
The or each selected element of the image data may comprise at least one still image object selected by a user. For example, this still image object may be an image file captured from the medical imaging device in one or more of a variety of formats, such as JPEG.
Typically, step (d) further comprises transmitting the subject record to a remote server. The subject record can then be stored by the server. In a preferred embodiment, the server is accessible on the Internet so that the subject can access their subject record for sharing purposes.
Step (d) may comprise storing the subject identification metadata including the unique subject record identification number in a database record along with one or more uniform resource locators indicating the location of the image data in a separate file system.
Preferably, the method further comprises constructing a manifest data object, which specifies the or each modified selected element of the image data included in the subject record and including the manifest in the subject record. This enables a straightforward way of validating the subject record on storage and subsequently as explained below. The manifest is typically a list, for example stored in a file, of the video and/or still image objects that comprise the image data.
The method normally further comprises validating the received subject record, prior to storing the subject record, by confirming that the subject record comprises all the image data specified in the manifest and/or the validity of the image data and/or the validity of a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device.
In a preferred embodiment, step (c) comprises:
i) forming an edge mask comprising only edge pixels in the image data;
ii) for each row of the edge mask, forming a data structure representing candidate text elements consisting of the start and end positions in the row of contiguous horizontal edges; and
iii) for each data structure representing candidate text elements, calculating confidence values for the data structure and for each portion of the data structure on either side of the largest gap between adjacent contiguous horizontal edges; and either
replacing the data structure with two data structures, each consisting of one of the portions of the data structure on either side of the largest gap and then repeating step (c), if the confidence values do not meet a predetermined set of confidence criteria; or
adding the data structure to a data structure representing detected text elements.
The edge mask is preferably formed in step (i) by applying an edge detection algorithm, such as Canny edge detector, to the image data.
The method may further comprise performing an adaptive thresholding algorithm on the edge mask prior to step (ii) such that the edge pixels in the edge mask have a first binary value, all other pixels having a second binary value.
The start position of a contiguous horizontal edge may be detected in step (ii) by detecting the transition along a row between pixels having the second binary value to pixels having the first binary value and/or by detecting a pixel having the first binary value in the left-hand most position along the row.
The end position of a contiguous horizontal edge may be detected in step (ii) by detecting the transition along a row between pixels having the first binary value to pixels having the second binary value and/or by detecting a pixel having the first binary value in the right-hand most position along the row.
The method preferably further comprises detecting regions of connected pixels having the first binary value and removing detected regions from the edge mask that do not meet predefined size criteria. Typically, the size criteria include the height and aspect ratio of a minimum containing rectangle around a detected region. This not only speeds up the processing but improves the quality of the results of text element detection as irrelevant data is not included in the processing, reducing the likelihood of false detection.
In one embodiment, the confidence values do not meet the predetermined set of confidence criteria if either:
i) the confidence value for the data structure is less than a first threshold and the confidence value for either portion of the data structure on either side of the largest gap exceeds the confidence value for the data structure; or
ii) all of the confidence values are below a second threshold.
Typically, the method further comprises forming an image mask from the data structure representing detected text elements by setting pixels in the image mask to a value depending on the confidence value for each data structure representing candidate text elements added to the data structure representing detected text elements.
In this image mask, the pixels in the image mask may be set to the values depending on the confidence value for each data structure representing candidate text elements by multiplying 255 by the confidence value. The confidence value typically ranges from a value of 0 to 1 and therefore this procedure embeds the confidence value in the image mask.
Typically, the method further comprises removing connections between rows of pixels in the image mask that fall below a threshold value and/or removing gaps between rows of pixels in the image mask that fall below a threshold value.
The method may further comprise performing a thresholding algorithm on the image mask.
The method may further comprise performing a morphological dilation algorithm on the thresholded image mask.
Typically, the method further comprises removing or obscuring text elements in the image data by modifying pixels in the image data relating to detected text elements according to the data structure representing detected text elements.
The pixels are preferably modified by an inpainting algorithm.
In one embodiment, the method further comprises sharing a selected part of the image data in the stored subject record by:
e) receiving login details from a user;
f) authenticating the login details to confirm that the user is the subject identified by the subject identification metadata; and
g) receiving a sharing command from the user, the sharing command indicating a selected one of a plurality of sharing services over which the selected part of the image data is to be shared with a third party, and sharing the selected part of the image data over the selected sharing service.
The selected part of the image data may be all of the image data.
In this embodiment, the method may further comprise transmitting the selected part of image data to one or more e-mail addresses specified in the subject record.
In this embodiment, the sharing services may comprise one or more social media services and/or e-mail. The skilled person will be aware of such social media services (e.g. Youtube, Twitter and Facebook) and how to share the image data over these services using the application programming interfaces (APIs) provided by the operators of these services for that purpose.
The login details received from the user will typically be in the form of a username and password. This is preferably provided to the user by e-mail. The user's e-mail address may be obtained when the image data is captured.
Alternatively, the username and password may be provided to the user in a short message service (SMS) message, for which purpose a mobile phone number belonging to the user may be obtained when the image data is captured. In a variant of this alternative, a multimedia messaging service (MMS) message including the user's username and password and one or more of the or each modified selected element of the image data may be sent to the user.
The login details, for example a username and password, may be encoded into a barcode. The barcode can be on a sticker that may be stuck to the user's medical notes. When the image data is captured, the barcode may be scanned to link the captured image data to the user.
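The barcode format is not prescribed; assuming a QR code and the third-party qrcode package, the encoding might be sketched as follows (the "username:password" payload format is purely illustrative):

```python
import qrcode  # third-party package: pip install qrcode[pil]

def login_barcode(username, password, out_path="login_qr.png"):
    """Encode the login details into a QR code image.
    The payload format below is an assumption for illustration only."""
    img = qrcode.make(f"{username}:{password}")
    img.save(out_path)
    return out_path
```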
As yet another alternative, the user may simply be provided with their username and password in printed form.
In accordance with a second aspect of the invention, there is provided a computer-implemented method of sharing image data comprising images of a subject generated by a medical imaging device, the method comprising:
a) receiving a subject record comprising the image data and subject identification metadata, wherein the image data has been previously modified to remove any features identifying the subject;
b) storing the subject record;
c) receiving login details from a user;
d) authenticating the login details to confirm that the user is the subject identified by the subject identification metadata; and
e) receiving a sharing command from the user, the sharing command indicating a selected one of a plurality of sharing services over which the image data is to be shared with a third party, and sharing the image data over the selected sharing service.
This method allows a straightforward way for a subject to share image data generated by a medical imaging device, such as an ultrasound scanner, with friends and family. It is vastly more efficient than the distribution of hard copies on paper or DVD mentioned above. Furthermore, since the image data has been previously modified to remove any features identifying the subject there is no breach of data security protocols with this method, even where the subject record is received at a third party server.
The steps of the method are carried out on a computer device.
The features identifying the subject are usually text features, such as graphical text or a visual representation of text or a caption including text.
In a preferred embodiment, the medical imaging device is an ultrasound scanner.
The subject record typically further comprises a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device.
The subject identification metadata preferably comprises one or more of the subject's name, the subject's e-mail address and a unique identification number allocated to the subject.
The image data may comprise at least one video object selected by a user. For example, this video object may be a video file captured from the medical imaging device in one or more of a variety of formats, such as DV or MPEG-4.
The image data may comprise at least one still image object selected by a user. For example, this still image object may be an image file captured from the medical imaging device in one or more of a variety of formats, such as JPEG.
Normally, the method further comprises validating the received subject record, prior to storing the subject record, by confirming that the subject record comprises all the image data specified in a manifest and/or the validity of the image data and/or the validity of a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device. If validation fails, the subject record is usually placed in a queue for manual investigation.
The manifest is usually part of the subject record and is typically a list, for example stored in a file, of the video and/or still image objects that comprise the image data.
Typically, step (b) comprises storing a unique subject record identification number in a database record along with the subject identification data and one or more uniform resource locators indicating the location of the image data in a separate file system. The separate file system could be a cloud-based file system, such as Amazon's S3 service. It could alternatively be a local file system.
In one embodiment, the method further comprises transmitting the image data from the subject record to one or more e-mail addresses specified in the subject record.
The sharing services typically comprise one or more social media services and/or e-mail. The skilled person will be aware of such social media services (e.g. Youtube, Twitter and Facebook) and how to share the image data over these services using the application programming interfaces (APIs) provided by the operators of these services for that purpose.
The login details received from the user will typically be in the form of a username and password. This is preferably provided to the user by e-mail. The user's e-mail address may be obtained when the image data is captured.
Alternatively, the username and password may be provided to the user in a short message service (SMS) message, for which purpose a mobile phone number belonging to the user may be obtained when the image data is captured. In a variant of this alternative, a multimedia messaging service (MMS) message including the user's username and
password and one or more of the or each modified selected element of the image data may be sent to the user.
The login details, for example a username and password, may be encoded into a barcode. The barcode can be on a sticker that may be stuck to the user's medical notes. When the image data is captured, the barcode may be scanned to link the captured image data to the user.
As yet another alternative, the user may simply be provided with their username and password in printed form.
In accordance with a third aspect of the invention, there is provided a method of sharing image data representing images of a subject generated by a medical imaging device, the method comprising a combination of a method according to the first aspect of the invention followed by a method according to the second aspect of the invention.
In accordance with a fourth aspect of the invention, there is provided a system comprising one or more capture devices adapted to perform a method according to the first aspect of the invention, each of which is coupled to a respective medical imaging device in use. Thus, there is provided a system comprising one or more capture devices as defined by claim 24.
In accordance with a fifth aspect of the invention, there is provided a system comprising one or more capture devices adapted to perform a method according to the first aspect of the invention, each of which is coupled to a respective medical imaging device, in use, and a remote storage device adapted to perform a method according to the second aspect of the invention, the remote storage device and the or each capture device together forming a network.
In accordance with a sixth aspect of the invention, there is provided a system comprising one or more capture devices, each of which is coupled to a respective medical imaging device, in use, and a remote storage device, the remote storage device and the or each capture device together forming a network and together being adapted to perform a method according to the first aspect of the invention. Thus, there is provided a system comprising one or more capture devices, each of which is coupled to a respective medical imaging device, in use, and a remote storage device, the remote storage device and the or each capture device together forming a network as defined by claim 25.
In accordance with a seventh aspect of the invention, there is provided a method of detecting text elements in image data, the method comprising:
a) forming an edge mask comprising only edge pixels in the image data;
b) for each row of the edge mask, forming a data structure representing candidate text elements consisting of the start and end positions in the row of contiguous horizontal edges; and
c) for each data structure representing candidate text elements, calculating confidence values for the data structure and for each portion of the data structure on either side of the largest gap between adjacent contiguous horizontal edges; and either
i) replacing the data structure with two data structures, each consisting of one of the portions of the data structure on either side of the largest gap and then repeating step (c), if the confidence values do not meet a predetermined set of confidence criteria; or
ii) adding the data structure to a data structure representing detected text elements.
This method provides a straightforward way of detecting text elements in image data, which finds a multitude of uses in image processing situations. One such use is to find text elements in image data comprising images of a subject generated by medical imaging devices, such as an ultrasound scanner, that might identify the subject. The text features are, for example, graphical text or a visual representation of text or a caption including text.
In one embodiment, each data structure representing candidate text elements is placed in a stack in step (b). Then, in step (c)(i), it is possible to replace the data structure with two data structures consisting of the portions of the data structure on either side of the largest gap by popping the data structure representing candidate text elements and pushing the two data structures consisting of the portions of the data structure on either side of the largest gap onto the stack. In this way, the two data structures consisting of the portions of the data structure on either side of the largest gap are in the right place on the stack (i.e. at the top) for calculation of their confidence values when step (c) is repeated.
The edge mask is typically formed in step (a) by applying an edge detection algorithm, such as Canny edge detector, to the image data.
Preferably, the method further comprises performing an adaptive thresholding algorithm on the edge mask prior to step (b) such that the edge pixels in the edge mask have a first binary value, all other pixels having a second binary value.
Typically, the first binary value represents white pixels and the second binary value represents black pixels. The largest gap between adjacent contiguous horizontal edges is, in this case, the largest expanse of black pixels on a row between white pixels.
The start position of a contiguous horizontal edge may be detected in step (b) by detecting the transition along a row between pixels having the second binary value to pixels having the first binary value and/or by detecting a pixel having the first binary value in the left-hand most position along the row.
The end position of a contiguous horizontal edge may be detected in step (b) by detecting the transition along a row between pixels having the first binary value to pixels having the second binary value and/or by detecting a pixel having the first binary value in the right-hand most position along the row.
The method preferably further comprises detecting regions of connected pixels having the first binary value and removing detected regions from the edge mask that do not meet predefined size criteria. Typically, the size criteria include the height and aspect ratio of a minimum containing rectangle around a detected region. This not only speeds up the processing but improves the quality of the results of text element detection as irrelevant data is not included in the processing, reducing the likelihood of false detection.
In a preferred embodiment, the confidence values do not meet the predetermined set of confidence criteria if either:
i) the confidence value for the data structure is less than a first threshold and the confidence value for either portion of the data structure on either side of the largest gap exceeds the confidence value for the data structure; or
ii) all of the confidence values are below a second threshold.
Preferably, the method further comprises forming an image mask from the data structure representing detected text elements by setting pixels in the image mask to a value depending on the confidence value for each data structure representing candidate text elements added to the data structure representing detected text elements.
In this image mask, the pixels in the image mask may be set to the values depending on the confidence value for each data structure representing candidate text elements by multiplying 255 by the confidence value. The confidence value typically ranges from a value of 0 to 1 and therefore this procedure embeds the confidence value in the image mask.
Typically, the method further comprises removing connections between rows of pixels in the image mask that fall below a threshold value and/or removing gaps between rows of pixels in the image mask that fall below a threshold value.
Preferably, the method further comprises performing a thresholding algorithm on the image mask. This results in an image mask in which all pixels with a confidence value higher than the threshold are present (e.g. by making them white) whereas those with a confidence value lower than the threshold are not present (e.g. by making them black). It is straightforward then to identify the text elements in the original image data using this mask.
Typically, the method further comprises performing a morphological dilation algorithm on the thresholded image mask.
In a preferred embodiment, the method further comprises removing or obscuring text elements in the image data by modifying pixels in the image data relating to detected text elements according to the data structure representing detected text elements.
The pixels are typically modified by an inpainting algorithm.
An embodiment of the invention will now be described with reference to the accompanying drawings, in which:
Figure 1 shows a block diagram of a system for carrying out the invention;
Figure 2 shows a flow diagram of an implementation of the invention;
Figure 3 shows a flow diagram of a text removal technique; and
Figure 4 shows a detailed flow chart of one module used in the text removal technique for identifying text elements in image data.
In Figure 1 , an ultrasound scanner 1 is shown. The ultrasound scanner 1 is connected to a video capture device 2. This comprises a computer device with a network connection for connection to the Internet 3. The video capture device 2 captures image data from the ultrasound scanner 1 based on user input received from a touch screen input device 5 as will be explained below. In one embodiment, the image data is captured in digital format over
a network from the scanner, for example using the DICOM medical imaging information exchange format. In another embodiment, the image data is captured by way of an analogue-to-digital video converter, such as the ADVC55 from Grass Valley USA, LLC. This receives analogue video directly from the ultrasound scanner 1 , for example in S-Video or composite video formats, and converts it to digital form, for example DV format, for further processing by the computer device.
A second ultrasound scanner 6 is also shown in Figure 1 along with a respective video capture device 7 and touch screen input device 8. These are identical to the ultrasound scanner 1 , video capture device 2 and touch screen input device 5. They may be situated in the same hospital or clinic as ultrasound scanner 1 and its appended video capture device 2 or in another, totally unrelated hospital or clinic. They are shown merely to illustrate that the invention is scalable for use with an unlimited number of ultrasound scanners. The only difference is that each video capture device 2 is programmed with a unique node identification number when it is installed. This serves the purpose of being able to track the source of captured image data to a particular ultrasound scanner 1.
Also shown in Figure 1 are a laptop 9 and a server 10, the function of which will be explained below.
Figure 2 shows a block diagram of the method performed by the video capture device 2 (or 7). All of the interaction with a clinician or other user is performed using the touch screen input device 5 (or 8). The method starts in step 20 when a clinician logs in. This is done in one of the conventional ways, for example using a username and password. Assuming that the login is successful, the clinician enters the patient identification details into the touch screen input device 5. The patient identification details may be the patient's name or a number assigned to them, for example a patient number allocated by the hospital to that particular patient. The patient identification details may be entered manually using a keyboard displayed on the touch screen or by scanning a barcode printed on the patient's notes.
A warning message may also be displayed in step 21 to remind the clinician to advise the patient that their personal data will leave the control of the hospital or clinic during the process. The patient may also be required to confirm their acceptance of this by entering a secret password they have previously been allocated for this purpose.
During the ultrasound scan, the video capture device 2 captures all the video images output by the ultrasound scanner 1. As mentioned above, this is captured from the ultrasound scanner 1 either digitally, for example using the DICOM protocol, or using an analogue-to-digital video converter. The resulting image data is displayed in step 22 on the touch screen input device 5 to the clinician and/or patient using video player software running on the computer device within the video capture device 2. The clinician and/or patient can then, in step 22, select either the whole captured video sequence or portions of it or both. Each selected portion may be either a section of video or a still image.
The clinician may then enter, in step 23, one or more e-mail addresses to which the selected portions of the image data or a notification that the selected portions are available for sharing should be sent in due course. These e-mail addresses will typically be the patient's e-mail address and the clinician's e-mail address. They may also be a predefined group of e-mail addresses identified by a group identifier.
In a typical embodiment, the captured image data is video image data in DV format. The software running on the computer device within video capture device 2 extracts, in step 24, a JPEG file for each selected still image portion and a DV file for each selected video sequence. In step 25, a metadata file is constructed. This is a text file indicating the name and e-mail address of the patient, the node identification number allocated to the ultrasound scanner 1 , the clinician's identification number (e.g. their username entered above), the start and end time of the scan, a manifest of all the files for the still images and video sequences selected, and any e-mail addresses selected in step 23.
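By way of illustration only, a minimal sketch of how such a metadata file might be assembled on the capture device follows; the field names and layout are assumptions, not taken from the embodiment:

```python
from datetime import datetime

def build_metadata_file(path, patient_name, patient_email, node_id,
                        clinician_id, scan_start, scan_end,
                        selected_files, extra_emails):
    """Write a plain-text metadata file including a manifest of the
    selected still image and video files (field labels are illustrative)."""
    lines = [
        f"patient_name: {patient_name}",
        f"patient_email: {patient_email}",
        f"node_id: {node_id}",
        f"clinician_id: {clinician_id}",
        f"scan_start: {scan_start.isoformat()}",
        f"scan_end: {scan_end.isoformat()}",
        "manifest: " + ",".join(selected_files),
        "notify: " + ",".join(extra_emails),
    ]
    with open(path, "w") as f:
        f.write("\n".join(lines) + "\n")

# Example usage with hypothetical values
build_metadata_file(
    "metadata.txt", "Jane Doe", "jane@example.com", "NODE-0042", "clin01",
    datetime(2013, 3, 14, 9, 30), datetime(2013, 3, 14, 9, 55),
    ["still_001.jpg", "clip_001.mp4"], ["jane@example.com"])
```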
Each of the DV and JPEG files is then subjected, in step 26, to an image processing method for removing or obscuring any text features present in the image data that could be used to identify the patient. This will be explained in detail below with reference to Figure 3. The image data is, as a result of this method, anonymised so that it can be sent from the hospital where the ultrasound scanner 1 is located without breaching data security protocols.
In step 27, each of the DV files is converted to MPEG-4 and the resulting bundle of MPEG-4 files, JPEG files and text metadata file is zipped, for example using Lempel-Ziv or similar method. The conversion to MPEG-4 and zipping are carried out to compress the data.
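Neither the encoder nor the archive tool is prescribed; one possible sketch of this step, assuming the ffmpeg command-line tool is available on the capture device, is:

```python
import subprocess
import zipfile
from pathlib import Path

def convert_and_bundle(dv_files, jpeg_files, metadata_file, out_zip):
    """Convert each DV file to MPEG-4 with ffmpeg, then zip the bundle.
    The encoder options shown are illustrative, not taken from the patent."""
    mp4_files = []
    for dv in dv_files:
        mp4 = str(Path(dv).with_suffix(".mp4"))
        subprocess.run(["ffmpeg", "-y", "-i", dv, "-c:v", "libx264", mp4],
                       check=True)
        mp4_files.append(mp4)

    with zipfile.ZipFile(out_zip, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in mp4_files + list(jpeg_files) + [metadata_file]:
            zf.write(f, arcname=Path(f).name)
    return out_zip
```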
The zipped bundle of files is then transmitted over the Internet 3 to a remote server 10 in step 28. This is done using a file replication process, such as rsync, over a virtual private network (VPN), which is encrypted to protect the data in transit. One advantage of this technique is that the hospital only needs to open one TCP/IP port to enable the transmission. It is therefore relatively secure.
Finally, the zipped bundle of files and all the captured video data from ultrasound scanner 1 are deleted in step 29 so that no local copy remains.
In step 30, the remote server 10 receives the transmitted bundle of files and validates these. The validation process involves checking that each of the JPEG and MPEG-4 files specified in the manifest is actually present in the transmitted bundle of files and that it is not corrupted. It also involves checking that a valid node identification number has been included in the text metadata file (e.g. that the node identification number is one that has been allocated and is still in use).
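A minimal sketch of such a server-side check follows, assuming the illustrative metadata layout used in the earlier sketch (file and field names are assumptions):

```python
import zipfile

def validate_bundle(zip_path, allocated_node_ids):
    """Check that every file named in the manifest is present and not corrupt,
    and that the node id in the metadata file has been allocated."""
    with zipfile.ZipFile(zip_path) as zf:
        if zf.testzip() is not None:          # any corrupt member fails validation
            return False
        meta = dict(line.split(": ", 1)
                    for line in zf.read("metadata.txt").decode().splitlines()
                    if ": " in line)
        manifest = [name for name in meta.get("manifest", "").split(",") if name]
        names = set(zf.namelist())
        if not all(item in names for item in manifest):
            return False
        return meta.get("node_id") in allocated_node_ids
```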
Each of the MPEG-4 files is then converted to a variety of different formats to suit the different types of devices to which the video sequences might need to be shared. For example, the files are converted to appropriate formats to ensure that the video sequences are viewable on Flash and HTML5 browsers, and on iPhone and Android smartphones.
A database is then updated by inserting a new record in step 32. This record includes a unique subject record identification number allocated by the server 10 along with the subject identification data and a uniform resource locator (URL) indicating the location of each of the JPEG files and converted video files in a separate file system. In this embodiment, the separate file system is Amazon's S3 cloud-based system, although any other file system could be used. The unique subject record identification number is used rather than the patient's name or other identifying information so that if the system is compromised, there is no indication that the video or still image files correspond to any particular patient. At the same time as the database record is inserted, the JPEG and video files are stored in the locations referred to by the URLs in the database record.
In step 33, e-mails are sent to the patient and/or clinician if their e-mail addresses were included in the text metadata file. This e-mail will normally simply indicate that the images and video sequences are now available for sharing.
In step 34, a user logs in to start the sharing process. This will typically be a patient or clinician and they will log in using a username and password. A clinician would be granted access to any uploaded records that he or she is associated with, whereas a patient would
only be granted access to their own particular records. The system displays the accessible records and allows the user to select one of these.
Then, in step 35, the user selects an image file or video file and a sharing option for that. In the case of a clinician, they will only be allowed to download it or e-mail it to other authorised clinicians. In the case of a patient, they will be allowed to share it via e-mail or a social networking service, such as Youtube, Twitter or Facebook. The server will interface with the selected service using the APIs provided for uploading data to these services. In this way, the patient can easily share the image and/or video files with their friends and family.
Figure 3 shows the image processing method used to detect and remove text elements in the image data. The method starts in step 40 by loading a file of image data for processing. In step 41 , the file is analysed to determine whether it is a video object, a still image object or an unknown format. If it is an unknown format, processing finishes. If it is a still image object then processing continues in a still image processing branch commencing with step 42, whereas if it is a video object then processing continues in a video processing branch commencing with step 43. The two processing branches make use of similar techniques. Indeed, the modules used in the still image processing branch are a subset of those used in the video processing branch. However, the video image processing branch is more complicated, operating over two passes, to cater for the more complicated nature of a video sequence.
The still image processing branch will be described first. In step 42, a pair of predefined masks is loaded. These masks may be created by a user to define areas of image data that they know will always contain text and that they know will always be free of text. A positive image mask indicates areas (in white) where it is assumed that there will always be text, whereas a negative mask indicates areas (again in white) where it is assumed that there will never be text. The use of these masks is optional and they can simply be left empty (i.e. black) if not required.
In step 44, a module is used to detect text features in the still image. The detailed operation of this module will be described below with reference to Figure 4. In the meantime, it suffices to say that it returns an image mask indicating areas (in white) where text elements have been detected in the image data. In step 45, the image mask returned in step 44 is modified using the predefined image masks loaded in step 42 by combining the image mask of step 44 with the positive image mask and the complement of the negative image mask. This ensures that all areas where a user has indicated that there are always text elements are
included on the resultant image mask and that the resultant image mask does not include areas where a user has indicated that there are never text elements.
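A minimal sketch of this combination, assuming the masks are single-channel OpenCV images in which white (255) marks text areas:

```python
import cv2
import numpy as np

def apply_predefined_masks(detected_mask, positive_mask, negative_mask):
    """Force areas of the positive mask into the result and exclude areas
    of the negative mask, as described for step 45."""
    combined = cv2.bitwise_or(detected_mask, positive_mask)
    combined = cv2.bitwise_and(combined, cv2.bitwise_not(negative_mask))
    return combined

# With empty (all-black) predefined masks the detected mask is unchanged
detected = np.zeros((480, 640), dtype=np.uint8)
detected[10:30, 50:200] = 255
empty = np.zeros_like(detected)
assert np.array_equal(apply_predefined_masks(detected, empty, empty), detected)
```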
The text elements in the original still image data corresponding to the areas of text indicated in the resultant image mask are then removed in step 46 using an inpainting algorithm, for example from the OpenCV library. This inpainting procedure obscures the detected text using pixels from areas around the detected text. This makes the obscuring relatively unnoticeable. The modified image data is then saved to replace the original still image data.
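The OpenCV library provides an inpainting function that can be used directly for this step; a minimal sketch (the inpainting radius and the choice of the Telea method are assumptions):

```python
import cv2
import numpy as np

def obscure_text(image, text_mask, radius=3):
    """Replace pixels flagged in text_mask (white = detected text) using pixels
    from the surrounding neighbourhood, so the removal is relatively unnoticeable."""
    return cv2.inpaint(image, text_mask, radius, cv2.INPAINT_TELEA)

# Minimal usage on synthetic data
frame = np.full((120, 160, 3), 90, dtype=np.uint8)
mask = np.zeros((120, 160), dtype=np.uint8)
mask[20:35, 30:100] = 255            # region where text was detected
clean = obscure_text(frame, mask)
```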
In the video processing branch, the predefined masks are loaded in step 43. This is identical to step 42 in the still image processing branch and need not be described further. In step 47, a set of history accumulators are initialised for use later. Their purpose will be explained below.
In the first pass, processing continues in a loop around steps 48, 49 and 50 until all frames of the video sequence have been processed. In step 48, the next frame in the sequence is loaded. In step 49, text features in the frame are detected using the same module as in step 44 of the still image processing branch. The detailed operation of this module will be described below with reference to Figure 4. In step 50, the resultant image mask returned by step 49 is added to the history accumulators initialised in step 47.
After the first pass is complete, the history accumulators are analysed in step 51. This looks for small anomalies in the image masks between frames.
Specifically, it detects where the masks for single frames or small groups of frames in the video sequence indicate the existence of text elements when frames either side do not. It also detects where the masks for single frames or small groups of frames in the video sequence indicate the absence of text elements when frames either side indicate they are present. These anomalies are removed by modifying the masks either to remove the spurious indication of text elements or to include the text elements where they are spuriously absent.
In the second pass, a loop of steps 53, 54, 55 and 56 operates over each frame in the video sequence in turn. In step 53, the next frame in the sequence is loaded. In step 54 the corresponding image mask modified in accordance with the history accumulators in step 51 is loaded and modified using the predefined masks in step 55. The predefined masks are used in precisely the same way as in step 45. Then in step 56, the inpainting procedure
(discussed above with reference to step 46) is used on the frame loaded in step 53 to remove detected text elements in accordance with the image masks modified as appropriate in step 55. The modified video sequence is then saved to replace the original. Once the second pass is complete, the audio stream is copied from the original video sequence to the modified version in step 57.
The detection of the text elements in steps 44 and 49 will now be explained in more detail with reference to Figure 4, which shows the method performed by the module used in steps 44 and 49.
First, in step 60, the image data (which may represent a still image or a frame in a video sequence) is processed by an edge detection algorithm, such as the Canny edge detector, and then adaptively thresholded and transformed to a binary image. The binary image contains only black and white pixels. The white pixels are the pixels of further interest.
Then connected regions of white pixels on the binary image are detected. These regions are analysed according to their height and aspect ratio, and those that do not meet predefined height and aspect ratio criteria are filtered out. This leaves only regions that could conceivably contain text-like items and reduces the processing load required in the following steps of the algorithm. The aspect ratio is considered to meet the criteria if it exceeds a ratio of 2.5 (measured as the ratio of width to height). The height criterion is considered met if the ratio of the height of the connected region of pixels to the image height is within a range of 0.011 to 0.04.
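A sketch of these steps using OpenCV follows; the Canny thresholds are placeholders, the height and aspect-ratio limits are the values quoted above, and the separate adaptive-thresholding step is omitted here because cv2.Canny already returns a binary mask:

```python
import cv2
import numpy as np

def filtered_edge_mask(gray, low=100, high=200):
    """Canny edge mask with connected regions removed when they do not meet
    the height and aspect-ratio criteria described in the text."""
    edges = cv2.Canny(gray, low, high)           # binary: 0 or 255
    n, labels, stats, _ = cv2.connectedComponentsWithStats(edges, connectivity=8)
    keep = np.zeros_like(edges)
    img_h = gray.shape[0]
    for i in range(1, n):                        # label 0 is the background
        w = stats[i, cv2.CC_STAT_WIDTH]
        h = stats[i, cv2.CC_STAT_HEIGHT]
        if h == 0:
            continue
        aspect_ok = (w / h) > 2.5
        height_ok = 0.011 <= (h / img_h) <= 0.04
        if aspect_ok and height_ok:
            keep[labels == i] = 255
    return keep

# Example on a synthetic frame containing a single short, wide bright band
frame = np.zeros((400, 600), dtype=np.uint8)
frame[50:58, 100:300] = 255
mask = filtered_edge_mask(frame)
```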
Next, in step 61 , a horizontal transform is applied to the remaining regions to pick out the high-frequency alternation between background and foreground pixels, which is a significant feature of text elements in image data. The transform is performed separately for each row of pixels in the image data. The transitions along the row from black to white pixels are detected. The white pixels adjacent to these transitions are marked as separators. Furthermore, white pixels in the left-hand most position and white pixels in the right-hand most position are detected and marked as separators. Thus, each separator marks the beginning or end of a contiguous horizontal region along the row of pixels.
In step 62, a data structure is formed for each row by forming an array indicating the position (i.e. the column position) along the row of each of the separators. Thus, the data structure represents (by their location) candidate text elements consisting of the start and end
positions in the row of contiguous horizontal edges. Each data structure is placed on a stack for further processing.
Two metrics relating to these data structures, known as "words", are calculated. The first is the length, which is equal to the number of separators minus 1. The second is the maximum gap, which is the maximum number of black pixels between two adjacent separators. Thus, the maximum gap represents the largest gap between adjacent contiguous horizontal edges in a row.
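A sketch of how the separators and these two metrics might be computed for a single row of the binary mask (assuming white edge pixels have the value 255):

```python
import numpy as np

def row_separators(row):
    """Column positions marking the start or end of each contiguous run of
    white (255) edge pixels along one row of the binary edge mask."""
    white = row > 0
    seps = []
    for col in range(len(row)):
        if not white[col]:
            continue
        starts_run = col == 0 or not white[col - 1]
        ends_run = col == len(row) - 1 or not white[col + 1]
        if starts_run or ends_run:
            seps.append(col)
    return seps

def word_metrics(row, separators):
    """Length = number of separators minus 1; maximum gap = largest count of
    black pixels lying between two adjacent separators."""
    length = len(separators) - 1
    max_gap = 0
    for a, b in zip(separators, separators[1:]):
        max_gap = max(max_gap, int(np.count_nonzero(row[a + 1:b] == 0)))
    return length, max_gap

row = np.array([0, 255, 255, 0, 0, 255, 0, 255, 255, 255, 0], dtype=np.uint8)
seps = row_separators(row)                 # [1, 2, 5, 7, 9]
print(seps, word_metrics(row, seps))       # length 4, maximum gap 2
```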
Confidence values for each "word" (i.e. data structure formed in step 62) and for each portion of each "word" either side of its maximum gap are then calculated in step 63. Trapezoidal fuzzy numbers are used for this to determine the likelihood that each "word" (or portion) is part of a real word depending on the length and maximum gap calculated above. The trapezoidal fuzzy numbers are calculated from these two metrics, which are used because the length correlates with the number of letters in real words typically found on ultrasound images and the maximum gap corresponds to the maximum distance found between letters in a real word. The confidence value of the "word" (or portion) is calculated using fuzzy set theory as the minimum value between two confidence criteria.
An explanation of how the confidence value of a "word" object is calculated follows. Let U be the universal set of all possible "word" objects. Its subset X contains all text-like "word" objects on the given image. X is built as a fuzzy set. Thus, X can be represented as the pair <U, m_X> where m_X : U → [0, 1] is the membership function, which determines the membership degree for each element in U to the set X. Note that a fuzzy number is a special kind of fuzzy set where the universal set is the set of real numbers R. Also, the fuzzy numbers calculated should satisfy the requirements of continuity, convexity and normalization. These requirements are always satisfied with trapezoidal fuzzy numbers, which are used in the algorithm being described.

Because of the definition of X, only those "word" objects that are found on the given image are considered. To estimate m_X(x), where x is an arbitrary "word" object, the two length and maximum gap characteristics of the "word" object are used.

Let Y = <U, m_Y> be the fuzzy set of "word" objects whose lengths satisfy a first criterion to some degree, which is estimated as follows based on a trapezoidal fuzzy number y = <R, m_y>. The membership function m_y is determined on the set of real numbers and has the form of a trapezium, the shape of which is taken from assumptions about the "word" object's length on the image. Let len : U → R be the function which returns the length for any given "word" object. Then we estimate for any x from X the degree of satisfying the first criterion m_Y as m_Y(x) = m_y(len(x)).

Thus we estimate the membership degree of the "word" object to the fuzzy set Y using the membership degree of its length to the fuzzy number y.

Let Z = <U, m_Z> be the fuzzy set of "word" objects whose maximum gaps satisfy a second criterion to some degree, which is estimated as follows based on a trapezoidal fuzzy number z = <R, m_z>. Its membership function m_z is determined on the set of real numbers and has a trapezoidal form, the shape of which is taken from assumptions about the gaps in "word" objects. Let max_gap : U → R be the function which returns the maximum gap for any given "word" object. Then we estimate for any x from X the degree of satisfying the second criterion m_Z as m_Z(x) = m_z(max_gap(x)).

Thus there are two fuzzy sets Y and Z satisfying different criteria of text-like "word" objects. We require that both the criteria should be satisfied at the same time. This requirement corresponds to the operation of fuzzy set intersection. Thus, X = Y ∩ Z.

According to fuzzy set theory, the intersection of two fuzzy sets can be calculated as follows. For any x from X the membership function value m_X(x) is evaluated as m_X(x) = min(m_Y(x), m_Z(x)).

The value m_X(x) is the confidence value for x, where x is the "word" object.

For example, if a "word" object has 4 separators and a gap between two contiguous edges of 2 pixels then the length of this "word" (the number of separators minus 1) is 3 and the maximum gap of this "word" is 2. Thus, len(x) = 3 and max_gap(x) = 2 where x is the "word" object being discussed. Assuming that y and z are fuzzy numbers already defined in configuration settings used by the algorithm so that m_y(3) = 0.45 and m_z(2) = 0.68, the total confidence of x is equal to: m_X(x) = min(m_Y(x), m_Z(x)) = min(m_y(len(x)), m_z(max_gap(x))) = min(0.45, 0.68) = 0.45.
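For illustration, a minimal sketch of a trapezoidal membership function and the min-combination above; the break points defining the two trapezoids below are placeholders, not the values used in the described system:

```python
def trapezoid(x, a, b, c, d):
    """Trapezoidal fuzzy number membership: 0 below a and above d,
    1 between b and c, linear on the sloping sides."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def word_confidence(length, max_gap,
                    len_params=(1, 3, 12, 20),     # placeholder break points
                    gap_params=(-1, 0, 3, 6)):     # placeholder break points
    """Confidence m_X(x) = min(m_y(len(x)), m_z(max_gap(x)))."""
    return min(trapezoid(length, *len_params),
               trapezoid(max_gap, *gap_params))

print(word_confidence(3, 2))   # 1.0 with these placeholder break points
```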
In step 64, an assessment is made as to whether the required confidence criteria are met. If they are not met then the data structure is replaced on the stack by each portion of the data structure on either side of its maximum gap. In other words, the data structure ("word") is split. Processing then proceeds back to step 63, where the confidence values are calculated again. This time, however, the confidence values are calculated for the first portion of the split "word" and for the portions on either side of its own maximum gap. This loop continues until the confidence criteria are met, at which point the data structure from the stack is added in step 66 to an output array.

The confidence criteria are considered not to be met if either the confidence value for the data structure is less than a first threshold and the confidence value for either portion of the data structure on either side of the largest gap exceeds the confidence value for the data structure; or if all of the confidence values are below a second threshold. Suitable values for the thresholds in general text detection processing, for example suitable for use in detecting text on ultrasound scan image data, are 0.75 for the first threshold and 0.25 for the second threshold.
This algorithm can be summarised in pseudo-code as follows:
For each row of pixels implement the following instructions:
1. allocate stack structure STACK to store "word" objects
2. allocate output vector OUT of "word" objects
3. build "word" object from the sequence of separators returned by horizontal transform for current row and PUSH it onto the STACK
4. initialise a confidence threshold value T1 (e.g. 0.75) (indicating a high enough confidence)
5. initialise a confidence threshold value T2 (e.g. 0.25) (T2 < T1 ) (indicating a confidence value that is too small)
6. WHILE STACK is not empty DO:
1. POP "word" W from the STACK
2. IF length (W) < 2
1. REMOVE W
2. next iteration
3. calculate confidence of W, conf(W)
4. BREAK "word" W on maximum gap to form left (L) and right (R) sub-words
5. calculate confidence of L and R, conf(L) and conf(R)
6. IF [conf(W) < T1 AND MAX(conf(R), conf(L)) > conf(W)] OR [MAX(conf(R), conf(L)) < T2 AND conf(W) < T2] PUSH subwords L and R into the STACK
7. else PUSH W into OUT
8. go next iteration
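A short Python transcription of the loop above, reusing the row_separators, word_metrics and word_confidence helpers sketched earlier (the threshold values T1 and T2 are those quoted in the text):

```python
import numpy as np
# assumes row_separators, word_metrics and word_confidence from the earlier sketches

def detect_words_in_row(row, t1=0.75, t2=0.25):
    """Stack-based splitting of candidate 'word' objects for one row,
    following the pseudo-code above; returns (separators, confidence) pairs."""
    stack = [row_separators(row)]
    out = []
    while stack:
        word = stack.pop()
        if len(word) - 1 < 2:          # step 6.2: length metric too small, discard
            continue
        conf_w = word_confidence(*word_metrics(row, word))
        # step 6.4: break the word at its maximum gap into left/right sub-words
        gaps = [np.count_nonzero(row[a + 1:b] == 0)
                for a, b in zip(word, word[1:])]
        i = int(np.argmax(gaps))
        left, right = word[:i + 1], word[i + 1:]
        conf_l = word_confidence(*word_metrics(row, left))
        conf_r = word_confidence(*word_metrics(row, right))
        best = max(conf_l, conf_r)
        # step 6.6: split further if the confidence criteria are not met
        if (conf_w < t1 and best > conf_w) or (best < t2 and conf_w < t2):
            stack.append(left)
            stack.append(right)
        else:
            out.append((word, conf_w))
    return out
```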
The output array is then used to build an image mask in step 67. This is done by copying all pixels from the first separator to the last one to the corresponding places on the mask. The pixel values are set to be equal to 255 multiplied by the "word's" confidence. Initially, the mask is totally black. In other words, all its pixels have an initial value of zero, which is unchanged if not modified by copying pixels to the mask from the output array.
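A sketch of this mask-building step, using the per-row output of the sketch above (the dictionary-of-rows representation is an assumption for illustration):

```python
import numpy as np

def build_confidence_mask(shape, words_per_row):
    """words_per_row maps a row index to (separators, confidence) pairs, as
    returned by detect_words_in_row; the mask starts entirely black (zeros)."""
    mask = np.zeros(shape, dtype=np.uint8)
    for y, words in words_per_row.items():
        for separators, conf in words:
            first, last = separators[0], separators[-1]
            mask[y, first:last + 1] = int(round(255 * conf))
    return mask
```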
The image mask is then processed to remove vertical gaps and vertical connections between pixels in the mask with a non-zero confidence (i.e. non-black pixels) that fall below a threshold. Again, trapezoidal fuzzy numbers are used to determine whether the vertical gaps and connections fall below the threshold based on the assumption that text height typically lies within a certain range of values. The confidence value of a group of pixels is recalculated after a vertical gap or connection is removed. If the confidence value increases then the removal is deemed correct. Otherwise, it is reversed.
The removal of vertical gaps and vertical connections between pixels is explained below in more detail. As already mentioned, after the horizontal transform is completed, all the "word" objects are projected onto an image mask with their confidence values. Thus, there is a single channel image mask where the intensity value of the pixel corresponds to its confidence value. Black pixels correspond to a minimum confidence value (i.e. 0) and white pixels to a maximum confidence value (i.e. 1).
The goal of this stage is to remove false vertical connections and fill false vertical gaps on the mask. The first step is to transpose the mask to place the columns of the mask in rows. This is not an essential step, and is only performed to make subsequent calculations faster. The following sequence of operations is executed for each row of the transposed mask separately.
A "column" object is formed containing the contiguous sequence of pixels from the row of the transposed mask along with the confidence values. The length of the "column" object is defined as the total number of pixels that are contained in it.
Let U_c be the universal set of "column" objects. Let col_len : U_c → R be the function which for each c from U_c returns its length. Let X_c = <U_c, m_Xc> be the fuzzy set of "column" objects that satisfy a text height criterion to some degree determined by the membership function m_Xc. Let x_c = <R, m_xc> be the trapezoidal fuzzy number whose membership function is defined from assumptions made about the height of text areas on the image to search. Then a relationship is established between the "column" objects' set membership function m_Xc and the fuzzy number's membership function m_xc for any c from U_c as follows: m_Xc(c) = m_xc(col_len(c)).
A merge operation is then used on neighbouring "column" objects. If two "column" objects are neighbours on the same row of the transposed mask, the merge operation between them returns a new "column" object that satisfies the following requirements. First, the resultant "column" object contains all the pixels from both "columns" being merged and pixels which lie between them. Second, the confidence values of pixels between "columns" being merged are assigned to the minimal confidence value among all the pixels of the "columns" being merged.
As an example of the "column" merge operation, a row has the following sequence of pixels designated by their confidence values (the "column" objects are shown in square brackets; the gap lies between the brackets):

... 0 [0.75 0.45 0.98 0.23] 0 0 0 0 0 [0.37 0.17 0.76 0.4] 0 ...

The gap between the two "columns" has a length equal to 5, and both "columns" have a length equal to 4. The minimum confidence value among the pixels of the "columns" being merged is 0.17, so the result of the merging operation on these "column" objects is:

... 0 [0.75 0.45 0.98 0.23 0.17 0.17 0.17 0.17 0.17 0.37 0.17 0.76 0.4] 0 ...

The new merged "column" object has a length equal to 4 + 4 + 5 = 13. The pixels that previously belonged to the gap between the "columns" are assigned the minimum confidence value among the pixels of the "columns" being merged.
Then a vertical transform algorithm is used on the rows of the transposed mask. In this, initial "column" objects are first built from the assumption that all zero-pixels are parts of the gaps between "column" objects. For example, part of the row below:

... 0 0.75 0.45 0.98 0.23 0 0 0 0 0 0.37 0.17 0.76 0.4 0 ...

is broken on "columns" as:

... 0 [0.75 0.45 0.98 0.23] 0 0 0 0 0 [0.37 0.17 0.76 0.4] 0 ...

The "columns" above are marked with square brackets.
Then an attempt is made to merge all neighbouring pairs of "columns" in the row. If the confidence value of the merging result is greater than the maximum confidence value in the "columns" being merged and the quantity of pixels originally belonging to the "columns" being merged that still appear in the resultant "column" is greater than a threshold value then the result of the merging operation is accepted and retained instead of the original pair of "columns". Otherwise, the merging result is declined and the original "columns" are retained.
Next, for all "columns" in the row each pixel's confidence is recalculated as the minimum value of the current pixel's confidence and the confidence value of the "column" to which it belongs. For example, if for the merged "column" object

... 0 [0.75 0.45 0.98 0.23 0.17 0.17 0.17 0.17 0.17 0.37 0.17 0.76 0.4] 0 ...

the confidence value is 0.45, then the confidence values of its pixels will be changed to

... 0 [0.45 0.45 0.45 0.23 0.17 0.17 0.17 0.17 0.17 0.37 0.17 0.45 0.4] 0 ...
Next, a thresholding procedure is used where pixels with too small a confidence value (lower than 0.011) are rejected.
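A sketch of the merge operation on one column of the mask (represented here, after transposition, as a one-dimensional array of confidence values), reproducing the worked example above; the acceptance test and pixel recalculation are omitted for brevity:

```python
import numpy as np

def merge_columns(values, start_a, end_a, start_b, end_b):
    """Merge two neighbouring 'column' objects (given as inclusive index ranges
    into one column of the confidence mask): fill the gap between them with the
    minimum confidence found among the pixels of the columns being merged."""
    merged = values.copy()
    fill = merged[start_a:end_a + 1].min(), merged[start_b:end_b + 1].min()
    merged[end_a + 1:start_b] = min(fill)
    return merged

# Worked example from the text (confidence values along one column of the mask)
col = np.array([0, 0.75, 0.45, 0.98, 0.23, 0, 0, 0, 0, 0, 0.37, 0.17, 0.76, 0.4, 0])
print(merge_columns(col, 1, 4, 10, 13))   # gap pixels become 0.17, as above
```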
The non-binary image mask of step 67 is then thresholded to turn it into a binary image mask in step 68. A morphological dilate operation is then performed and the resultant image mask is returned by the module. The resultant image mask can then be used to determine which pixels in the original image data should be obscured by the inpainting process referred to above. In this way, text elements can be detected and obscured to anonymise image data.
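A sketch of this final stage using OpenCV; the threshold value and structuring-element size are assumptions:

```python
import cv2
import numpy as np

def finalise_mask(confidence_mask, thresh=64, kernel_size=3):
    """Binarise the confidence mask and dilate it slightly so that the
    subsequent inpainting covers the full extent of each text element."""
    _, binary = cv2.threshold(confidence_mask, thresh, 255, cv2.THRESH_BINARY)
    kernel = np.ones((kernel_size, kernel_size), np.uint8)
    return cv2.dilate(binary, kernel, iterations=1)
```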
Claims
1. A computer-implemented method of storing image data comprising images of a subject generated by a medical imaging device, the method comprising:
a) capturing the image data;
b) receiving subject identification metadata;
c) analysing at least one selected element of the image data to detect features identifying the subject and modifying the or each selected element of the image data by removing or obscuring any such detected features; and
d) storing a subject record comprising the or each modified selected element of the image data and the subject identification metadata.
2. A computer-implemented method according to claim 1 , wherein the subject record further comprises a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device.
3. A computer-implemented method according to claim 1 or claim 2, wherein the subject identification metadata comprises one or more of the subject's name, the subject's e-mail address and a unique identification number allocated to the subject.
4. A computer-implemented method according to any of the preceding claims, wherein the or each selected element of the image data comprises at least one video object selected by a user.
5. A computer-implemented method according to any of the preceding claims, wherein the or each selected element of the image data comprises at least one still image object selected by a user.
6. A computer-implemented method according to any of the preceding claims, wherein step (d) further comprises transmitting the subject record to a remote server.
7. A computer-implemented method according to any of claims 3 to 6, wherein step (d) comprises storing the subject identification metadata including the unique subject record identification number in a database record along with one or more uniform resource locators indicating the location of the image data in a separate file system.
8. A computer-implemented method according to any of the preceding claims, further comprising constructing a manifest data object, which specifies the or each modified selected element of the image data included in the subject record and including the manifest in the subject record.
9. A computer-implemented method according to claim 8, further comprising validating the received subject record, prior to storing the subject record, by confirming that the subject record comprises all the image data specified in the manifest and/or the validity of the image data and/or the validity of a node identification number allocated to the medical imaging device for uniquely identifying the medical imaging device.
10. A computer-implemented method according to any of the preceding claims, wherein step (c) comprises:
i) forming an edge mask comprising only edge pixels in the image data;
ii) for each row of the edge mask, forming a data structure representing candidate text elements consisting of the start and end positions in the row of contiguous horizontal edges; and
iii) for each data structure representing candidate text elements, calculating confidence values for the data structure and for each portion of the data structure on either side of the largest gap between adjacent contiguous horizontal edges; and either
replacing the data structure with two data structures, each consisting of one of the portions of the data structure on either side of the largest gap and then repeating step (c), if the confidence values do not meet a predetermined set of confidence criteria; or
adding the data structure to a data structure representing detected text elements.
11. A computer-implemented method according to claim 10, wherein the edge mask is formed in step (i) by applying an edge detection algorithm, such as Canny edge detector, to the image data.
12. A computer-implemented method according to claim 10 or claim 11 , further comprising performing an adaptive thresholding algorithm on the edge mask prior to step (ii) such that the edge pixels in the edge mask have a first binary value, all other pixels having a second binary value.
13. A computer-implemented method according to claim 12, wherein the start position of a contiguous horizontal edge is detected in step (ii) by detecting the transition along a row between pixels having the second binary value to pixels having the first binary value and/or by detecting a pixel having the first binary value in the left-hand most position along the row.
14. A computer-implemented method according to claim 12 or claim 13, wherein the end position of a contiguous horizontal edge is detected in step (ii) by detecting the transition along a row between pixels having the first binary value to pixels having the second binary value and/or by detecting a pixel having the first binary value in the right-hand most position along the row.
15. A computer-implemented method according to any of claims 10 to 14, wherein the confidence values do not meet the predetermined set of confidence criteria if either:
i) the confidence value for the data structure is less than a first threshold and the confidence value for either portion of the data structure on either side of the largest gap exceeds the confidence value for the data structure; or
ii) all of the confidence values are below a second threshold.
16. A computer-implemented method according to any of claims 10 to 15, further comprising forming an image mask from the data structure representing detected text elements by setting pixels in the image mask to a value depending on the confidence value for each data structure representing candidate text elements added to the data structure representing detected text elements.
17. A computer-implemented method according to claim 16, further comprising performing a thresholding algorithm on the image mask.
18. A computer-implemented method according to claim 17, further comprising performing a morphological dilation algorithm on the thresholded image mask.
19. A computer-implemented method according to any of claims 10 to 14, further comprising removing or obscuring text elements in the image data by modifying pixels in the image data relating to detected text elements according to the data structure representing detected text elements.
20. A computer-implemented method according to claim 19, wherein the pixels are modified by an inpainting algorithm.
21. A computer-implemented method according to any of the preceding claims, further comprising sharing a selected part of the image data in the stored subject record by:
e) receiving login details from a user;
f) authenticating the login details to confirm that the user is the subject identified by the subject identification metadata; and
g) receiving a sharing command from the user, the sharing command indicating a selected one of a plurality of sharing services over which the selected part of the image data is to be shared with a third party, and sharing the selected part of the image data over the selected sharing service.
22. A computer-implemented method according to claim 21 , further comprising transmitting the selected part of the image data to one or more e-mail addresses specified in the subject record.
23. A computer-implemented method according to claim 21 or 22, wherein the sharing services comprise one or more social media services and/or e-mail.
24. A system comprising one or more capture devices adapted to perform the method of any of claims 1 to 6, 8 or 10 to 20, each of which is coupled to a respective medical imaging device, in use.
25. A system comprising one or more capture devices, each of which is coupled to a respective medical imaging device, in use, and a remote storage device, the remote storage device and the or each capture device together forming a network and together being adapted to perform the method of any of claims 21 to 23.
26. A computer-implemented method of detecting text elements in image data, the method comprising:
a) forming an edge mask comprising only edge pixels in the image data;
b) for each row of the edge mask, forming a data structure representing candidate text elements consisting of the start and end positions in the row of contiguous horizontal edges; and
c) for each data structure representing candidate text elements, calculating confidence values for the data structure and for each portion of the data structure on either side of the largest gap between adjacent contiguous horizontal edges; and either
i) replacing the data structure with two data structures, each consisting of one of the portions of the data structure on either side of the largest gap and then repeating step (c), if the confidence values do not meet a predetermined set of confidence criteria; or
ii) adding the data structure to a data structure representing detected text elements.
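By way of illustration only, step (c) of claim 26 could be implemented recursively as sketched below; the claim does not specify how confidence values are computed, so a simple coverage-based score and the two thresholds are assumptions for the example.

```python
def confidence(runs):
    """Hypothetical score (the claim leaves scoring unspecified): fraction of the
    candidate's span covered by edge runs; runs = [(start_col, end_col), ...]."""
    if not runs:
        return 0.0
    covered = sum(end - start + 1 for start, end in runs)
    span = runs[-1][1] - runs[0][0] + 1
    return covered / span

def resolve_candidate(runs, first_threshold=0.4, second_threshold=0.1):
    """Score the candidate and the portions either side of its largest gap,
    then either split and repeat step (c) or add it to the detected set."""
    score = confidence(runs)
    if len(runs) < 2:
        return [(runs, score)]                       # nothing left to split
    # Find the largest gap between adjacent contiguous horizontal edges.
    gaps = [runs[i + 1][0] - runs[i][1] for i in range(len(runs) - 1)]
    cut = max(range(len(gaps)), key=gaps.__getitem__)
    left, right = runs[:cut + 1], runs[cut + 1:]
    left_score, right_score = confidence(left), confidence(right)
    criteria_not_met = (
        (score < first_threshold and max(left_score, right_score) > score)  # (i)
        or max(score, left_score, right_score) < second_threshold           # (ii)
    )
    if criteria_not_met:
        # Replace the candidate with its two portions and repeat step (c).
        return (resolve_candidate(left, first_threshold, second_threshold)
                + resolve_candidate(right, first_threshold, second_threshold))
    # Otherwise add the candidate to the data structure of detected text elements.
    return [(runs, score)]
```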
27. A computer-implemented method according to claim 26, wherein the edge mask is formed in step (a) by applying an edge detection algorithm, such as a Canny edge detector, to the image data.
28. A computer-implemented method according to claim 26 or claim 27, further comprising performing an adaptive thresholding algorithm on the edge mask prior to step (b) such that the edge pixels in the edge mask have a first binary value, all other pixels having a second binary value.
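By way of illustration only, the edge mask of claims 27 and 28 might be produced as sketched below; a Sobel gradient magnitude is used in place of the Canny detector named in claim 27 so that the adaptive threshold has graded values to act on, and the block size and offset are arbitrary example values.

```python
import cv2

def make_edge_mask(gray_image, block_size=15, offset=-2):
    # Step (a): an edge response (a Sobel magnitude here, standing in for the
    # Canny detector mentioned in claim 27).
    gx = cv2.Sobel(gray_image, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray_image, cv2.CV_32F, 0, 1)
    magnitude = cv2.convertScaleAbs(cv2.magnitude(gx, gy))
    # Claim 28: adaptive thresholding so that edge pixels take a first binary
    # value (255) and all other pixels take a second binary value (0).
    return cv2.adaptiveThreshold(magnitude, 255, cv2.ADAPTIVE_THRESH_MEAN_C,
                                 cv2.THRESH_BINARY, block_size, offset)
```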
29. A computer-implemented method according to claim 28, wherein the start position of a contiguous horizontal edge is detected in step (b) by detecting the transition along a row from pixels having the second binary value to pixels having the first binary value and/or by detecting a pixel having the first binary value in the left-hand most position along the row.
30. A computer-implemented method according to claim 28 or claim 29, wherein the end position of a contiguous horizontal edge is detected in step (b) by detecting the transition along a row from pixels having the first binary value to pixels having the second binary value and/or by detecting a pixel having the first binary value in the right-hand most position along the row.
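By way of illustration only, the start and end positions of claims 29 and 30 could be extracted from one row of the binary edge mask as follows, assuming the first binary value is 1 and the second is 0.

```python
import numpy as np

def row_runs(row):
    """Return (start, end) column pairs for contiguous horizontal edges in a row."""
    runs, start = [], None
    for col, value in enumerate(row):
        if value and start is None:
            # Transition from the second to the first binary value, or an edge
            # pixel in the left-hand most position, marks a start (claim 29).
            start = col
        elif not value and start is not None:
            # Transition back to the second binary value marks an end (claim 30).
            runs.append((start, col - 1))
            start = None
    if start is not None:
        # An edge pixel in the right-hand most position also marks an end.
        runs.append((start, len(row) - 1))
    return runs

# Example: runs at columns (1, 2) and (5, 7).
print(row_runs(np.array([0, 1, 1, 0, 0, 1, 1, 1, 0, 0])))
```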
31. A computer-implemented method according to any of claims 26 to 30, wherein the confidence values do not meet the predetermined set of confidence criteria if either:
i) the confidence value for the data structure is less than a first threshold and the confidence value for either portion of the data structure on either side of the largest gap exceeds the confidence value for the data structure; or
ii) all of the confidence values are below a second threshold.
32. A computer-implemented method according to any of claims 26 to 31, further comprising forming an image mask from the data structure representing detected text elements by setting pixels in the image mask to a value depending on the confidence value for each data structure representing candidate text elements added to the data structure representing detected text elements.
33. A computer-implemented method according to claim 32, further comprising performing a thresholding algorithm on the image mask.
34. A computer-implemented method according to claim 33, further comprising performing a morphological dilation algorithm on the thresholded image mask.
35. A computer-implemented method according to any of claims 26 to 32, further comprising removing or obscuring text elements in the image data by modifying pixels in the image data relating to detected text elements according to the data structure representing detected text elements.
36. A computer-implemented method according to claim 35, wherein the pixels are modified by an inpainting algorithm.
37. A computer-implemented method substantially as hereinbefore described with reference to the accompanying drawings.
38. A system substantially as hereinbefore described with reference to the accompanying drawings.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1204686.8 | 2012-03-16 | ||
GB1204686.8A GB2500264A (en) | 2012-03-16 | 2012-03-16 | Removing or obscuring sensitive medical image |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2013136093A2 true WO2013136093A2 (en) | 2013-09-19 |
WO2013136093A3 WO2013136093A3 (en) | 2013-12-05 |
Family
ID=46052071
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2013/050671 WO2013136093A2 (en) | 2012-03-16 | 2013-03-15 | Image data storage and sharing |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2500264A (en) |
WO (1) | WO2013136093A2 (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3026877A1 (en) * | 2014-11-26 | 2016-06-01 | NCR Corporation | Secure image processing |
US9858699B2 (en) | 2015-09-18 | 2018-01-02 | International Business Machines Corporation | Image anonymization using analytics tool |
US9917898B2 (en) | 2015-04-27 | 2018-03-13 | Dental Imaging Technologies Corporation | Hybrid dental imaging system with local area network and cloud |
US10706958B2 (en) | 2015-11-20 | 2020-07-07 | Ikeguchi Holdings Llc | Electronic data document for use in clinical trial verification system and method |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020026332A1 (en) * | 1999-12-06 | 2002-02-28 | Snowden Guy B. | System and method for automated creation of patient controlled records |
US6594393B1 (en) * | 2000-05-12 | 2003-07-15 | Thomas P. Minka | Dynamic programming operation with skip mode for text line image decoding |
US6823203B2 (en) * | 2001-06-07 | 2004-11-23 | Koninklijke Philips Electronics N.V. | System and method for removing sensitive data from diagnostic images |
EP1728189A2 (en) * | 2004-03-26 | 2006-12-06 | Convergence Ct | System and method for controlling access and use of patient medical data records |
US8938671B2 (en) * | 2005-12-16 | 2015-01-20 | The 41St Parameter, Inc. | Methods and apparatus for securely displaying digital images |
US20070192137A1 (en) * | 2006-02-01 | 2007-08-16 | Ombrellaro Mark P | Access control in an electronic medical record system |
US7724918B2 (en) * | 2006-11-22 | 2010-05-25 | International Business Machines Corporation | Data obfuscation of text data using entity detection and replacement |
US7949167B2 (en) * | 2008-06-12 | 2011-05-24 | Siemens Medical Solutions Usa, Inc. | Automatic learning of image features to predict disease |
US20100082371A1 (en) * | 2008-10-01 | 2010-04-01 | General Electric Company, A New York Corporation | Patient Document Privacy And Disclosure Engine |
WO2010059584A1 (en) * | 2008-11-19 | 2010-05-27 | Theladders.Com, Inc. | System and method for managing confidential information |
EP2449522A4 (en) * | 2009-06-30 | 2013-08-07 | Univ Wake Forest | Method and apparatus for personally controlled sharing of medical image and other health data |
US20110239113A1 (en) * | 2010-03-25 | 2011-09-29 | Colin Hung | Systems and methods for redacting sensitive data entries |
- 2012
  - 2012-03-16: GB application GB1204686.8A, published as GB2500264A (en), not active (withdrawn)
- 2013
  - 2013-03-15: WO application PCT/GB2013/050671, published as WO2013136093A2 (en), active (application filing)
Non-Patent Citations (1)
Title |
---|
None |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3026877A1 (en) * | 2014-11-26 | 2016-06-01 | NCR Corporation | Secure image processing |
CN105631354A (en) * | 2014-11-26 | 2016-06-01 | NCR Corporation | Secure image processing |
US9917898B2 (en) | 2015-04-27 | 2018-03-13 | Dental Imaging Technologies Corporation | Hybrid dental imaging system with local area network and cloud |
US10530863B2 (en) | 2015-04-27 | 2020-01-07 | Dental Imaging Technologies Corporation | Compression of dental images and hybrid dental imaging system with local area and cloud networks |
US9858699B2 (en) | 2015-09-18 | 2018-01-02 | International Business Machines Corporation | Image anonymization using analytics tool |
US9858696B2 (en) | 2015-09-18 | 2018-01-02 | International Business Machines Corporation | Image anonymization using analytics tool |
US10706958B2 (en) | 2015-11-20 | 2020-07-07 | Ikeguchi Holdings Llc | Electronic data document for use in clinical trial verification system and method |
US10811122B2 (en) | 2015-11-20 | 2020-10-20 | Ikeguchi Holdings, LLC | Electronic data document for use in clinical trial verification system and method |
US11562811B2 (en) | 2015-11-20 | 2023-01-24 | Akyrian Systems LLC | Electronic data document for use in clinical trial verification system and method |
US11562810B2 (en) | 2015-11-20 | 2023-01-24 | Akyrian Systems LLC | Electronic data document for use in clinical trial verification system and method |
Also Published As
Publication number | Publication date |
---|---|
WO2013136093A3 (en) | 2013-12-05 |
GB201204686D0 (en) | 2012-05-02 |
GB2500264A (en) | 2013-09-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhou et al. | Coverless image steganography using partial-duplicate image retrieval | |
US8810684B2 (en) | Tagging images in a mobile communications device using a contacts list | |
Poisel et al. | Forensics investigations of multimedia data: A review of the state-of-the-art | |
US20220172476A1 (en) | Video similarity detection method, apparatus, and device | |
US9325691B2 (en) | Video management method and video management system | |
CN105744292A (en) | Video data processing method and device | |
US9621628B1 (en) | Mobile image capture and transmission of documents to a secure repository | |
US11800201B2 (en) | Method and apparatus for outputting information | |
WO2013136093A2 (en) | Image data storage and sharing | |
CN109272526B (en) | Image processing method and system and electronic equipment | |
US20140029854A1 (en) | Metadata supersets for matching images | |
Athanasiadou et al. | Camera recognition with deep learning | |
CN111369557A (en) | Image processing method, image processing device, computing equipment and storage medium | |
US20100102961A1 (en) | Alert system based on camera identification | |
JPWO2009050877A1 (en) | Inappropriate content detection method and apparatus, computer program thereof, and content publishing system | |
JP5984880B2 (en) | Image processing device | |
Ali et al. | A meta-heuristic method for reassemble bifragmented intertwined JPEG image files in digital forensic investigation | |
US20170214823A1 (en) | Computer system for reformatting input fax data into an output markup language format | |
JP4740706B2 (en) | Fraud image detection apparatus, method, and program | |
US20200267447A1 (en) | Method and system for preventing upload of multimedia content with objectionable content into a server | |
Gupta et al. | Detection and localization for watermarking technique using LSB encryption for DICOM Image | |
Le Moan et al. | Towards exploiting change blindness for image processing | |
Wales | Proposed framework for digital video authentication | |
Sultan | Deep learning approach and cover image transportation: a multi-security adaptive image steganography scheme | |
Novozámský et al. | Extended IMD2020: a large‐scale annotated dataset tailored for detecting manipulated images |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 13711465; Country of ref document: EP; Kind code of ref document: A2 |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 13711465; Country of ref document: EP; Kind code of ref document: A2 |