WO2017030448A1 - Method and apparatus for evaluating an animal
- Publication number
- WO2017030448A1 (PCT application PCT/NZ2016/050129)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- animal
- shape information
- model
- evaluation
- measurements
- Prior art date
Classifications
- A—HUMAN NECESSITIES
- A01—AGRICULTURE; FORESTRY; ANIMAL HUSBANDRY; HUNTING; TRAPPING; FISHING
- A01K—ANIMAL HUSBANDRY; AVICULTURE; APICULTURE; PISCICULTURE; FISHING; REARING OR BREEDING ANIMALS, NOT OTHERWISE PROVIDED FOR; NEW BREEDS OF ANIMALS
- A01K29/00—Other apparatus for animal husbandry
- A01K29/005—Monitoring or measuring activity, e.g. detecting heat or mating
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/752—Contour matching
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30108—Industrial image inspection
- G06T2207/30128—Food products
Definitions
- the present invention relates to a method and apparatus for automatically evaluating an animal based on its physical appearance, and in particular, but not exclusively, to a method and apparatus for determining a body condition score (BCS), particularly for a dairy cow.
- the present application is also directed to methods for automatically evaluating a body condition score for cattle.
- One method of evaluating BCS is described in international publication WO2010/063527.
- a 3D camera is used to obtain a three dimensional image of the animal.
- the image is processed and a BCS score is determined based on a statistical analysis of a number of features of the data.
- the method described in WO2010/063527 results in the need to use a global analysis approach.
- the region of the animal which is of interest must be within the area captured by the image; the region must not be obscured; and the image must not contain relevant features from two or more different animals. It is also strongly preferable that the system evaluates the animal quickly enough to allow the animal to be drafted immediately after the image is collected.
- a method of calculating an evaluation of an animal comprising:
- the step of evaluating the animal comprises the step of calculating a category for the animal.
- the 3D model comprises a triangular mesh.
- the 3D model comprises a grid.
- the method comprises the step of smoothing and/or filtering the 3D shape information prior to the step of creating the 3D point cloud.
- the method comprises the step of smoothing and/or filtering the 3D point cloud prior to the step of forming the 3D model.
- the method comprises the step of identifying points or areas within the model which are representative of one or more anatomical features or regions.
- the step of receiving 3D shape information comprises receiving information from a plurality of frames of 3D shape information.
- the method comprises the step of pre-processing the 3D shape information before the 3D model is formed.
- the method comprises the step of normalizing the model prior to (or simultaneously with) the step of calculating the one or more representative measurements.
- the step of normalizing the model comprises the step of determining one or more geometries of the animal.
- the step of normalising the model comprises the step of determining one or more of the animal's height, length or width.
- the step of determining the animal's height comprises the step of determining a position of a surface on which the animal is standing.
- the step of determining the animal's height comprises the step of averaging data from several frames or a comparison with a feature of known size.
- the step of determining a position of a surface on which the animal is standing comprises the step of receiving 3D shape information from the surface when the animal is not standing on the surface.
- the step of identifying data corresponding to one or more anatomical regions comprises comparing the data to one or more 3D feature descriptors.
- an orientation of the animal is calculated from the relative positions of the anatomical regions or from an analysis of the outline of the animal.
- the step of identifying data corresponding to one or more anatomical regions comprises the step of excluding results which result in one or more parameters of the 3D model falling outside predetermined limits.
- the one or more parameters comprise one or more of a predetermined angle, curvature, length or depth, or a relationship to one or more other anatomical regions.
- the step of identifying data corresponding to one or more anatomical regions comprises performing a global optimisation to minimise or maximise, within predetermined limits, a distance between preselected anatomical features.
- the step of identifying data corresponding to one or more anatomical regions comprises performing a global optimisation to minimise or maximise, within predetermined limits, a distance between preselected anatomical features, or to minimise or maximise, within predetermined limits, another geometric descriptor of the 3D model.
- the step of calculating one or more representative measurements from the 3D model comprises the step of modelling an intersection plane through the one or more anatomical regions.
- the step of modelling an intersection plane through the region of interest comprises the step of rotating or translating the intersection plane to maximise or minimise a selected parameter.
- the step of calculating one or more representative measurements comprises the step of fitting a curve to the two dimensional shape information.
- the one or more representative measurements comprise one or more of coefficients of the curve, the length between two points on the curve, an area defined by the curve, a depth associated with the curve, or another descriptor of the curve.
- the step of identifying data corresponding to one or more anatomical regions comprises the step of modelling an intersection plane through the region of interest to define two dimensional shape information corresponding to a two dimensional curve.
- the one or more representative measurements comprise one or more of the average height, maximum height or minimum height of each grid square.
- the step of forming a 3D model comprises defining a grid based on the 3D shape information.
- the step of receiving 3D shape information corresponding to an area occupied by the animal comprises the step of receiving information from a 3D imaging device.
- the 3D imaging device comprises one or more of LiDAR, structure from motion devices, stereo or multiview camera devices, depth cameras based on time-of-flight or any similar methodology, lightfield cameras, or any device which provides depth information for a scene being captured.
- the category relates to one or more of body condition, lameness, udder conformation or a trait other than production (TOP), for example height at shoulder.
- an apparatus for calculating an evaluation of an animal comprising a three dimensional (3D) imaging device for collecting 3D shape information corresponding to a space occupied by the animal and a processing device in communication with the 3D imaging device which is configured to:
- the processing device calculates a category for the animal.
- an apparatus for automatically evaluating animals comprising a structure comprising two spaced apart static barrier means, a spacing between the barrier means selected to allow an animal to pass between the barrier means, the apparatus further comprising a three dimensional (3D) imaging device for selectively collecting 3D shape information of an area of interest of the animal when the animal is positioned in a space between the static barrier means and processing means for selectively evaluating the animal based on the 3D shape information, if collected, the apparatus further comprising first moveable barrier means provided at a first end of the static barrier means for selectively preventing a second animal from entering the space between the static barrier means and/or second moveable barrier means provided at a second end of the static barrier means for selectively preventing the animal from moving out of the space between the static barrier means.
- the apparatus determines whether a selected animal is to be evaluated based, in part, on a space between the animal and an adjacent animal.
- the apparatus determines whether the (or a) selected animal is to be evaluated based, in part, on a speed at which the animal is moving. Preferably, in use, the apparatus determines whether the (or a) selected animal is to be evaluated based, in part, on an assessment of whether a new evaluation of the animal is required.
- the apparatus determines whether a selected animal is to be evaluated based solely on one or more of:
- the apparatus comprises the second moveable barrier means wherein, in use, the apparatus closes the second moveable barrier means when the apparatus determines that a new evaluation of the (or a) selected first animal is required, and a distance between the first animal and a second animal which is in front of the first animal, is greater than a predetermined distance.
- the apparatus comprises the first moveable barrier means, wherein, in use, the apparatus closes the first moveable barrier means when the apparatus determines that a new evaluation of the (or a) selected first animal is required, and a distance between the first animal and a second animal which is behind the first animal, is greater than a predetermined distance.
- the apparatus comprises the second moveable barrier means, wherein, in use, the apparatus closes the moveable barrier means when the apparatus determines that a new evaluation of the (or a) selected animal is required, and the selected animal is travelling at a speed which is greater than a predetermined speed.
- the assessment of whether an evaluation of the selected animal is required is dependent, in part, on a length of time since the last evaluation of the animal.
- the assessment of whether an evaluation of the selected animal is required is dependent, in part, on the result of a previous evaluation of the selected animal.
- the apparatus further comprises animal position sensing means.
- the animal position sensing means comprise a first sensor means for detecting when a fore part of the animal is in a first position which is indicative of the entire animal having moved between the static barrier means.
- the animal position sensing means comprise a second sensor means for detecting when a rear of the animal has moved beyond a second position which is adjacent the first moveable barrier means.
- the first sensor means comprises a photoelectric sensor.
- the second position sensing means comprises a photoelectric sensor.
- the animal position sensing means comprises the 3D imaging device and the processing means.
- the apparatus comprises an electronic ID reader.
- the 3D imaging device comprises a 3D camera.
- the apparatus comprises a lighting means for artificially lighting an area which is within a field of view of the 3D imaging device.
- the intensity of the light inside the structure is adjustable.
- the apparatus sends a signal to an automatic drafting gate depending on the evaluation of the animal.
- the animal position sensing means comprises a drafting gate entry sensor.
- the animal position sensing means comprises a drafting gate exit sensor.
- the evaluation of the animal performed by the apparatus comprises a calculation of a body condition score.
- a method of automatically evaluating animals comprising the steps of:
- i. determining whether the animal is to be evaluated; ii. if the animal is selected to be evaluated, collecting 3D shape information of an area of interest of the animal when the animal is in a space between two spaced apart static barrier means, and processing the 3D shape information to evaluate the animal based on the 3D shape information.
- the method comprises the step of closing a first moveable barrier means to prevent a second animal from entering into the space between the static barrier means if a distance between the animal and a second animal which is behind the first animal is greater than a predetermined distance.
- the method comprises the step of closing a second moveable barrier means to prevent the animal from moving out of the space between the static barrier means, if a speed of the animal is greater than a predetermined maximum speed.
- the method comprises the step of closing the (or a) second moveable barrier means to prevent the animal from moving out of the space between the static barrier means if a distance between the animal and a second animal which is in front of the first animal is greater than a predetermined distance.
- the method comprises receiving a signal from a first animal position sensor means when a fore part of the animal is in a first position which is indicative of the entire animal having moved into the space between the static barrier means.
- the method comprises receiving a signal from a second animal position sensor means when a rear of the animal has passed a second position which is adjacent a first end of the static barrier means.
- the method comprises capturing the 3D shape information after receiving the signal from the second animal position sensing means.
- the method comprises the step of using a 3D camera to capture the 3D shape information.
- the method comprises the step of processing the 3D shape information to determine when the animal is in a suitable position to obtain 3D information to perform the evaluation.
- the method comprises the step of processing the 3D shape information to determine whether the animal is in a suitable stance to obtain 3D information to perform the evaluation.
- the method comprises updating a herd management system depending on the evaluation of the animal.
- the method comprises sending an automatic drafting gate a signal which is representative of the evaluation of the animal.
- the evaluation of the animal comprises a calculation of a body condition score.
- the method and/or apparatus of the preceding aspects is combined to form a system.
- the invention may be broadly said to consist in a method for calculating an evaluation of an animal comprising the method to calculate an evaluation used in the apparatus for automatically evaluating an animal.
- the combined apparatus also forms an aspect of the invention.
- the invention may also be said broadly to consist in the parts, elements and features referred to or indicated in the specification of the application, individually or collectively, in any or all combinations of two or more of said parts, elements or features, and where specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
- a system and/or method for calculating an evaluation of an animal is substantially as herein described, with reference to any one or more of the accompanying embodiments and drawings.
- Figure 1 is a diagrammatic side view of an apparatus according to one embodiment of the present invention.
- Figure 2 is a diagrammatic top view of the apparatus of Figure 1, with the cover removed for clarity and the drafting gate positions shown in outline.
- Figure 3 is a schematic view of a system for calculating an evaluation of an animal.
- Figure 4 is a flow chart of the operation of an embodiment of the system used to calculate body condition score.
- Figures 5a and 5b show embodiments of a 3D point cloud of (a) an empty location and (b) an animal in the location, as can be used in an embodiment of the method or system.
- Figures 6a and 6b show the intersection of a plane with an animal and the resulting 2D curve points measured in an embodiment of the system.
- Figure 7 shows a plurality of possible planes intersecting with an animal.
- Figures 8a and 8b are diagrams showing the calculation of parameters from a 2D curve, as in Figure 6b.
- a system for calculating an evaluation of an animal is generally referenced by arrow 110.
- the system estimates which class from a plurality of predetermined physical classes an animal falls into, for example by calculating a body condition score (BCS) between 1 and 5.
- the system comprises an imaging device 11 which is capable of collecting 3D shape information corresponding to an area of interest.
- the imaging device 11 may collect information on the entire animal, or only a portion of the animal which has features which are representative of the particular parameter which is to be categorised.
- the imaging device 11 may include one or more of a LiDAR, structure from motion devices, stereo or multiview camera devices, depth cameras based on time-of-flight or any similar methodology, lightfield cameras, or any other device which provides depth information for the scene being captured.
- the imaging device may be referred to herein as a "3D camera".
- the 3D camera is capable of taking at least one photo or a series of photos.
- the series of photos may be, or may be considered, as a plurality of frames in sequence. In some embodiments there is a comparison between frames, or the evaluation of a plurality of frames is used to confirm the evaluation.
- the imaging device 11 is statically mounted, while in other embodiments it may be attached to or otherwise carried by a person.
- the imaging device is in communication with a processing device 300, typically a computer.
- the processing device receives the 3D shape information from the imaging device 1 1 and calculates an evaluation of the animal (e.g. by determining which class the selected animal belongs to) based on the 3D shape information.
- the class calculated may be output to a suitable visual output device such as a computer monitor, a portable device such as a tablet, or a wearable display such as Google Glass™.
- a record of the evaluation may be updated in an electronic record or database, for example MINDA™ by Livestock Improvement Corporation.
- the parameter investigated may be one of a number of possible categorisations of an animal which are based on its appearance. Examples include body condition score, lameness, udder conformation or a trait other than production (TOP), for example height at shoulder.
- lameness detection can be achieved using a similar approach.
- lameness is detected by analysing the shape of the spine of either a stationary cow or a cow in motion. The analysis can investigate whether the animal has an arched or flat back. Lame cows tend to stand and walk with a progressively arched back as the severity of the lameness increases, whereas healthy cows stand and walk with a flat back.
- the 2D curve can be extracted from the 3D shape information by intersecting a plane through the spine of the cow running from just in front of the hips to roughly the shoulders of the cow. Variations of the plane or location of investigation are possible.
- Measurements can then be computed which describe how flat or arched this curve is and hence predict whether this cow is lame and the degree of lameness. This can either be performed on a single frame for a stationary cow, or over a series of successive frames when analysing a cow in motion.
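As a concrete illustration of the arch measurements described above, the following is a minimal sketch (Python with NumPy; the function and field names are invented for illustration, not taken from the patent). It computes two simple descriptors from a 2D spine curve: the peak deviation from the chord joining the curve's end points, and a curvature proxy from a fitted parabola.

```python
import numpy as np

def spine_arch_metrics(curve: np.ndarray) -> dict:
    """Two simple descriptors of how arched a 2D spine curve is.

    `curve` is an (N, 2) array of (distance-along-body, height) points,
    assumed ordered from front to back.
    """
    x, z = curve[:, 0], curve[:, 1]
    # Chord joining the first and last points of the curve.
    chord = z[0] + (z[-1] - z[0]) * (x - x[0]) / (x[-1] - x[0])
    max_arch = float(np.max(z - chord))       # peak height above the chord
    # Curvature proxy: second-order coefficient of a fitted parabola
    # (negative for an arched, concave-down back, hence the sign flip).
    a = np.polyfit(x, z, 2)[0]
    return {"max_arch": max_arch, "curvature": float(-2.0 * a)}

# A flat back yields values near zero; a lame, arched back yields larger
# positive values, which can be thresholded or passed to a classifier.
```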
- Figure 4 shows a possible method for operation of an embodiment of the system which is explained in more detail below.
- the sensor, imaging device or 3D camera captures 40 an image or set of images (frames).
- the images may be smoothed or filtered 41 to improve the modelling or accuracy of the technique.
- the images are then used to form a 3D point cloud 43.
- the point cloud allows a 3D surface to be viewed without having to connect each of the points.
- the point cloud can again be smoothed or filtered 42 before the point cloud is used to create a 3D connected model 44.
- the 3D model should now be an accurate and relatively smooth replication of the animal.
- the 3D connected model is used to detect anatomical points of interest 45. This can be used to measure relative locations or to extract curves 46 between areas of interest.
- the extracted curves or model portions are preferably 2 dimensional to allow measurements of the curves to be taken 47. However this may also be possible in 3D segments.
- the curve measurements may be compared directly to a known set but preferably are provided to 48 a machine learning model 51 to determine a characteristic(s) of the animal.
- the machine learning model has been trained on a known data set 49 through a training program 50.
- the characteristic of the animal may be BCS 52, lameness, or other characteristic determinable from the geometry of the animal.
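The flow of Figure 4 might be organised as in the sketch below. This is a skeleton only: every helper is a trivial stand-in so that the example runs end to end, and all names are invented rather than taken from the patent.

```python
import numpy as np

# Trivial stand-ins so the skeleton below runs end to end; each would be
# replaced by a real implementation of the corresponding Figure 4 step
# (step numbers in comments).
def smooth_and_filter(frame):        # step 41: e.g. median filtering
    return frame

def frames_to_cloud(frames):         # step 43: back-project depth frames
    return np.vstack(frames)

def filter_cloud(cloud):             # step 42: outlier removal
    return cloud

def reconstruct_surface(cloud):      # step 44: e.g. a triangular mesh
    return cloud

def extract_curves(model):           # steps 45-46: landmarks and curves
    return [model[:, 1:3]]

def measure_curve(curve):            # step 47: curve measurements
    return np.array([curve[:, 1].max() - curve[:, 1].min()])

def evaluate_animal(frames, ml_model):
    """End-to-end sketch of the Figure 4 pipeline (steps 40-52)."""
    frames = [smooth_and_filter(f) for f in frames]
    model = reconstruct_surface(filter_cloud(frames_to_cloud(frames)))
    features = np.concatenate([measure_curve(c) for c in extract_curves(model)])
    return ml_model.predict(features.reshape(1, -1))[0]   # steps 48, 51-52
```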
- the 3D shape information is used to create a 3D point cloud.
- An example point cloud 70 for a bovine animal A is shown in Figure 5b. As can be seen in the figure, the animal A is partially constrained between parallel bars, for instance in a race; however, this may not be necessary if sufficient camera views are available.
- the point cloud data is used to create a 3D surface model.
- the surface model is a triangular mesh formed using the well-known method described by Gopi & Krishnan (Gopi, M., & Krishnan, S. (2002). A Fast and Efficient Projection-Based Approach for Surface Reconstruction. SIBGRAPI '02: Proceedings of the 15th Brazilian Symposium on Computer Graphics and Image Processing (pp. 179-186). Washington, DC: IEEE Computer Society).
- Filtering and/or smoothing of the data may occur prior to the creation of the 3D point cloud and/or prior to the creation of the 3D model. This may involve removing or reducing noise or artefacts, filtering parts of the scene so that only information from a specific region of interest is analysed further, and/or other pre-processing steps. Next, one or more points or areas of interest within the model are identified. A number of options for identifying one or more areas of interest may be used, as are described further below.
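A hedged sketch of the point-cloud filtering and surface-reconstruction steps is shown below, using the Open3D library as an assumed stand-in. The patent itself cites the Gopi & Krishnan projection-based method; ball pivoting is used here purely because it is widely available, and the radii are guesses.

```python
import numpy as np
import open3d as o3d  # assumed library choice, not named by the patent

def cloud_to_mesh(points_xyz: np.ndarray) -> o3d.geometry.TriangleMesh:
    """Filter a raw point cloud and reconstruct a triangular mesh."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)
    # Statistical outlier removal: the smoothing/filtering applied to the
    # point cloud before the 3D model is formed.
    pcd, _ = pcd.remove_statistical_outlier(nb_neighbors=20, std_ratio=2.0)
    pcd.estimate_normals(
        o3d.geometry.KDTreeSearchParamHybrid(radius=0.05, max_nn=30))
    radii = o3d.utility.DoubleVector([0.02, 0.04, 0.08])  # metres, a guess
    return o3d.geometry.TriangleMesh.create_from_point_cloud_ball_pivoting(
        pcd, radii)
```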
- Figure 6a shows how once the 3D model has been prepared it is possible to calculate the intersection of the model surface 70 with a plane 71.
- the intersection of the plane with the model surface forms a 2D curve 72 which has the shape of the underlying physical object captured at this location as shown in Figure 6b.
- the curve information for any part of the underlying object can be extracted for further analysis.
- a plurality of plane locations and/or orientations may be used.
- the planes 71 may be limited to a portion of the model, or they may extend across the width or depth of the model.
- the planes 71 may be sloped relative to the horizontal or vertical axis.
- the planes may instead be curved surfaces, for instance passing between three areas of interest.
- the location of the planes 71 or sections of Figure 7 may be chosen to select possible areas of interest although the system is not limited to these particular planes. In fact different planes may be desirable where different characteristics are being evaluated. The number of areas of interest identified may be dependent on the quality of the images available and/or the characteristic being evaluated.
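One simple way to realise the plane intersection is sketched below. Rather than intersecting mesh triangles exactly, it keeps cloud points within a small distance of the plane and expresses them in 2D in-plane coordinates; this is an approximation for illustration, not the patent's prescribed method, and the tolerance is invented.

```python
import numpy as np

def plane_cross_section(points: np.ndarray, origin: np.ndarray,
                        normal: np.ndarray, eps: float = 0.01) -> np.ndarray:
    """Approximate the 2D curve where a plane intersects the model.

    Keeps cloud points lying within `eps` of the plane and expresses them
    in an orthonormal in-plane basis, ordered along the curve.
    """
    n = normal / np.linalg.norm(normal)
    d = (points - origin) @ n                   # signed distance to plane
    slab = points[np.abs(d) < eps]
    # Build an orthonormal in-plane basis (u, v).
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-8:                # plane is horizontal
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)
    rel = slab - origin
    curve = np.stack([rel @ u, rel @ v], axis=1)
    return curve[np.argsort(curve[:, 0])]       # order along the curve
```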
- the information from these curves is then passed to a machine learning (ML) framework which has been trained to evaluate the animal (for example by calculating a body condition score) based on the curve data for each region extracted from the 3D model of the animal.
- the information the ML model uses to calculate the evaluation may be the raw curve points 90 provided by intersecting the model with the intersection plane.
- a curve described by a mathematical function can be fit to the raw curve points and the coefficients of the mathematical function can be provided as features to the ML system for predicting the evaluation or class.
- An example curve 91 is shown in Figure 8a where a curve has been fit to the upper surface of an animal and the distance between intersection points has been measured.
- measurements computed from a 2D curve which has been fit to the points can be calculated from the fitted curves and provided as features to the ML framework.
- Figure 8b shows a radius 92 being fit to a curve 90 of the top surface of an animal.
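The measurements illustrated in Figures 8a and 8b could be computed roughly as follows. This is a sketch assuming an ordered 2D curve with at least five points; the polynomial degree and the algebraic (Kasa) circle fit are illustrative choices, not specified by the patent.

```python
import numpy as np

def curve_features(curve: np.ndarray) -> np.ndarray:
    """Features of a 2D cross-section curve for the ML step: polynomial
    coefficients, chord length between the end points (cf. Figure 8a),
    and the radius of a least-squares circle (cf. Figure 8b)."""
    x, y = curve[:, 0], curve[:, 1]
    coeffs = np.polyfit(x, y, 4)                   # degree 4 is a guess
    chord = float(np.hypot(x[-1] - x[0], y[-1] - y[0]))
    # Algebraic (Kasa) circle fit: solve A @ p = b for p = (cx, cy, c)
    # where x**2 + y**2 = 2*cx*x + 2*cy*y + c and c = r**2 - cx**2 - cy**2.
    A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
    b = x ** 2 + y ** 2
    (cx, cy, c), *_ = np.linalg.lstsq(A, b, rcond=None)
    radius = float(np.sqrt(c + cx ** 2 + cy ** 2))
    return np.concatenate([coeffs, [chord, radius]])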
- all of the aforementioned features can be supplied when training the system and the ML system can determine which set of complementary features can best be used to evaluate the parameter which is under evaluation.
- Other metadata about each animal can also be provided such as animal breed breakdown and other properties that may be relevant to the evaluation.
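The patent does not name a particular ML framework. As an illustration only, the sketch below trains a scikit-learn random forest on placeholder feature rows (such as the curve features above plus metadata) and placeholder BCS labels; every array here is synthetic.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Placeholder training data: one feature row per animal, built from curve
# measurements plus metadata; labels are manually scored BCS classes 1-5.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 8))              # placeholder feature matrix
y = rng.integers(1, 6, size=200)           # placeholder BCS labels

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X, y)                            # training program (Fig. 4, 49-50)
print(model.predict(X[:1]))                # evaluating a new animal (48, 51-52)
```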
- a number of techniques may be used to detect features or regions from which the curve information is to be extracted.
- one technique uses 3D feature descriptors which describe the shape of a region of the object.
- one such 3D descriptor type is described by Rusu et al. (Persistent Point Feature Histograms for 3D Point Clouds (Radu Bogdan Rusu, Zoltan Csaba Marton, Nico Blodow, Michael Beetz), Proceedings of the 10th International Conference on Intelligent Autonomous Systems (IAS-10), Baden-Baden, Germany, 2008), although many others exist.
- an anatomical feature or region of the animal can be identified. If the feature or region is distinctive enough from the rest of the input data, the system can locate the same region in another similar view (e.g., another "frame") of the same underlying animal.
- the detection of anatomical points of interest may use any one or more of seam carving, plane sweeping or conforming the model to a known structure. By extracting several descriptors around the vicinity of the 'centre' of the anatomical part the orientation of the part can also be established.
- the system may exploit known or fixed constraints as well as knowledge of the environment in which the input data was captured to reduce the search space when looking for certain anatomical regions of interest.
- the known anatomy of the animal may be exploited to reduce the search space further or eliminate false positive matches.
- the model may include predetermined limits for anatomical data.
- the orientation of a given plane may also be set relative to other parts of the model, that is, it may be set to be parallel to or perpendicular to other parts.
- for example, its orientation may be set relative to the pin bones at the rear of the animal.
- correct plane orientation can also be determined by rotating or translating the positioned plane and maximizing or minimizing an angle, curvature, length or depth appropriately for the region.
- Various possible plane positions are shown in Figure 7.
- the intersection plane described above may be swept across the model of the animal and anatomical parts of interest identified based on their distinctive shape profile. For instance, when identifying the correct plane placement to extract a vertical cut across the tailhead region, the plane may be swept across the broad area where this region is expected to be, and then the plane position that maximises the depth between the pin bones and the tail can be selected as the point that represents this region.
- a global optimization is applied which minimizes the descriptor distance between candidate locations on anatomical regions while maintaining a feasible object pose.
- This optimization simultaneously minimizes geometric measurements at the proposed anatomical feature locations. For instance, when applying the method to the problem of body condition scoring and determination of the precise location of the backbone, the height of the backbone ridge is maximised and/or the point which maximises or minimizes curvature (for example the radius of an osculating circle fit to the backbone ridge curve) is selected. Curvature of other regions such as the hips may also be maximised or minimized, or other geometric measurements or measurements associated with the hip to hip curve may be used, where the area between a straight line connecting the proposed hip points and the curve of the cow's surface may be maximised. Other similar properties of each anatomical region can be exploited.
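A minimal sketch of the plane-sweep idea for the tailhead region follows. It assumes a coordinate convention (x along the body, y across it, z up), and the sweep range, tolerance, and central-strip fraction are all invented for illustration.

```python
import numpy as np

def sweep_for_tailhead(points: np.ndarray, x_range=(-0.15, 0.15),
                       steps=30, eps=0.01) -> float:
    """Sweep a transverse vertical plane along the body axis over the broad
    area where the tailhead is expected, returning the x position whose
    cross-section maximises the pin-bone-to-tail depression depth."""
    best_x, best_depth = x_range[0], -np.inf
    for x0 in np.linspace(x_range[0], x_range[1], steps):
        sl = points[np.abs(points[:, 0] - x0) < eps]   # thin transverse slab
        if len(sl) < 10:
            continue
        y, z = sl[:, 1], sl[:, 2]
        mid = np.abs(y - y.mean()) < 0.25 * np.ptp(y)  # central strip (tail)
        if not mid.any() or mid.all():
            continue
        depth = z[~mid].max() - z[mid].min()           # pins high, tail low
        if depth > best_depth:
            best_x, best_depth = x0, depth
    return best_x
```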
- the size of the particular animal needs to be calculated and the curve data adjusted based on this size.
- Several geometries or measures of the size of a given animal can be used for this purpose, for example the length of the animal, its width, or its height.
- the position of the sensor from which the 3D shape information is captured (e.g., the 3D camera) relative to the animal may also be taken into account.
- the geometries or measures are calculated by comparing multiple frames or images of the animal, or by comparison with a known feature. For instance a ruler or known length could be included in the image field.
- Calculation of the animal's height can be established through knowledge of how far from the ground a certain part of the animal is. Often when measuring the stature of an animal such as a dairy cow the height at the shoulders is used. If the ground is visible from the perspective of the sensor (for instance a 3D camera) then a ground plane can be fit to 3D points on the ground, and thus the height above the ground for any point on the animal model can be easily calculated. If the depth sensor's position is static then a 3D capture of the area in which the animal stands when the 3D image is taken may also be used to pre-calculate the ground plane for later use in determining the height of any point on the animal model.
- the 3D shape information of the surface S on which the animals are imaged is taken when the animal is not standing on the surface as shown in Figure 5a.
- a point cloud may be formed from this image for use in the method. This provides an initial position from which a height can be extrapolated if required.
- Height may be computed at a consistent location on the animal (just forward of the hips) or an average over the entire length of the backbone from the tailhead forward may be used.
- a single view containing both the relevant part of the animal and the ground can be used to compute the animal's height.
- several points along the backbone of the animal can be used.
- multiple height measurements of the same animal taken over time from successive captures (images) could be aggregated to ensure that any single erroneous measurement does not unduly affect the result.
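A least-squares ground-plane fit of the kind described above might look like the following sketch (NumPy only; it assumes the ground points were captured when no animal was present, as in Figure 5a).

```python
import numpy as np

def fit_ground_plane(ground_pts: np.ndarray) -> np.ndarray:
    """Least-squares plane z = a*x + b*y + c fitted to 3D points captured
    when no animal is standing on the surface (cf. Figure 5a)."""
    A = np.column_stack([ground_pts[:, 0], ground_pts[:, 1],
                         np.ones(len(ground_pts))])
    coeffs, *_ = np.linalg.lstsq(A, ground_pts[:, 2], rcond=None)
    return coeffs                                    # (a, b, c)

def height_above_ground(point: np.ndarray, plane: np.ndarray) -> float:
    """Height of any model point above the pre-computed ground plane."""
    a, b, c = plane
    return float(point[2] - (a * point[0] + b * point[1] + c))
```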
- Calculation of the animal's width may be preferred as it does not rely on knowledge of where the ground is.
- the length of the animal may be used as it does not require knowledge of the position of the ground. However, this does require the sensor to be far enough away from the animal to see its entire length.
- the evaluation of the animal may be based on absolute measurements, such that normalisation of some or all of the data is not required.
- the image capture, point cloud formation and 3D model generation steps are the same as those described above. However, in this embodiment simpler features are extracted and the process of accurate anatomical point detection and plane placement is avoided.
- This method involves finding the rear-most point of the animal.
- the point cloud surface is then divided into a grid and the height from the ground of each individual point in each grid square is computed and then normalized by the height of the particular animal. Measures such as the average height, maximum height, minimum height, and standard deviation of the height of the points, may be computed for each grid square.
- the measurements for all grid squares are then entered into the ML framework to calculate the evaluation of the animal, for example by determining the category of the animal.
- the size of each grid square needs to be large enough to ensure that precise localisation of the individual squares on the surface of the animal does not significantly affect the measurements of the grid. Conversely, the squares must not be so large that the discriminative power of the measurements that describe the region and its depressions (or lack thereof) is lost.
- Normalizing the region under analysis may be achieved by ascertaining the animal's height, as is described above. Any curve data calculated may be normalized by a factor derived from the animal's actual height relative to a standardised height.
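The grid-based measurements and the height normalisation could be combined as in the sketch below; the cell size and the empty-square placeholder values are invented parameters, and the ground plane comes from the fit sketched earlier.

```python
import numpy as np

def grid_features(points: np.ndarray, plane: np.ndarray, cell: float = 0.10,
                  animal_height: float = None) -> np.ndarray:
    """Per-grid-square height statistics over the animal's upper surface.

    `cell` (metres) is a guessed square size; heights are normalised by
    the animal's height when supplied, as the text describes.
    """
    a, b, c = plane
    h = points[:, 2] - (a * points[:, 0] + b * points[:, 1] + c)
    if animal_height:
        h = h / animal_height
    ix = np.floor((points[:, 0] - points[:, 0].min()) / cell).astype(int)
    iy = np.floor((points[:, 1] - points[:, 1].min()) / cell).astype(int)
    feats = []
    for gx in range(ix.max() + 1):
        for gy in range(iy.max() + 1):
            cell_h = h[(ix == gx) & (iy == gy)]
            if len(cell_h) == 0:
                feats += [0.0, 0.0, 0.0, 0.0]        # empty square
            else:
                feats += [cell_h.mean(), cell_h.max(),
                          cell_h.min(), cell_h.std()]
    return np.asarray(feats)                          # input row for the ML step
```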
- the imaging device 11 and the processing device may be integrated into a single unit, or may be separate.
- the function of the processing device 300 may be distributed between two or more processors.
- the processor may be a computer or microprocessor or logic device.
- the system may use a matrix or grid which is superimposed on the 3D model of the animal.
- the method may measure the volume in each matrix grid position, or a shape of the volume in each matrix grid position to provide an input to the machine learning module 51.
- the size of the grid may be adjusted depending on the animal or accuracy required, or a plurality of grid sizes may be used.
- the grid sizing may be adaptive dependent on the curvature or other aspect of the model.
- an apparatus for automatically evaluating an animal is generally referenced by arrow 100.
- the animal is a bovine.
- the apparatus 100 comprises two spaced apart static barrier means 1.
- the static barrier means 1 are typically substantially parallel, as shown in Figure 1.
- the static barrier means 1 may comprise a prior art cattle race, and are spaced apart sufficiently widely to allow an animal A to comfortably walk between them, but not so widely as to allow the animal A to turn around.
- At least one automatically moveable barrier means 2 is provided, typically configured as a pair of pneumatically operated doors.
- the barrier means 2 may be provided as first moveable barrier means at the entrance end of the race (that is, a first end of the static barrier means 1) and/or as second moveable barrier means at the opposite, exit end of the race (at the second, opposite end of the static barrier means 1).
- the moveable barrier means 2a can be opened to allow animals A to proceed into the space between the static barrier means 1 , or closed to prevent animals behind the barrier means 2a from proceeding forward and to prevent animals A in front of the barrier means 2a from moving backward.
- the moveable barrier means 2b can be opened to allow the animal A to proceed out of the space between the static barrier means 1 , or can be closed to bring the animal A to a halt within the space between the static barrier means 1 , and to prevent an animal in front of the moveable barrier means 2b from moving backward into that space.
- a structure 3 comprising a cover 4 may be provided.
- the cover 4, if provided, must be sufficiently high that the animal is comfortable walking through the structure 3, but is preferably sufficiently low that some or all of the animal inside the structure is in shadow.
- the cover 4 may extend partially or fully down the sides of the structure 3.
- the apparatus 100 may be provided with a walk-over weigh platform (not shown).
- the apparatus 100 is provided with animal position sensing means for sensing the position of the animal A.
- the animal position sensing means comprise a photoelectric sensor 6 located at a first position 7 for sensing when a required portion of the animal has moved through the first moveable barrier means 2a.
- the first position 7 is spaced apart from the first moveable barrier means 2a or the first end of the static barrier means 1 by a distance which is approximately equal to the length of the animal, for example around 150 cm.
- the animal position sensing means may also comprise a second photoelectric sensor 9 located at a second position 10 which is substantially adjacent the first moveable barrier means 2a, or if that is not present, is adjacent the first end of the static barrier means 1 .
- the apparatus 100 comprises a 3D imaging device 11.
- the 3D camera 11 is positioned such that one or more portions of the animal which are relevant to the evaluation of the animal can be brought within the field of view of the 3D camera 11. These portions of the animal are described herein as the "area of interest". In some embodiments not all of the areas of interest will be within the field of view of the 3D camera simultaneously, but rather, information about each of the areas of interest may be captured at different times.
- an artificial lighting source 12 may be provided.
- the lighting source 12, if provided, is preferably adjustable (preferably automatically) to provide at least a minimum light level required by the 3D camera 11.
- When measuring characteristics of the animal with the 3D camera it is often preferable that the animal be stationary for a short amount of time in order to improve the accuracy of the measurements taken.
- When assessing animal characteristics such as BCS or lameness it is preferable for the animal's pose to be such that it is standing with even weight distribution and with its joints in a consistent position, in order to get an accurate sense of the animal's body structure and shape without the changes in body shape introduced through the animal being in motion. In addition, it may be preferable to stop the animal in order to allow some further interaction with the animal.
- the apparatus may allow a certain animal to pass through the apparatus without taking any steps to evaluate it, if certain conditions are present, one of those conditions being whether a new evaluation of the animal is "required".
- an evaluation of the animal is said to be "required" if more than a threshold period of time has elapsed since the last evaluation.
- the threshold period may be changed depending on the result of the last evaluation (for example, a cow which was last assessed as lame may be monitored more frequently than other cows which were not last assessed as lame). If an evaluation is "required" then an evaluation will be performed at the next convenient occasion. However, this does not mean that the apparatus will perform an evaluation of the animal the very next time it passes through the apparatus, if certain other conditions (as described further below) mean that it is not possible or not convenient to do so.
- the apparatus 100 may be in communication with a database to record the evaluation, when performed, and to receive information on when the last evaluation was performed and what its result was.
- Other conditions which may be used in making the decision on whether or not to evaluate the animal may include the speed at which the animal is moving, and the distance between the animal and any other animals in front or behind. Animals which are too close to other animals may not be evaluated, as the presence of two animals in the field of view of the 3D camera may result in an incorrect evaluation. In addition, the fact that animals are closely spaced together can be an indication that the animals are becoming bottlenecked in the race (i.e. the animals waiting to proceed through the system are being crowded together) perhaps because one animal has not proceeded through the system as quickly as expected or because animals are coming out of the milking shed faster than anticipated.
- the system may not evaluate any animals (or at least, may not close any of the moveable barriers 2a, 2b) until it detects that a space between the animal currently between the static barriers and the next animal waiting to enter the space between the barriers is at least equal to a predetermined minimum distance. Missing some evaluations during a single milking is not a problem, as properties such as BCS or other metrics change slowly; thus obtaining a measurement once every few days is sufficient.
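The gating conditions discussed above (time since last evaluation, animal speed, and spacing to neighbouring animals) could be combined as in the following sketch; every field name and threshold is an invented example, not a value from the patent.

```python
from dataclasses import dataclass

@dataclass
class AnimalState:
    days_since_last_eval: float
    last_was_lame: bool
    speed_m_s: float          # from the position sensing means
    gap_ahead_m: float        # distance to the animal in front
    gap_behind_m: float       # distance to the animal behind

def should_evaluate(s: AnimalState) -> bool:
    """Illustrative gating logic; all thresholds are invented examples."""
    threshold_days = 3 if s.last_was_lame else 7   # lame cows re-checked sooner
    required = s.days_since_last_eval >= threshold_days
    safe = (s.speed_m_s < 1.5                      # slow enough to halt safely
            and s.gap_ahead_m > 1.0                # no bottleneck ahead
            and s.gap_behind_m > 1.0)              # no animal crowding behind
    return required and safe
```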
- where oestrus detection is being performed, however, gate 2a will close whenever required to ensure separation and valid heat detection results, as timely assessment of oestrus is critical to the farmer. Injuring an adjacent animal by closing a moveable barrier 2a, 2b on it is avoided, as is closing a barrier 2a, 2b at a time which might startle an adjacent animal.
- animals which are moving rapidly through the apparatus may not be evaluated as they may be moving too fast for accurate information to be collected from the 3D camera, and too fast to safely bring them to a halt by closing the second moveable barrier means 2b.
- the apparatus may close the second movable barrier means 2b to bring the animal to a complete halt while 3D information of the area(s) of interest is captured.
- the start of the signal from the animal presence sensor 16 is allowed to initiate the command to close moveable barrier 2b.
- the presence of an animal at drafting gate entrance sensor 13 inhibits this command, preventing moveable barrier 2b from closing in the case where there is insufficient distance between animals.
- the apparatus may collect the 3D information without closing the second moveable barrier means 2b. This may occur in particular when the system is used to evaluate cows which are waiting to be milked in a rotary milking shed. Operation of a preferred embodiment of the apparatus 100 is as follows:
- the moveable barrier means doors 2a, 2b are normally in the open position so that an animal A can move past the moveable barrier means 2a and into the field of view of the 3D camera 11.
- the first photoelectric sensor 6 detects the presence of the head or chest of the animal A. Triggering of the first photoelectric sensor 6 may cause the moveable barrier means 2a to close behind the animal A, preventing the animal from moving backwards, and preventing the head of another animal from entering the field of view of the 3D camera.
- the second moveable barrier means 2b may be closed, or both barrier means 2a, 2b may be closed.
- the 3D camera 11 may be triggered to record one or more images. Alternatively the 3D camera may be triggered a predetermined time after the first photoelectric sensor 6 detects the presence of the animal, or when a video analysis of the animal shows that the animal is in a suitable position and pose for information to be captured.
- the 3D camera 11 records a plurality of images, for example three images. Each image may have a different exposure setting.
- the system may analyse a video signal to determine when the animal is moving the least and capture an image at that time.
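Selecting the frame in which the animal moved least could be approximated as below (a sketch assuming at least two depth frames of equal shape; a simple mean absolute difference between consecutive frames is used as the motion measure).

```python
import numpy as np

def least_motion_index(depth_frames: list) -> int:
    """Pick the frame where the animal moved least: the one whose mean
    absolute depth difference from the previous frame is smallest."""
    diffs = [np.abs(b - a).mean()
             for a, b in zip(depth_frames, depth_frames[1:])]
    return int(np.argmin(diffs)) + 1   # +1 because diff i compares frames i, i+1
```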
- the position sensor may comprise any type of applicable position sensor, for example an acoustic range sensor, a motion sensor, a camera based system performing video analysis, a thermal camera, a depth camera, a laser based device or the like. These may replace one or more of the photo eyes 6, 9.
- the position sensor may be capable of assessing the speed at which the animal is moving through the apparatus 100.
- This may comprise a plurality of photoeye sensors or a video system.
- the apparatus 100 will be used in conjunction with an automated drafting gate system 200.
- the automated drafting gate system 200 is provided with a sensor 13 (for example a photoelectric sensor) to indicate that the animal has passed through the drafting gate entrance 14; a signal from this sensor 13 may be used to indicate that the moveable barrier means 2a, if closed, can be opened.
- the apparatus may utilize moveable barrier means which are part of an existing automated drafting gate system 200 as the second moveable barrier means 2b.
- the system comprises an EID reader 15.
- a further sensor 16 may be positioned to indicate that the animal is in position for the EID sensor to take a reading of an EID tag associated with the animal. If the EID reader 15 has not obtained a reading within a predetermined time of the sensor 16 indicating that the animal is in position, then the moveable barrier means 2a may be kept closed until the animal has moved past another sensor 17 positioned at an exit of the drafting gates (if provided).
- when the moveable barrier means 2a opens to allow access to a second animal, the animal A which has just been processed by the apparatus 100 will be motivated to move away.
- further means for motivating the first animal A to move away from the moveable barrier means 2 may be provided, for example a nozzle configured to squirt compressed air towards the animal, or means (possibly pneumatic) for making a suitable noise.
- the 3D camera 11 is in communication with a processing means 300 which performs an analysis of the images taken from the 3D camera 11 to calculate an evaluation of the animal A, and to determine whether an evaluation of the animal is required, and if one is required, whether the correct conditions (e.g. speed, proximity to other animals) are present for an evaluation to occur.
- the evaluation comprises categorising of the animal, for example by calculating a body condition score (BCS) between 1 and 5 for the animal.
- the processing means 300 may send a control signal to the automated drafting gates 200 depending on the result of the evaluation. For example, in one embodiment cows with a normal BCS may be drafted into one area (for example an entrance to a milking shed), while cows with a low BCS may be drafted into an area where additional feed is available. Cows which have failed to be identified by the electronic ID reader may be drafted into a third area.
- an animal may not be drafted into a special area as soon as the result of the evaluation indicates that this may be necessary. Instead, a record may be kept that the animal must be drafted at a later time.
- the position sensing means may comprise the 3D camera 11 and processing means 300.
- the first and second sensors 6, 9 may not be required, as the apparatus may be capable of determining when the animal is in the correct position to capture an image of the area of interest without the use of additional sensors.
- the 3D camera 11 may operate substantially continuously while the apparatus is in use.
- the position sensing means are operable to determine whether a second animal is within a predetermined distance, for example 100 cm, of the moveable barrier means 2a.
- the position sensing means may comprise a further photoelectric sensor located substantially 100 cm in front of the moveable barrier means 2a.
- this additional sensor may be used to determine whether another animal is within a predetermined distance of the moveable barrier means 2a.
- This information may be used to determine if the moveable barrier means 2a, 2b are to be held open (e.g., if the animals are bottlenecked in the area leading to the apparatus), and may also be used to determine that the moveable barrier means 2a, 2b must be closed to ensure separation of the animals (e.g. if the evaluation includes oestrus detection).
- the decision on when to open the moveable barrier may be based on further criteria, sensors or characteristics.
- the system may use:
- the milking cycle, i.e. a.m. or p.m.
- Processing of the information from the 3D camera to calculate the evaluation of the animal may be performed using any of the embodiments described herein.
- the processor may use the output from the 3D camera and/or an additional 2D camera to detect the presence of an oestrus indicator on the animal, and may include an analysis of the indicator in the calculation of the evaluation, or as part of a separate evaluation calculation.
- the oestrus indicator may be a pressure sensitive heat detection patch, or any suitable alternative oestrus indicator.
- the oestrus indicator may comprise a tail paint marking.
- the oestrus indicator may comprise a patch which has different infra-red or human visible characteristics when activated.
- the present invention provides an apparatus and method for automatically evaluating an animal which can be operated independently of a rotary milking shed and which creates a minimal disruption to the movement of the animals through the race.
Abstract
The application relates to a method and apparatus for automatically evaluating an animal based on its physical appearance, and in particular, but not exclusively, to a method and apparatus for determining a body condition score (BCS) or lameness, particularly for a dairy cow. The present application is also directed to methods for automatically evaluating a body condition score for cattle. The method preferably involves receiving three dimensional (3D) shape information corresponding to a space occupied by the animal; creating a 3D point cloud from the 3D shape information; creating a 3D model based on the shape information; calculating one or more representative measurements from the 3D model; and evaluating the animal based on the measurements.
Description
METHOD AND APPARATUS FOR EVALUATING AN ANIMAL
The present invention relates to a method and apparatus for automatically evaluating an animal based on its physical appearance, and in particular, but not exclusively, to a method and apparatus for determining a body condition score (BCS), particularly for a dairy cow. The present application is also directed to methods for automatically evaluating a body condition score for cattle.
Background
Profitability from farmed animals, in particular dairy cows, is enhanced by regular evaluation of a number of parameters, for example oestrus status, body condition, lameness/locomotion, teat conformation and the like. Often such assessments result in the categorisation of the animal according to a recognised scale, for example 1-5.
Attempts have been made in the past to automate the evaluation of some of these parameters, in particular oestrus status. However, many of these parameters, for example body condition, are still evaluated manually in most cases. This can lead to the information obtained being unreliable through inconsistencies between evaluations performed by different people, or through the same person evaluating different animals inconsistently.
One method of evaluating BCS is described in international publication WO2010/063527. Here a 3D camera is used to obtain a three dimensional image of the animal. The image is processed and a BCS score is determined based on a statistical analysis of a number of features of the data. The method described in WO2010/063527 relies on a global analysis approach.
A number of conditions must be satisfied for the automated systems of the prior art to function correctly: the region of the animal which is of interest must be within the area captured by the image; the region must not be obscured; and the image must not contain relevant features from two or more different animals. It is also strongly preferable that the system evaluates the animal quickly enough to allow the animal to be drafted immediately after the image is collected.
It would be advantageous to develop a system which was not reliant on integration with a rotary milking shed in order to function properly, and which allows a high throughput of animals.
Throughout the description and claims the term "2D" is understood to mean "two dimensional", and the term "3D" is understood to mean "three dimensional". The reference to any prior art in the specification is not, and should not be taken as, an acknowledgement or any form of suggestion that the prior art forms part of the common general knowledge in any country.
Object of the Invention
It is an object of the present invention to provide a system, method and/or apparatus for automatically evaluating an animal which will overcome and/or ameliorate problems with such systems, methods and apparatus at present, or which will at least provide a useful choice.
Other objects of the present invention may become apparent from the following description, which is given by way of example only.
Brief Summary of the Invention
According to one aspect of the present invention there is provided a method of calculating an evaluation of an animal comprising:
i. Receiving three dimensional (3D) shape information corresponding to a space occupied by the animal;
ii. Creating a 3D point cloud from the 3D shape information;
iii. Creating a 3D model based on the shape information;
iv. Calculating one or more representative measurements from the 3D model; and
v. Evaluating the animal based on the measurements.
Preferably, the step of evaluating the animal comprises the step of calculating a category for the animal.
Preferably the 3D model comprises a triangular mesh. Alternatively the 3D model comprises a grid.
Preferably the method comprises the step of smoothing and/or filtering the 3D shape information prior to the step of creating the 3D point cloud.
Preferably the method comprises the step of smoothing and/or filtering the 3D point cloud prior to the step of forming the 3D model. Preferably the method comprises the step of identifying points or areas within the model which are representative of one or more anatomical features or regions.
Preferably the step of receiving 3D shape information comprises receiving information from a plurality of frames of 3D shape information.
Preferably the method comprises the step of pre-processing the 3D shape information before the 3D model is formed.
Preferably the method comprises the step of normalizing the model prior to (or simultaneously with) the step of calculating the one or more representative measurements.
Preferably the step of normalizing the model comprises the step of determining one or more geometries of the animal. Preferably the step of normalising the model comprises the step of determining one or more of the animal's height, length or width.
Preferably the step of determining the animal's height comprises the step of determining a position of a surface on which the animal is standing.
Preferably the step of determining the animal's height comprises the step of averaging data from several frames or a comparison with a feature of known size.
Preferably the step of determining a position of a surface on which the animal is standing comprises the step of receiving 3D shape information from the surface when the animal is not standing on the surface.
Preferably the step of identifying data corresponding to one or more anatomical regions comprises comparing the data to one or more 3D feature descriptors.
Preferably, where data corresponding to a plurality of anatomical regions is identified, an orientation of the animal is calculated from the relative positions of the anatomical regions or from an analysis of the outline of the animal.
Preferably the step of identifying data corresponding to one or more anatomical regions comprises the step of excluding results which result in one or more parameters of the 3D model falling outside predetermined limits.
Preferably the one or more parameters comprise one or more of a predetermined angle, curvature, length or depth, or a relationship to one or more other anatomical regions. Preferably the step of identifying data corresponding to one or more anatomical regions comprises performing a global optimisation to minimise or maximise, within predetermined limits, a distance between preselected anatomical features.
Preferably the step of identifying data corresponding to one or more anatomical regions comprises performing a global optimisation to minimise or maximise, within predetermined limits, a distance between preselected anatomical features, or to minimise or maximise, within predetermined limits, another geometric descriptor of the 3D model.
Preferably the step of calculating one or more representative measurements from the 3D model comprises the step of modelling an intersection plane through the one or more anatomical regions.
Preferably the step of modelling an intersection plane through the region of interest comprises the step of rotating or translating the intersection plane to maximise or minimise a selected parameter.
Preferably the step of calculating one or more representative measurements comprises the step of fitting a curve to the two dimensional shape information. Preferably the one or more representative measurements comprise one or more of coefficients of the curve, the length between two points on the curve, an area defined by the curve, a depth associated with the curve, or another descriptor of the curve.
Preferably the step of identifying data corresponding to one or more anatomical regions comprises the step of modelling an intersection plane through the region of interest to define two dimensional shape information corresponding to a two dimensional curve.
Preferably the one or more representative measurements comprise one or more of the average height, maximum height or minimum height of each grid square.
Preferably the step of forming a 3D model comprises defining a grid based on the 3D shape information.
Preferably the step of receiving 3D shape information corresponding to an area occupied by the animal comprises the step of receiving information from a 3D imaging device. Preferably the 3D imaging device comprises one or more of LiDAR, structure from motion devices, stereo or multiview camera devices, depth cameras based on time-of-flight or any similar methodology, lightfield cameras, or any device which provides depth information for a scene being captured. Preferably the category relates to one or more of body condition, lameness, udder conformation or a trait other than production (TOP), for example height at shoulder.
According to a second aspect of the present invention there is provided an apparatus for calculating an evaluation of an animal comprising a three dimensional (3D) imaging device for collecting 3D shape information corresponding to a space occupied by the animal and a processing device in communication with the 3D imaging device which is configured to:
i. Create a 3D point cloud from the 3D shape information;
ii. Create a 3D model based on the shape information;
iii. Calculate one or more representative measurements from the 3D model; and
iv. Calculate the evaluation based on the measurements.
Preferably, the processing device calculates a category for the animal.
According to a further aspect of the present invention there is provided an apparatus for automatically evaluating animals, the apparatus comprising a structure comprising two spaced apart static barrier means, a spacing between the barrier means selected to allow an animal to pass between the barrier means, the apparatus further comprising a three dimensional (3D) imaging device for selectively collecting 3D shape information of an area of interest of the
animal when the animal is positioned in a space between the static barrier means and processing means for selectively evaluating the animal based on the 3D shape information, if collected, the apparatus further comprising first moveable barrier means provided at a first end of the static barrier means for selectively preventing a second animal from entering the space between the static barrier means and/or second moveable barrier means provided at a second end of the static barrier means for selectively preventing the animal from moving out of the space between the static barrier means.
Preferably, in use, the apparatus determines whether a selected animal is to be evaluated based, in part, on a space between the animal and an adjacent animal.
Preferably, in use, the apparatus determines whether the (or a) selected animal is to be evaluated based, in part, on a speed at which the animal is moving. Preferably, in use, the apparatus determines whether the (or a) selected animal is to be evaluated based, in part, on an assessment of whether a new evaluation of the animal is required.
Preferably, in use, the apparatus determines whether a selected animal is to be evaluated based solely on one or more of:
a space between the animal and an adjacent animal;
a speed at which the animal is moving; and
an assessment of whether a new evaluation of the animal is required.

Preferably the apparatus comprises the second moveable barrier means wherein, in use, the apparatus closes the second moveable barrier means when the apparatus determines that a new evaluation of the (or a) selected first animal is required, and a distance between the first animal and a second animal which is in front of the first animal is greater than a predetermined distance.
Preferably the apparatus comprises the first moveable barrier means, wherein, in use, the apparatus closes the first moveable barrier means when the apparatus determines that a new evaluation of the (or a) selected first animal is required, and a distance between the first animal and a second animal which is behind the first animal is greater than a predetermined distance.
Preferably the apparatus comprises the second moveable barrier means, wherein, in use, the apparatus closes the moveable barrier means when the apparatus determines that a new
evaluation of the (or a) selected animal is required, and the selected animal is travelling at a speed which is greater than a predetermined speed.
Preferably, the assessment of whether a new evaluation of the selected animal is required is dependent, in part, on a length of time since the last evaluation of the animal.
Preferably, the assessment of whether a new evaluation of the selected animal is required is dependent, in part, on the result of a previous evaluation of the selected animal. Preferably, the apparatus further comprises animal position sensing means.
Preferably, the animal position sensing means comprise a first sensor means for detecting when a fore part of the animal is in a first position which is indicative of the entire animal having moved between the static barrier means.
Preferably, the animal position sensing means comprise a second sensor means for detecting when a rear of the animal has moved beyond a second position which is adjacent the first moveable barrier means. Preferably, the first sensor means comprises a photoelectric sensor.
Preferably, the second sensor means comprises a photoelectric sensor.
Preferably, the animal position sensing means comprises the 3D imaging device and the processing means.
Preferably, the apparatus comprises an electronic ID reader. Preferably, the 3D imaging device comprises a 3D camera.
Preferably, the apparatus comprises a lighting means for artificially lighting an area which is within a field of view of the 3D imaging device.
Preferably, the intensity of the light inside the structure is adjustable.
Preferably, the apparatus sends a signal to an automatic drafting gate depending on the evaluation of the animal.
Preferably, the animal position sensing means comprises a drafting gate entry sensor. Preferably, the animal position sensing means comprises a drafting gate exit sensor. Preferably the evaluation of the animal performed by the apparatus comprises a calculation of a body condition score.
According to a further aspect of the present invention there is provided a method of automatically evaluating animals, the method comprising the steps of:
i. determining whether a selected animal is to be evaluated based on one or more of: a space between the animal and an adjacent animal;
a speed at which the animal is moving; and
an assessment of whether a new evaluation of the animal is required.
ii. if the animal is selected to be evaluated, collecting 3D shape information of an area of interest of the animal when the animal is in a space between two spaced apart static barrier means, and processing the 3D shape information to evaluate the animal based on the 3D shape information.
Preferably, the method comprises the step of closing a first moveable barrier means to prevent a second animal from entering into the space between the static barrier means if a distance between the animal and a second animal which is behind it is greater than a predetermined distance.
Preferably, the method comprises the step of closing a second moveable barrier means to prevent the animal from moving out of the space between the static barrier means, if a speed of the animal is greater than a predetermined maximum speed.
Preferably, the method comprises the step of closing the (or a) second moveable barrier means to prevent the animal from moving out of the space between the static barrier means if a distance between the animal and a second animal which is in front of it is greater than a predetermined distance.
Preferably, the method comprises receiving a signal from a first animal position sensor means when a fore part of the animal is in a first position which is indicative of the entire animal having moved into the space between the static barrier means.
Preferably, the method comprises receiving a signal from a second animal position sensor means when a rear of the animal has passed a second position which is adjacent a first end of the static barrier means. Preferably, the method comprises capturing the 3D shape information after receiving the signal from the second animal position sensing means.
Preferably, the method comprises the step of using a 3D camera to capture the 3D shape information.
Preferably, the method comprises the step of processing the 3D shape information to determine when the animal is in a suitable position to obtain 3D information to perform the evaluation.
Preferably, the method comprises the step of processing the 3D shape information to determine whether the animal is in a suitable stance to obtain 3D information to perform the evaluation.
Preferably, the method comprises updating a herd management system depending on the evaluation of the animal. Preferably, the method comprises sending an automatic drafting gate a signal which is representative of the evaluation of the animal.
Preferably the evaluation of the animal comprises a calculation of a body condition score. Preferably the method and/or apparatus of the preceding aspects is combined to form a system.
In a further aspect the invention may be broadly said to consist in a method for calculating an evaluation of an animal, comprising the evaluation method described above used within the apparatus for automatically evaluating an animal. Preferably the combined apparatus also forms an aspect of the invention.
The invention may also be said broadly to consist in the parts, elements and features referred to or indicated in the specification of the application, individually or collectively, in any or all combinations of two or more of said parts, elements or features, and where specific integers are mentioned herein which have known equivalents in the art to which the invention relates, such known equivalents are deemed to be incorporated herein as if individually set forth.
According to a still further aspect of the present invention, a system and/or method for calculating an evaluation of an animal is substantially as herein described, with reference to any one or more of the accompanying embodiments and drawings.
Further aspects of the invention, which should be considered in all its novel aspects, become apparent from the following description given by way of example of possible embodiments of the invention.
Brief Description of the Drawings
Figure 1 is a diagrammatic side view of an apparatus according to one embodiment of the present invention.
Figure 2 is a diagrammatic top view of the apparatus of Figure 1, with the cover removed for clarity and the drafting gate positions shown in outline.
Figure 3 is a schematic view of a system for calculating an evaluation of an animal
according to one embodiment of the present invention.
Figure 4 is a flow chart of the operation of an embodiment of the system used to calculate body condition score.
Figure 5a, 5b are embodiments of a 3D point cloud of (a) an empty location and (b) an animal in the location, as can be used in an embodiment of the method or system.
Figure 6a, 6b show the intersection of a plane with an animal and the resulting 2D curve points measured in an embodiment of the system.
Figure 7 shows a plurality of possible planes intersecting with an animal.
Figure 8a, 8b are diagrams showing calculation of parameters from a 2D curve as in Figure 6b.
Brief Description of Preferred Embodiments
Referring first to Figure 3, a system for calculating an evaluation of an animal is generally referenced by arrow 110. In one embodiment the system estimates which class from a plurality of predetermined physical classes an animal falls into, for example by calculating a body condition score (BCS) between 1 and 5.
The system comprises an imaging device 11 which is capable of collecting 3D shape information corresponding to an area of interest. The imaging device 11 may collect information on the entire animal, or only a portion of the animal which has features which are representative of the particular parameter which is to be categorised.
The imaging device 11 may include one or more of a LiDAR, structure from motion devices, stereo or multiview camera devices, depth cameras based on time-of-flight or any similar methodology, lightfield cameras, or any other device which provides depth information for the scene being captured. The imaging device may be referred to herein as a "3D camera". The 3D camera is capable of taking at least one photo or a series of photos. The series of photos may be, or may be considered as, a plurality of frames in sequence. In some embodiments there is a comparison between frames, or the evaluation of a plurality of frames is used to confirm the evaluation.
In some embodiments the imaging device 11 is statically mounted, while in other embodiments it may be attached to or otherwise carried by a person.
The imaging device is in communication with a processing device 300, typically a computer. The processing device receives the 3D shape information from the imaging device 11 and calculates an evaluation of the animal (e.g. by determining which class the selected animal belongs to) based on the 3D shape information.
The class calculated may be output to a suitable visual output device such as a computer monitor, a portable device such as a tablet, or a wearable display such as Google Glass™.
Additionally or alternatively a record of the evaluation may be updated in an electronic record or database, for example MINDA™ by Livestock Improvement Corporation.
The parameter investigated may be one of a number of possible categorisations of an animal which are based on its appearance. Examples include body condition score, lameness, udder conformation or a trait other than production (TOP), for example height at shoulder.
Many of these evaluations may use techniques broadly analogous to those used for body condition scoring. For instance, lameness detection can be achieved using a similar approach. In an example embodiment lameness is detected by analysing the shape of the spine of either a stationary cow or a cow in motion. The analysis can investigate whether the animal has an arched or flat back. Lame cows tend to stand and walk with a progressively arched back as the severity of the lameness increases, whereas healthy cows stand and walk with a flat back. Through automatic identification of the spine of the animal the 2D curve can be extracted from the 3D shape information by intersecting a plane through the spine of the cow, running from just in front of the hips to roughly the shoulders of the cow. Variations of the plane or location of investigation are
possible. Measurements can then be computed which describe how flat or arched this curve is and hence predict whether this cow is lame and the degree of lameness. This can either be performed on a single frame for a stationary cow, or over a series of successive frames when analysing a cow in motion.
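By way of illustration, the following is a minimal sketch of one way such a flatness measurement might be computed, assuming the 2D spine curve has already been extracted by the plane intersection described above; the function name and the normalisation are illustrative assumptions, not a prescribed formula.

```python
# Sketch only: quantify how arched a 2D spine curve is, assuming the curve
# has already been extracted as (x, z) points ordered from hips to shoulders.
import numpy as np

def spine_arch_ratio(curve: np.ndarray) -> float:
    """Maximum deviation of the curve from the chord joining its endpoints,
    normalised by the chord length; larger values suggest a more arched back."""
    p0, p1 = curve[0], curve[-1]
    chord = p1 - p0
    chord_len = np.linalg.norm(chord)
    d = curve - p0
    # Perpendicular distance of every curve point from the chord
    # (magnitude of the 2D cross product).
    deviations = np.abs(chord[0] * d[:, 1] - chord[1] * d[:, 0]) / chord_len
    return float(deviations.max() / chord_len)
```

Fed with curves from successive frames, the same measure can be aggregated over a walking sequence, matching the single-frame and multi-frame options described above.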
Figure 4 shows a possible method for operation of an embodiment of the system which is explained in more detail below. The sensor, imaging device or 3D camera captures 40 an image or set of images (frames). The images may be smoothed or filtered 41 to improve the modelling or accuracy of the technique. The images are then used to form a 3D point cloud 43. The point cloud allows a 3D surface to be viewed without having to connect each of the points. The point cloud can again be smoothed or filtered 42 before the point cloud is used to create a 3D connected model 44. The 3D model should now be an accurate and relatively smooth replication of the animal. In a preferred embodiment the 3D connected model is used to detect anatomical points of interest 45. This can be used to measure relative locations or to extract curves 46 between areas of interest. The extracted curves or model portions are preferably two dimensional to allow measurements of the curves to be taken 47. However measurements may also be taken from 3D segments. The curve measurements may be compared directly to a known set, but are preferably provided 48 to a machine learning model 51 to determine a characteristic(s) of the animal. The machine learning model has been trained on a known data set 49 through a training program 50. The characteristic of the animal may be BCS 52, lameness, or another characteristic determinable from the geometry of the animal.
In one embodiment the 3D shape information is used to create a 3D point cloud. An example point cloud 70 for a bovine animal A is shown in Figure 5b. As can be seen in the figure the animal A is partially constrained between parallel bars, for instance in a race; however this may not be necessary if sufficient camera views are available. Next the point cloud data is used to create a 3D surface model. In a preferred embodiment the surface model is a triangular mesh formed using the well-known method described by Gopi & Krishnan (Gopi, M., & Krishnan, S. (2002). A Fast and Efficient Projection-Based Approach for Surface Reconstruction. SIBGRAPI '02, Proceedings of the 15th Brazilian Symposium on Computer Graphics and Image Processing (pp. 179-186). Washington, DC: IEEE Computer Society, 2002).
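As a simplified stand-in for the cited projection-based reconstruction, the sketch below assumes a roughly top-down camera, in which case the cloud behaves as a height field and a Delaunay triangulation of the horizontal coordinates yields a serviceable triangular mesh; this is not the Gopi & Krishnan algorithm itself.

```python
# Sketch only: a simple stand-in for the cited projection-based method.
# With a roughly top-down camera the cloud is close to a height field, so a
# Delaunay triangulation of the (x, y) coordinates yields a triangular mesh.
import numpy as np
from scipy.spatial import Delaunay

def mesh_from_cloud(points: np.ndarray):
    """points: (N, 3) x, y, z samples; returns vertices and triangle indices."""
    tri = Delaunay(points[:, :2])    # triangulate in the ground plane
    return points, tri.simplices     # each row indexes three vertices
```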
Filtering and/or smoothing of the data may occur prior to the creation of the 3D point cloud and/or prior to the creation of the 3D model. This may involve removing or reducing noise or artefacts, filtering parts of the scene so that only information from a specific region of interest is analysed further, and/or other pre-processing steps.
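A minimal sketch of such pre-processing follows, assuming an axis-aligned region of interest around the race and a neighbour-distance outlier test; the parameter values are illustrative only.

```python
# Sketch only: crop the capture to the region of interest (the race) and drop
# statistical outliers before building the model. Thresholds are illustrative.
import numpy as np
from scipy.spatial import cKDTree

def filter_cloud(points, roi_min, roi_max, k=8, std_ratio=2.0):
    # Keep only points inside the axis-aligned region of interest.
    inside = np.all((points >= roi_min) & (points <= roi_max), axis=1)
    pts = points[inside]
    # Discard points whose mean distance to their k neighbours is anomalous.
    dists, _ = cKDTree(pts).query(pts, k=k + 1)  # column 0 is the point itself
    mean_d = dists[:, 1:].mean(axis=1)
    keep = mean_d < mean_d.mean() + std_ratio * mean_d.std()
    return pts[keep]
```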
Next, one or more points or areas of interest within the model are identified. A number of options for identifying one or more areas of interest may be used, as are described further below.
Figure 6a shows how once the 3D model has been prepared it is possible to calculate the intersection of the model surface 70 with a plane 71. The intersection of the plane with the model surface forms a 2D curve 72 which has the shape of the underlying physical object captured at this location as shown in Figure 6b. By positioning, orientating and sizing the intersection plane, the curve information for any part of the underlying object can be extracted for further analysis. As shown in Figure 7 a plurality of plane locations and/or orientations may be used. In some instances the planes 71 may be limited to a portion of the model, or they may extend across the width or depth of the model. In some embodiments the planes 71 may be sloped relative to the horizontal or vertical axis. In yet further embodiments the planes may be curves, for instance between three areas of interest. The location of the planes 71 or sections of Figure 7 may be chosen to select possible areas of interest although the system is not limited to these particular planes. In fact different planes may be desirable where different characteristics are being evaluated. The number of areas of interest identified may be dependent on the quality of the images available and/or the characteristic being evaluated.
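The intersection itself reduces to a test on each mesh edge. The sketch below, with hypothetical names, clips the triangular mesh against a plane given by a point and a normal and returns the resulting section as ordered 2D points.

```python
# Sketch only: clip a triangular mesh against a plane (point p0, normal n)
# and return the section as 2D points ordered along the cut, analogous to
# curve 72. Shared edges contribute duplicate points, which a sketch tolerates.
import numpy as np

def plane_curve(vertices, triangles, p0, n):
    n = n / np.linalg.norm(n)
    d = (vertices - p0) @ n                     # signed distance of each vertex
    pts = []
    for a, b, c in triangles:
        for i, j in ((a, b), (b, c), (c, a)):
            if d[i] * d[j] < 0:                 # this edge crosses the plane
                t = d[i] / (d[i] - d[j])
                pts.append(vertices[i] + t * (vertices[j] - vertices[i]))
    if not pts:
        return np.empty((0, 2))
    pts = np.asarray(pts)
    # Express the section in the plane's own 2D (u, v) coordinate frame.
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-9:                # normal is vertical: pick any u
        u = np.array([1.0, 0.0, 0.0])
    u = u / np.linalg.norm(u)
    v = np.cross(n, u)
    curve = np.column_stack([(pts - p0) @ u, (pts - p0) @ v])
    return curve[np.argsort(curve[:, 0])]       # order points along the cut
```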
The information from these curves is then passed to a machine learning (ML) framework which has been trained to evaluate the animal (for example by calculating a body condition score) based on the curve data for each region extracted from the 3D model of the animal. As shown in Figures 8a and 8b the information the ML model uses to calculate the evaluation may be the raw curve points 90 provided by intersecting the model with the intersection plane. Alternatively, or in combination, a curve described by a mathematical function can be fitted to the raw curve points and the coefficients of the mathematical function can be provided as features to the ML system for predicting the evaluation or class. An example curve 91 is shown in Figure 8a where a curve has been fitted to the upper surface of an animal and the distance between intersection points has been measured.
In an alternate embodiment measurements such as lengths, curvature information, depths, areas, or any other descriptive measure can be computed from the fitted curves and provided as features to the ML framework. Figure 8b shows a radius 92 being fitted to a curve 90 of the top surface of an animal. Alternatively all of the aforementioned features can be supplied when training the system and the ML system can determine which set of complementary features can best be used to evaluate the parameter which is under evaluation. Other metadata about each animal can also be provided, such as animal breed breakdown and other properties that may be relevant to the evaluation.
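A sketch of how such curve measurements might be assembled into a feature vector follows, assuming an ordered 2D section curve; the polynomial degree, the Kasa circle fit standing in for the radius 92 of Figure 8b, and the choice of features are all assumptions.

```python
# Sketch only: turn an ordered 2D section curve into a feature vector for the
# ML framework: polynomial coefficients, the endpoint chord length, the area
# under the curve, and a least-squares (Kasa) circle radius akin to radius 92.
import numpy as np

def curve_features(x, z, degree=4):
    coeffs = np.polyfit(x, z, degree)             # coefficients of fitted curve
    chord = np.hypot(x[-1] - x[0], z[-1] - z[0])  # length between end points
    area = np.trapz(z - z.min(), x)               # area defined by the curve
    # Kasa circle fit: minimise ||x^2 + z^2 - 2ax - 2bz - c||.
    A = np.column_stack([2 * x, 2 * z, np.ones_like(x)])
    (a, b, c), *_ = np.linalg.lstsq(A, x**2 + z**2, rcond=None)
    radius = np.sqrt(c + a**2 + b**2)             # a curvature descriptor
    return np.concatenate([coeffs, [chord, area, radius]])
```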
Detecting points of interest
A number of techniques may be used to detect features or regions from which the curve information is to be extracted.
One option for identifying an area of interest is the use of 3D feature descriptors which describe the shape of a region of the object. One example of a 3D descriptor type is described by Rusu et al (Persistent Point Feature Histograms for 3D Point Clouds (Radu Bogdan Rusu, Zoltan Csaba Marton, Nico Blodow, Michael Beetz), Proceedings of the 10th International Conference on Intelligent Autonomous Systems (IAS-10), Baden-Baden, Germany, 2008.) although many others exist.
By comparing a reference shape descriptor to a descriptor derived from the 3D model an anatomical feature or region of the animal can be identified. If the feature or region is distinctive enough from the rest of the input data, the system can locate the same region in another similar view (e.g., another "frame") of the same underlying animal. In other
embodiments the detection of anatomical points of interest may use any one or more of seam carving, plane sweeping or conforming the model to a known structure. By extracting several descriptors around the vicinity of the 'centre' of the anatomical part the orientation of the part can also be established.
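As a toy illustration of descriptor matching, far simpler than the point feature histograms cited above, the sketch below describes each candidate point by a normalised histogram of neighbouring height offsets and selects the candidate closest to a reference descriptor; all names and parameters are assumptions.

```python
# Sketch only: a toy local shape descriptor and nearest-descriptor matching.
import numpy as np
from scipy.spatial import cKDTree

def local_descriptor(points, index, tree, radius=0.1, bins=8):
    nbrs = points[tree.query_ball_point(points[index], radius)]
    offsets = nbrs[:, 2] - points[index, 2]       # heights relative to centre
    hist, _ = np.histogram(offsets, bins=bins, range=(-radius, radius))
    return hist / max(hist.sum(), 1)              # normalised shape signature

def best_match(points, reference, radius=0.1):
    tree = cKDTree(points)
    descs = np.array([local_descriptor(points, i, tree, radius)
                      for i in range(len(points))])
    return int(np.argmin(np.linalg.norm(descs - reference, axis=1)))
```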
In some embodiments the system may exploit known or fixed constraints, as well as knowledge of the environment in which the input data was captured, to reduce the search space when looking for certain anatomical regions of interest. In the case of animals, the known anatomy of the animal may be exploited to reduce the search space further or eliminate false positive matches. For instance the model may include predetermined limits for anatomical data
(predetermined angle, curvature, length or depth, or a relationship to one or more other anatomical regions). If these limits are exceeded data may be removed or a curve used to extrapolate nearby data.
Once several anatomical points or regions in the input data have been located, this information, combined with the known anatomy of the animal, allows the orientation of the animal to be determined and consequently the orientation of a given intersection plane. In a further example the outline of the animal can be used to perform the orientation. The outline may be considered a collection of anatomical points or regions herein.
If additional constraints are known (for instance the animal is captured in a race or milking bail), or if the animal's orientation is constrained by the barrier means, then false positives can be reduced and assigning orientations to planes is simply a process of refinement of an initial estimate.
The orientation of a given plane may also be set relative to other parts of the model, that is, it may be set to be parallel to or perpendicular to other parts. For example, in the case of the tailhead plane, its orientation may be set relative to the pin bones right at the back of the animal. Like the plane positioning process, correct plane orientation can also be determined by rotating or translating the positioned plane and maximizing or minimizing an angle, curvature, length or depth appropriately for the region. Various possible plane positions are shown in Figure 7.
In another embodiment the intersection plane described above may be swept across the model of the animal and anatomical parts of interest identified based on their distinctive shape profile. For instance when identifying the correct plane placement to extract a vertical cut across the tailhead region the plane may be swept across the broad area where this region is expected to be, and then the plane position that maximises the depth between the pin bones and the tail can be selected as the point that represents this region.
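A sketch of that sweep follows, reusing the hypothetical plane_curve() helper from the earlier sketch; the peak/valley heuristic for the pin-bone-to-tail depth is an illustrative assumption.

```python
# Sketch only: sweep candidate section planes along the rear of the animal and
# keep the position whose curve shows the deepest valley between two flanking
# peaks (the pin bones either side of the tail).
import numpy as np

def find_tailhead_plane(vertices, triangles, sweep_positions, normal):
    best_pos, best_depth = None, -np.inf
    for p0 in sweep_positions:                  # candidate plane origins
        curve = plane_curve(vertices, triangles, p0, normal)
        if len(curve) < 9:
            continue
        z = curve[:, 1]
        third = len(z) // 3
        # Height of the flanking peaks versus the central valley.
        depth = min(z[:third].max(), z[-third:].max()) - z[third:-third].min()
        if depth > best_depth:
            best_pos, best_depth = p0, depth
    return best_pos, best_depth
```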
In yet another embodiment a global optimization is applied which minimizes the descriptor distance between candidate locations on anatomical regions while maintaining a feasible object pose. This optimization simultaneously minimizes geometric measurements at the proposed anatomical feature locations. For instance, when applying the method to the problem of body condition scoring and determination of the precise location of the backbone, the height of the backbone ridge is maximised and/or the point which maximises or minimizes curvature (for example the radius of an osculating circle fit to the backbone ridge curve) is selected.
Curvature of other regions such as the hips may also be maximised or minimized, or other geometric measurements or measurements associated with the hip to hip curve may be used, where the area between a straight line connecting the proposed hip points and the curve of the cow's surface may be maximised. Other similar properties of each anatomical region can be exploited.
In one embodiment all of these factors are simultaneously optimized in order to obtain the globally optimal location of the anatomical points of interest while compensating for the deficiencies and limitations of any one approach.
Normalisation
Some animal properties vary between animals due to the size of the animal. However, certain calculations must be independent of the absolute size of the animal. For example, absolute measures such as lengths, depths or distances between parts of the animal will vary with the animal's size as well as with the characteristic being classified. Thus, in many embodiments, the animal's size must be standardised prior to analysis.
To ensure that a curve analysis or measurements taken from curves are independent of the effect of animal size, the size of the particular animal needs to be calculated and the curve data adjusted based on this size. Several geometries or measures of the size of a given animal can be used for this purpose, for example the length of the animal, its width, or its height. The position of the sensor from which the 3D shape information is captured (e.g., the 3D camera) may dictate which of these measurements are available and are sufficiently accurate. In further embodiments the geometries or measures are calculated by comparing multiple frames or images of the animal, or by comparison with a known feature. For instance a ruler or known length could be included in the image field.
Calculation of the animal's height (sometimes known as the animal's stature) can be established through knowledge of how far from the ground a certain part of the animal is. Often when measuring the stature of an animal such as a dairy cow the height at the shoulders is used. If the ground is visible from the perspective of the sensor (for instance a 3D camera) then a ground plane can be fit to 3D points on the ground, and thus the height above the ground for any point on the animal model can be easily calculated. If the depth sensor's position is static then a 3D capture of the area in which the animal stands when the 3D image is taken may also be used to pre-calculate the ground plane for later use in determining the height of any point on
the animal model. That is to say the 3D shape information of the surface S on which the animals are imaged is taken when the animal is not standing on the surface, as shown in Figure 5a. A point cloud may be formed from this image for use in the method. This provides an initial position from which a height can be extrapolated if required.
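A minimal sketch of that pre-computation follows, assuming a simple least-squares plane fitted to the empty-scene cloud of Figure 5a; a robust (e.g. RANSAC-style) fit may suit noisy floors better.

```python
# Sketch only: pre-compute the ground plane from the empty-race capture, then
# measure any animal point's height above it.
import numpy as np

def fit_ground_plane(ground_points):
    """Fit z = a*x + b*y + c to the empty-scene point cloud."""
    A = np.column_stack([ground_points[:, 0], ground_points[:, 1],
                         np.ones(len(ground_points))])
    (a, b, c), *_ = np.linalg.lstsq(A, ground_points[:, 2], rcond=None)
    return a, b, c

def height_above_ground(point, plane):
    a, b, c = plane
    return point[2] - (a * point[0] + b * point[1] + c)
```

Stature can then be taken at a consistent point or aggregated over several points along the backbone, in line with the averaging described below.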
Height may be computed at a consistent location on the animal (just forward of the hips) or an average over the entire length of the backbone from the tailhead forward may be used.
If the sensor's position is not static then a single view containing both the relevant part of the animal and the ground can be used to compute the animal's height. To ensure the robustness of the calculated height several points along the backbone of the animal can be used. In addition, or alternatively, multiple height measurements of the same animal taken over time from successive captures (images) could be aggregated to ensure that any single
unrepresentative measure (e.g. if the cow is standing with an arched back in a given capture) does not distort the results.
Calculation of the animal's width may be preferred as it does not rely on knowledge of where the ground is. Similarly the length of the animal may be used as it does not require knowledge of the position of the ground. However, this does require the sensor to be far enough away from the animal to see its entire length.
While normalisation is required in many embodiments, particularly those which result in a categorisation of the animal, in other embodiments the evaluation of the animal may be based on absolute measurements, such that normalisation of some or all of the data is not required.
Embodiment 2
In another embodiment the image capture, point cloud formation and 3D model generation steps are the same as those described above. However, in this embodiment simpler features are extracted and the process of accurate anatomical point detection and plane placement is avoided.
This method involves finding the rear-most point of the animal. The point cloud surface is then divided into a grid and the height from the ground of each individual point in each grid square is computed and then normalized by the height of the particular animal.
Measures such as the average height, maximum height, minimum height, and standard deviation of the height of the points, may be computed for each grid square.
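A sketch of this grid-based feature extraction follows, assuming (x, y) positions are already expressed relative to the rear-most point of the animal and heights have been normalised by stature; the cell size and grid dimensions are assumptions.

```python
# Sketch only: bin the surface points into grid squares and compute per-square
# height statistics as the feature vector for the ML framework.
import numpy as np

def grid_features(xy, heights, cell=0.1, nx=10, ny=20):
    ix = np.clip((xy[:, 0] / cell).astype(int), 0, nx - 1)
    iy = np.clip((xy[:, 1] / cell).astype(int), 0, ny - 1)
    feats = np.zeros((nx, ny, 4))
    for i in range(nx):
        for j in range(ny):
            h = heights[(ix == i) & (iy == j)]
            if h.size:                        # empty squares stay all-zero
                feats[i, j] = h.mean(), h.max(), h.min(), h.std()
    return feats.ravel()                      # one feature row per animal
```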
The measurements for all grid squares are then entered into the ML framework to calculate the evaluation of the animal, for example by determining the category of the animal. The size of each grid square needs to be large enough to ensure that precise localisation of the individual squares on the surface of the animal does not significantly affect the measurements of the grid. Conversely the squares must not be so large that the discriminative power of the measurements that describe the region and its depressions (or lack thereof) is lost.
If required, further invariance to the precise localisation of the grid can be achieved by interpolating point values between adjacent squares or by weighting the values in the squares near the boundaries of the squares which are increasingly subject to the effects of grid localisation. Depending on the exact camera placement, points near the edges of the model - which may be more susceptible to noise or missing data due to the extreme angle they make between the animal's surface normal at this point and the camera itself - may be omitted.
Normalizing the region under analysis (so as to remove the effect of the size of the animal) may be achieved by ascertaining the animal's height, as is described above. Any curve data calculated may be normalized by a factor derived from the animal's actual height relative to a standardised height.
Those skilled in the art will appreciate that the imaging device 11 and the processing device may be integrated into a single unit, or may be separate. Similarly the function of the processing device 300 may be distributed between two or more processors. The processor may be a computer or microprocessor or logic device.
While many of the embodiments of the invention have been described above with reference to the calculation of the body condition score of a dairy cow, the methods described may also be used for evaluation of other characteristics of a dairy cow and/or for evaluation of various characteristics of other animals.
In a possible embodiment the system may use a matrix or grid which is superimposed on the 3D model of the animal. The method may measure the volume in each matrix grid position, or a shape of the volume in each matrix grid position, to provide an input to the machine learning module 51. The size of the grid may be adjusted depending on the animal or accuracy required,
or a plurality of grid sizes may be used. The grid sizing may be adaptive dependent on the curvature or other aspect of the model.
Referring to Figures 1 and 2, an apparatus for automatically evaluating an animal according to one embodiment of the present invention is generally referenced by arrow 100. In preferred embodiments the animal is a bovine.
The apparatus 100 comprises two spaced apart static barrier means 1. The static barrier means 1 are typically substantially parallel, as shown in Figure 1.
The static barrier means 1 may comprise a prior art cattle race, and are spaced apart sufficiently widely to allow an animal A to comfortably walk between them, but not so widely as to allow the animal A to turn around. At least one automatically moveable barrier means 2 is provided, typically configured as a pair of pneumatically operated doors. The barrier means 2 may be provided as first moveable barrier means at the entrance end of the race (that is, a first end of the static barrier means 1) and/or as second moveable barrier means at the opposite, exit end of the race (at the second, opposite end of the static barrier means 1). The moveable barrier means 2a can be opened to allow animals A to proceed into the space between the static barrier means 1, or closed to prevent animals behind the barrier means 2a from proceeding forward and to prevent animals A in front of the barrier means 2a from moving backward. The moveable barrier means 2b can be opened to allow the animal A to proceed out of the space between the static barrier means 1, or can be closed to bring the animal A to a halt within the space between the static barrier means 1, and to prevent an animal in front of the moveable barrier means 2b from moving backward into that space.
In some embodiments a structure 3 comprising a cover 4 may be provided. The cover 4, if provided, must be sufficiently high that the animal is comfortable walking through the structure 3, but is preferably sufficiently low that some or all of the animal inside the structure is in shadow. In some embodiments the cover 4 may extend partially or fully down the sides of the structure 3. In some embodiments the apparatus 100 may be provided with a walk-over weigh platform (not shown). The apparatus 100 is provided with animal position sensing means for sensing the position of the animal A. In one embodiment the animal position sensing means comprise a photoelectric
sensor 6 located at a first position 7 for sensing when a required portion of the animal has moved through the first moveable barrier means 2a. In one embodiment the first position 7 is spaced apart from the first moveable barrier means 2a or the first end of the static barrier means 1 by a distance which is approximately equal to the length of the animal, for example around 150 cm. The animal position sensing means may also comprise a second photoelectric sensor 9 located at a second position 10 which is substantially adjacent the first moveable barrier means 2a, or if that is not present, is adjacent the first end of the static barrier means 1.

The apparatus 100 comprises a 3D imaging device 11. The 3D camera 11 is positioned such that one or more portions of the animal which are relevant to the evaluation of the animal can be brought within the field of view of the 3D camera 11. These portions of the animal are described herein as the "area of interest". In some embodiments not all of the areas of interest will be within the field of view of the 3D camera simultaneously, but rather, information about each of the areas of interest may be captured at different times.
In some embodiments an artificial lighting source 12 may be provided. The lighting source 12, if provided, is preferably adjustable (preferably automatically) to provide at least a minimum light level required by the 3D camera 11. When measuring characteristics of the animal with the 3D camera it is often preferable that the animal be stationary for a small amount of time in order to improve the accuracy of the measurements taken. Some 3D cameras, for example those based on time-of-flight technology, produce the best results when the object being measured is moving as little as possible. Furthermore, when assessing animal characteristics such as BCS or lameness it is preferable for the animal's pose to be such that it is standing with even weight distribution and with its joints in a consistent position, in order to get an accurate sense of the animal's body structure and shape without the changes in body shape introduced through the animal being in motion. In addition, it may be preferable to stop the animal in order to allow some further interaction with the animal.
Many of the evaluations which can be performed by the apparatus do not need to be performed every time the animal passes through the apparatus, as the evaluation is not likely to change rapidly. For example, in the case of Body Condition Score (BCS) it is sufficient for the animal to be evaluated only once every 3-4 days. Accordingly, the apparatus may allow a certain animal to pass through the apparatus without taking any steps to evaluate it, if certain conditions are present, one of those conditions being whether a new evaluation of the animal is "required". In
this context an evaluation of the animal is said to be "required" if more than a threshold period of time has elapsed since the last evaluation. The threshold period may be changed depending on the result of the last evaluation (for example, a cow which was last assessed as lame may be monitored more frequently than other cows which were not last assessed as lame). If an evaluation is "required" then an evaluation will be performed at the next convenient occasion. However, this does not mean that the apparatus will perform an evaluation of the animal the very next time it passes through the apparatus, if certain other conditions (as described further below) make it impossible or inconvenient to do so. The apparatus 100 may be in communication with a database to record the evaluation, when performed, and to receive information on when the last evaluation was performed and what its result was.
Other conditions which may be used in making the decision on whether or not to evaluate the animal may include the speed at which the animal is moving, and the distance between the animal and any other animals in front or behind. Animals which are too close to other animals may not be evaluated, as the presence of two animals in the field of view of the 3D camera may result in an incorrect evaluation. In addition, the fact that animals are closely spaced together can be an indication that the animals are becoming bottlenecked in the race (i.e. the animals waiting to proceed through the system are being crowded together), perhaps because one animal has not proceeded through the system as quickly as expected or because animals are coming out of the milking shed faster than anticipated. In this circumstance it is desirable to allow the animals to proceed through the system without delaying them, to avoid causing a backlog of animals which would eventually adversely impact operations in the shed and inconvenience the farmer, and so the system may not evaluate any animals (or at least, may not close any of the moveable barriers 2a, 2b) until it detects that a space between the animal currently between the static barriers and the next animal waiting to enter the space between the barriers is at least equal to a predetermined minimum distance. Missing some evaluations during a single milking is not a problem, as properties such as BCS or other metrics change slowly; thus obtaining a measurement once every few days is sufficient.
However in some cases it may be necessary to evaluate every animal if at all possible (for example if the evaluation to be performed includes oestrus detection). In that case gate 2a will close whenever required to ensure separation and valid heat detection results, as timely assessment of oestrus is critical to the farmer.
Closing a moveable barrier 2a, 2b on an adjacent animal, which could injure it, is avoided, as is closing a barrier 2a, 2b at a time which might startle an adjacent animal.
Similarly, animals which are moving rapidly through the apparatus may not be evaluated as they may be moving too fast for accurate information to be collected from the 3D camera, and too fast to safely bring them to a halt by closing the second moveable barrier means 2b.
If the animal is moving relatively slowly, and there is a sufficient distance between the animal and any animals in front of the animal, then the apparatus may close the second moveable barrier means 2b to bring the animal to a complete halt while 3D information of the area(s) of interest is captured. With a sufficient distance between animals, the start of the signal from the animal presence sensor 16 is allowed to initiate the command to close moveable barrier 2b. However, the presence of an animal at drafting gate entrance sensor 13 inhibits this command, preventing moveable barrier 2b from closing in the case where there is insufficient distance between animals. Alternatively, if the animal is moving sufficiently slowly, or voluntarily stops in a convenient position, the apparatus may collect the 3D information without closing the second moveable barrier means 2b. This may occur in particular when the system is used to evaluate cows which are waiting to be milked in a rotary milking shed.

Operation of a preferred embodiment of the apparatus 100 is as follows:
The moveable barrier means doors 2a, 2b are normally in the open position so that an animal A can move past the moveable barrier means 2a and into the field of view of the 3D camera 11. When the animal A has moved past the barrier means 2a the first photoelectric sensor 6 detects the presence of the head or chest of the animal A. Triggering of the first photoelectric sensor 6 may cause the moveable barrier means 2a to close behind the animal A, preventing the animal from moving backwards, and preventing the head of another animal from entering the field of view of the 3D camera. Alternatively the second moveable barrier means 2b may be closed, or both barrier means 2a, 2b may be closed.
Continued forward motion by the animal A moves the rear of the animal beyond the second position 10. When the second photoelectric sensor 9 detects that this has occurred, the 3D camera 11 may be triggered to record one or more images. Alternatively the 3D camera may be triggered a predetermined time after the first photoelectric sensor 6 detects the presence of
the animal, or when a video analysis of the animal shows that the animal is in a suitable position and pose for information to be captured.
In one embodiment the 3D camera 11 records a plurality of images, for example three images. Each image may have a different exposure setting. In another embodiment the system may analyse a video signal to determine when the animal is moving the least and capture an image at that time.
In another embodiment the position sensor may comprise any applicable type of position sensor, for example an acoustic range sensor, a motion sensor, a camera based system performing video analysis, a thermal camera, a depth camera, a laser based device or the like. These may replace one or more of the photoelectric sensors 6, 9.
In another embodiment the position sensor may be capable of assessing the speed at which the animal is moving through the apparatus 100. This may comprise a plurality of photoeye sensors or a video system.
In many embodiments the apparatus 100 will be used in conjunction with an automated drafting gate system 200. If the automated drafting gate system 200 is provided with a sensor 13 (for example a photoelectric sensor) to indicate that the animal has passed through the drafting gate entrance 14, a signal from this sensor 13 may be used to indicate that the moveable barrier means 2a, if closed, can be opened. In one embodiment the apparatus may utilize moveable barrier means which are part of an existing automated drafting gate system 200 as the second moveable barrier means 2b.
In preferred embodiments the system comprises an EID reader 15. A further sensor 16 may be positioned to indicate that the animal is in position for the EID sensor to take a reading of an EID tag associated with the animal. If the EID reader 15 has not obtained a reading within a predetermined time of the sensor 16 indicating that the animal is in position, then the moveable barrier means 2a may be kept closed until the animal has moved past another sensor 17 positioned at an exit of the drafting gates (if provided).
Those skilled in the art will appreciate that when the moveable barrier means 2a open to allow access to a second animal, the animal A which has just been processed by the apparatus 100 will be motivated to move away. In some embodiments further means for motivating the first animal A to move away from the moveable barrier means 2 may be provided, for example a
nozzle configured to squirt compressed air towards the animal, or means (possibly pneumatic) for making a suitable noise.
The 3D camera 11 is in communication with a processing means 300 which performs an analysis of the images taken from the 3D camera 11 to calculate an evaluation of the animal A, and to determine whether an evaluation of the animal is required, and if one is required, whether the correct conditions (e.g. speed, proximity to other animals) are present for an evaluation to occur. In one embodiment the evaluation comprises categorising the animal, for example by calculating a body condition score (BCS) between 1 and 5 for the animal.
In preferred embodiments only the results from certain animals are processed for evaluation. For example, some evaluations may only be performed on animals which have previously been flagged as requiring ongoing monitoring. In preferred embodiments the processing means 300 may send a control signal to the automated drafting gates 200 depending on the result of the evaluation. For example, in one embodiment cows with a normal BCS may be drafted into one area (for example an entrance to a milking shed), while cows with a low BCS may be drafted into an area where additional feed is available. Cows which have failed to be identified by the electronic ID reader may be drafted into a third area.
In some embodiments an animal may not be drafted into a special area as soon as the result of the evaluation indicates that this may be necessary. Instead, a record may be kept that the animal must be drafted at a later time.
In some embodiments the position sensing means may comprise the 3D camera 11 and processing means 300. In these embodiments the first and second sensors 6, 9 may not be required, as the apparatus may be capable of determining when the animal is in the correct position to capture an image of the area of interest without the use of additional sensors. In these embodiments the 3D camera 11 may operate substantially continuously while the apparatus is in use.
In another embodiment (not shown) the position sensing means are operable to determine whether a second animal is within a predetermined distance, for example 100cm, of the moveable barrier means 2a. In one embodiment the position sensing means may comprise a
further photoelectric sensor located substantially 100cm in front of the moveable barrier means 2a.
This additional sensor may be used to determine whether another animal is within a
predetermined distance of the first moveable barrier means 2a. This information may be used to determine if the moveable barrier means 2a, 2b are to be held open (e.g., if the animals are bottlenecked in the area leading to the apparatus), and may also be used to determine that the moveable barrier means 2a, 2b must be closed to ensure separation of the animals (e.g. if the evaluation includes oestrus detection).
In further embodiments the decision on when to open the moveable barrier may be based on further criteria, sensors or characteristics. For example the system may use one or more of the following (a sketch consolidating these criteria follows the list):
- the identity of the cow, or a failure to identify the cow by the time it reaches a certain position;
- whether the cow is in the set of cows in the database being monitored for oestrus;
- the progress of the cow through the system;
- the state of an oestrus heat patch or other device on the cow: for instance activated, inactivated or missing;
- the milking cycle, i.e. am or pm;
- the progress of the cow in front of it, and whether that cow has had its electronic identification read.
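The sketch below consolidates these criteria into one illustrative decision function; every field name, state value and the ordering of the checks are assumptions rather than the patent's specified control logic.

```python
# Sketch only: one illustrative consolidation of the listed opening criteria.
from dataclasses import dataclass

@dataclass
class CowState:
    identified: bool            # EID read before reaching the limit position
    monitored_for_oestrus: bool
    heat_patch_state: str       # "activated", "inactivated" or "missing"
    is_pm_milking: bool
    cow_ahead_cleared: bool     # cow in front has had its EID read and moved on

def open_entry_barrier(cow: CowState) -> bool:
    if not cow.cow_ahead_cleared:
        return False            # the cow ahead is still in the evaluation space
    if not cow.identified:
        return False            # hold until identity is resolved or she drafts
    if cow.monitored_for_oestrus and cow.heat_patch_state != "inactivated":
        return False            # stop her for a clean heat-detection capture
    return True                 # otherwise minimise disruption to cow flow
```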
Processing of the information from the 3D camera to calculate the evaluation of the animal may be performed using any of the embodiments described herein. In some embodiments the processor may use the output from the 3D camera and/or an additional 2D camera to detect the presence of an oestrus indicator on the animal, and may include an analysis of the indicator in the calculation of the evaluation, or as part of a separate evaluation calculation. The oestrus indicator may be a pressure sensitive heat detection patch, or any suitable alternative oestrus indicator. For example, in one embodiment the oestrus indicator may comprise a tail paint marking. In another embodiment the oestrus indicator may comprise a patch which has different infra-red or human visible characteristics when activated.
Those skilled in the art will appreciate that the present invention provides an apparatus and method for automatically evaluating an animal which can be operated independently of a rotary milking shed and which causes minimal disruption to the movement of the animals through the race.
Unless the context clearly requires otherwise, throughout the description and the claims, the words "comprise", "comprising", and the like, are to be construed in an inclusive sense as opposed to an exclusive or exhaustive sense, that is to say, in the sense of "including, but not limited to".
Where in the foregoing description, reference has been made to specific components or integers of the invention having known equivalents, then such equivalents are herein incorporated as if individually set forth.
Although this invention has been described by way of example and with reference to possible embodiments thereof, it is to be understood that modifications or improvements may be made thereto without departing from the spirit or scope of the accompanying claims.
Claims
1. A method of calculating an evaluation of an animal, the method comprising the steps of:
Receiving three dimensional (3D) shape information corresponding to a space occupied by the animal;
Creating a 3D point cloud from the 3D shape information;
Creating a 3D model based on the shape information;
Calculating one or more representative measurements from the 3D model; and
Evaluating the animal based on the measurements.
2. The method of claim 1 comprising the step of smoothing and/or filtering either one of or both of:
the 3D shape information prior to the step of creating the 3D point cloud; and/or
the 3D point cloud prior to the step of forming the 3D model.
3. The method of either one of claims 1 or 2 wherein the step of receiving 3D shape information comprises receiving information from a plurality of frames of 3D shape information.
4. The method of any one of claims 1 to 3 comprising the step of identifying data corresponding to one or more anatomical features or regions.
5. The method of claim 4 wherein the step of identifying data corresponding to one or more anatomical regions comprises comparing the data to one or more 3D feature descriptors.
6. The method of either one of claims 4 or 5 wherein an orientation of the animal is calculated from the relative positions of the anatomical regions, or from an analysis of the outline of the animal.
7. The method of any one of claims 4 to 6 wherein the step of identifying data corresponding to one or more anatomical regions comprises the step of excluding results which result in one or more parameters of the 3D model falling outside predetermined limits.
8. The method of claim 7 wherein the one or more parameters comprise one or more of a predetermined angle, curvature, length or depth, or a relationship to one or more other anatomical regions.
9. The method of any one of claims 4 to 8 wherein the step of identifying data corresponding to one or more anatomical regions comprises performing a global optimisation to minimise or maximise, within predetermined limits, a distance between preselected anatomical features.
10. The method of any one of claims 4 to 8 wherein the step of identifying data corresponding to one or more anatomical regions comprises performing a global optimisation to minimise or maximise, within predetermined limits, a geometric descriptor of the 3D model.
11. The method of any one of claims 4 to 8 wherein the step of identifying data corresponding to one or more anatomical regions comprises the step of modelling an intersection plane through the region of interest to define two dimensional shape information.
12. The method of claim 11 wherein the step of calculating one or more representative measurements from the 3D model comprises the step of fitting a curve to the two dimensional shape information.
13. The method of any one of claims 1 to 12 comprising the step of normalizing the model prior to (or simultaneously with) the step of calculating the one or more representative measurements.
14. The method of claim 13 wherein the step of normalizing the model comprises the step of determining one or more geometries of the animal.
15. The method of claim 14 wherein the step of determining one or more geometries of the animal comprises the step of determining a position of a surface on which the animal is standing and/or comprises the step of averaging data from several frames and/or comprises a comparison with a feature of known size.
16. The method of claim 15 wherein the step of determining a position of a surface on which the animal is standing comprises the step of receiving 3D shape information from the surface when the animal is not standing on the surface.
17. The method of any one of claims 1 to 16 wherein the step of calculating one or more representative measurements from the 3D model comprises the step of modelling an intersection plane through the one or more anatomical regions.
18. The method of claim 17 wherein the step of modelling an intersection plane through the region of interest comprises the step of rotating or translating the intersection plane to maximise or minimise a selected parameter.
19. The method of any one of claims 1 to 18 wherein the one or more representative measurements comprise one or more of the average height, maximum height or minimum height of one or a plurality of grid squares.
20. An apparatus for calculating an evaluation of an animal comprising a three dimensional (3D) imaging device for collecting 3D shape information corresponding to a space occupied by the animal and a processing device in communication with the 3D imaging device which is configured to:
Create a 3D point cloud from the 3D shape information;
Create a 3D model based on the shape information;
Calculate one or more representative measurements from the 3D model; and
Calculate the evaluation based on the measurements.
Applications Claiming Priority (4)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| NZ71109215 | 2015-08-17 | | |
| NZ711092 | 2015-08-17 | | |
| NZ711098 | 2015-08-17 | | |
| NZ711098A (en) | 2015-08-17 | | Apparatus and method for automatically evaluating an animal |
Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2017030448A1 (en) | 2017-02-23 |
Family
ID=58051110
Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/NZ2016/050129 (WO2017030448A1) | Method and apparatus for evaluating an animal | 2015-08-17 | 2016-08-17 |
Country Status (1)

| Country | Link |
|---|---|
| WO | WO2017030448A1 (en) |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20050136819A1 (en) * | 2002-08-02 | 2005-06-23 | Kriesel Marshall S. | Apparatus and methods for the volumetric and dimensional measurement of livestock |
US20110279650A1 (en) * | 2008-12-03 | 2011-11-17 | Bohao Liao | Arrangement and method for determining a body condition score of an animal |
WO2010127023A1 (en) * | 2009-05-01 | 2010-11-04 | Spicola Tool, Llc | Remote contactless stereoscopic mass estimation system |
US20140029808A1 (en) * | 2012-07-23 | 2014-01-30 | Clicrweight, LLC | Body Condition Score Determination for an Animal |
WO2016023075A1 (en) * | 2014-08-13 | 2016-02-18 | Meat & Livestock Australia Limited | 3d imaging |
Cited By (27)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2019003015A1 (en) * | 2017-06-29 | 2019-01-03 | The Gsi Group Llc | Regression-based animal weight estimation |
US10816387B2 (en) | 2017-06-29 | 2020-10-27 | The Gsi Group Llc | Regression-based animal weight estimation |
US11819008B2 (en) | 2017-08-16 | 2023-11-21 | Adams Land & Cattle Co. | Livestock sorting facility |
WO2019036532A1 (en) * | 2017-08-16 | 2019-02-21 | Adams Land & Cattle Co. | Livestock sorting facility |
US11324203B2 (en) | 2017-08-16 | 2022-05-10 | Adams Land & Cattle Co. | Livestock sorting facility |
CN108416260A (en) * | 2018-01-25 | 2018-08-17 | 北京农业信息技术研究中心 | A kind of 3-D view monitoring device and method |
JP7062277B2 (en) | 2018-04-24 | 2022-05-06 | 国立大学法人 宮崎大学 | Cow body condition score evaluation device, evaluation method and evaluation program |
JP2019187277A (en) * | 2018-04-24 | 2019-10-31 | 国立大学法人 宮崎大学 | Evaluation device, evaluation method and evaluation program of body condition score of cow |
CN109238264B (en) * | 2018-07-06 | 2020-09-01 | 中国农业大学 | Livestock position and posture normalization method and device |
CN109238264A (en) * | 2018-07-06 | 2019-01-18 | 中国农业大学 | A kind of domestic animal posture method for normalizing and device |
CN111353416B (en) * | 2020-02-26 | 2023-07-07 | 广东温氏种猪科技有限公司 | Gesture detection method, system and storage medium based on livestock three-dimensional measurement |
CN111353416A (en) * | 2020-02-26 | 2020-06-30 | 广东温氏种猪科技有限公司 | Posture detection method, system and storage medium based on livestock three-dimensional measurement |
WO2021259886A1 (en) * | 2020-06-25 | 2021-12-30 | Signify Holding B.V. | A sensing system for determining a parameter of a set of animals |
CN112036364A (en) * | 2020-09-14 | 2020-12-04 | 北京海益同展信息科技有限公司 | Limp home recognition method and device, electronic device and computer-readable storage medium |
CN112036364B (en) * | 2020-09-14 | 2024-04-16 | 京东科技信息技术有限公司 | Lameness recognition method and device, electronic equipment and computer readable storage medium |
US11910784B2 (en) | 2020-10-14 | 2024-02-27 | One Cup Productions Ltd. | Animal visual identification, tracking, monitoring and assessment systems and methods thereof |
CN112825791B (en) * | 2020-12-25 | 2023-02-10 | 河南科技大学 | Milk cow body condition scoring method based on deep learning and point cloud convex hull characteristics |
CN112825791A (en) * | 2020-12-25 | 2021-05-25 | 河南科技大学 | Milk cow body condition scoring method based on deep learning and point cloud convex hull characteristics |
US11425892B1 (en) | 2021-08-18 | 2022-08-30 | Barel Ip, Inc. | Systems, methods, and user interfaces for a domestic animal identification service |
WO2023031759A1 (en) | 2021-09-02 | 2023-03-09 | Lely Patent N.V. | Animal husbandry system |
US12121007B2 (en) | 2021-09-07 | 2024-10-22 | Hill's Pet Nutrition, Inc. | Method for determining biometric data relating to an animal based on image data |
CN114266811A (en) * | 2021-11-26 | 2022-04-01 | 河南讯飞智元信息科技有限公司 | Livestock body condition scoring method and device, electronic equipment and storage medium |
PL442537A1 (en) * | 2022-10-17 | 2024-04-22 | Szkoła Główna Gospodarstwa Wiejskiego w Warszawie | Method for cattle condition scoring |
WO2024088479A1 (en) | 2022-10-28 | 2024-05-02 | Technische Universität München, Körperschaft des öffentlichen Rechts | Method and device for automated recording and analysis of the gait pattern of an animal |
DE102022128733A1 (en) | 2022-10-28 | 2024-05-08 | Technische Universität München, Körperschaft des öffentlichen Rechts | Method for automated recording and analysis of the gait of an animal |
CN116883328B (en) * | 2023-06-21 | 2024-01-05 | 查维斯机械制造(北京)有限公司 | Method for quickly extracting spine region of beef carcass based on computer vision |
CN116883328A (en) * | 2023-06-21 | 2023-10-13 | 查维斯机械制造(北京)有限公司 | Method for quickly extracting spine region of beef carcass based on computer vision |
Similar Documents

| Publication | Title |
|---|---|
| WO2017030448A1 (en) | Method and apparatus for evaluating an animal |
| CA2744146C (en) | Arrangement and method for determining a body condition score of an animal |
| US9737040B2 (en) | System and method for analyzing data captured by a three-dimensional camera |
| US10373306B2 (en) | System and method for filtering data captured by a 3D camera |
| US20120272903A1 (en) | System and Method for Improved Attachment of a Cup to a Dairy Animal |
| US10303939B2 (en) | System and method for filtering data captured by a 2D camera |
| WO2016023075A1 (en) | 3d imaging |
| CA2775395C (en) | Vision system for robotic attacher |
| Ruchay et al. | Accurate 3d shape recovery of live cattle with three depth cameras |
| US9171208B2 (en) | System and method for filtering data captured by a 2D camera |
| EP4402657A1 (en) | Systems and methods for the automated monitoring of animal physiological conditions and for the prediction of animal phenotypes and health outcomes |
| US9681634B2 (en) | System and method to determine a teat position using edge detection in rear images of a livestock from two cameras |
| EP3266371A1 (en) | Method of and apparatus for diagnosing leg pathologies in quadrupeds |
| US20230342902A1 (en) | Method and system for automated evaluation of animals |
| NZ711098A (en) | Apparatus and method for automatically evaluating an animal |
| CA2849212C (en) | Vision system for robotic attacher |
| Yuan et al. | Stress-free detection technologies for pig growth based on welfare farming: A review |
| Zhao et al. | Real-time automatic classification of lameness in dairy cattle based on movement analysis with image processing technique |
| WO2023180587A2 (en) | System and method for detecting lameness in cattle |
| CA2898603C (en) | Vision system for robotic attacher |
Legal Events
| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16837377; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 16837377; Country of ref document: EP; Kind code of ref document: A1 |