
WO2012121735A1 - Apparatus and method of targeting small weapons - Google Patents


Info

Publication number
WO2012121735A1
Authority
WO
WIPO (PCT)
Prior art keywords
target
data
small weapon
weapon
targeting system
Prior art date
Application number
PCT/US2011/027986
Other languages
French (fr)
Inventor
Lorenzo Tessiore
Adalberto Foresti
Original Assignee
Tesfor, Llc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tesfor, Llc
Priority to PCT/US2011/027986
Publication of WO2012121735A1


Classifications

    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41 - WEAPONS
    • F41G - WEAPON SIGHTS; AIMING
    • F41G1/00 - Sighting devices
    • F41G1/38 - Telescopic sights specially adapted for smallarms or ordnance; Supports or mountings therefor
    • F - MECHANICAL ENGINEERING; LIGHTING; HEATING; WEAPONS; BLASTING
    • F41 - WEAPONS
    • F41G - WEAPON SIGHTS; AIMING
    • F41G3/00 - Aiming or laying means
    • F41G3/12 - Aiming or laying means with means for compensating for muzzle velocity or powder temperature with means for compensating for gun vibrations

Definitions

  • the present disclosure generally relates to targeting of firearms and more particularly but not exclusively relates to an electronic assisted targeting system for firearms.
  • Figure 1 illustrates an operator 100 targeting a handgun 102 with a front gun sight 104 and a rear gun sight 106 according to the prior art.
  • the operator 100 is aiming the handgun 102 at a distant target 108.
  • the proper operation of the gun sights 104, 106 allows the operator 100 to create a line of sight along the barrel of the handgun 102.
  • the line of sight is substantially parallel to the barrel.
  • the operator 100 forms a line in the target direction from his eye, through the nadir of the rear sight 106, through the front sight 104, and to the target 108. If the handgun 102 is aimed along such a line of sight, then, but for distance, wind, and other such factors, the operator 100 can reasonably expect that firing the weapon 102 will result in a bullet striking the target 108.
  • Figure 2 illustrates an operator 100 targeting a grenade launcher 110 with an optical scope 112.
  • the operator has taken a prone position in order to aim and fire the weapon 110.
  • a support 114 assists the operator 100 with holding the grenade launcher 110 so as to more accurately aim at the target 108, the window of a building in the distance.
  • the operator 100 of the weapon 110 creates a substantially parallel line of sight in the target direction along the barrel of the grenade launcher 110.
  • the optical scope may include reticle marks for aiming and distance measurements, magnifying lenses, parallax reducing adjustments, illuminating light sources, night vision, or other features. If the grenade launcher 110 is aimed at the target 108 according to the image viewed through the optical scope 112, then the operator 100 can reasonably expect that firing the weapon 110 will result in a grenade striking the target 108.
  • Figure 3 illustrates an operator 100 targeting a rifle 116 with a laser sight 118 according to the prior art.
  • the laser sight 118 operates according to similar principles as the mechanical iron sights 104, 106 of Figure 1 and the optical scope 112 of Figure 2.
  • the laser sight 118 generates a beam of generally non-diverging laser light in the target direction that is substantially parallel to the barrel of the weapon 116.
  • the operator 100 points the weapon 116 in the desired direction and looks for the point 120 where the laser beam strikes the target.
  • if the rifle 116 is aimed such that the laser point 120 strikes the target 108 as shown, then, but for wind, distance, or other such factors, firing the rifle 116 will result in the bullet striking the target.
  • Embodiments of the intelligent small weapon targeting system disclosed in the present application illustrate and describe electronic and electro-mechanical devices and methods to improve targeting of small weapons. That is, in some cases, the small weapon becomes an intelligent system that automatically locks on a target and fires when ready. Fewer bullets need be spent while targets are hit faster and more precisely.
  • the intelligent small weapon targeting system provides an artificial intelligence (AI) layer to participate in the interaction between the small weapon targeting system (e.g., scope), the input (e.g., trigger), the small weapon operator, and the small weapon output (e.g., firing pin strike, combustion, bullet release).
  • the intelligent small weapon targeting system interprets the operator's intent and can aid the operator in timing the firing action or even automatically releasing the bullet when a target is acquired and a shot is deemed effective.
  • the AI layer is partially or completely transparent to the small weapon operator.
  • the intelligent small weapon targeting system augments the effective operation of a conventional small weapon.
  • the operator typically does not need additional training, and in some cases, even reduced training can be provided.
  • the operator may pull the trigger to fire the small weapon in a conventional manner, and many of the other functions of the small weapon continue to operate just as before; however, the aiming and, in some cases, the instant of firing are assisted by the AI system.
  • Embodiments of the intelligent small weapon targeting system include an AI layer running in an embedded electronics system coupled to or integrated with the small weapon.
  • the intelligent small weapon targeting system can acquire input data via one or more cameras, accelerometers, environment sensors, and the like. It can use a series of frames of image input data from the cameras to identify and track a target. It can also correlate the image input data with the motion input data from the accelerometers to determine particular information such as the relative motion of the background, the designated target, and characteristics of the weapon itself.
  • the motion data, environment data, and performance information related to the small weapon can be used to calculate a trajectory and location in the image data where a fired projectile would travel and strike.
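As a rough illustration of the kind of trajectory calculation described above, the sketch below estimates how far a projectile would drop and drift before reaching the target under a simplified flat-fire model; the function name, muzzle velocity, distance, and wind values are hypothetical and not taken from the patent.

```python
def impact_offset(distance_m: float, muzzle_velocity_mps: float,
                  crosswind_mps: float = 0.0, g: float = 9.81):
    """Estimate vertical drop and lateral wind drift at the target distance.

    Simplified flat-fire model: constant projectile speed and constant gravity,
    with wind drift taken as a crude upper bound (wind speed times time of
    flight). A real system would use drag tables and live sensor data.
    """
    time_of_flight = distance_m / muzzle_velocity_mps   # seconds
    drop = 0.5 * g * time_of_flight ** 2                # meters, downward
    drift = crosswind_mps * time_of_flight              # meters, crude upper bound
    return drop, drift

if __name__ == "__main__":
    drop, drift = impact_offset(distance_m=300.0, muzzle_velocity_mps=850.0,
                                crosswind_mps=4.0)
    print(f"drop ~ {drop:.2f} m, wind drift <= {drift:.2f} m")
```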
  • the intelligent small weapon targeting system can also assess the information, determine a particular instant or range of time during which to fire the weapon so as to hit the target, and present a firing decision signal to the operator. In some cases, the intelligent small weapon targeting system will automatically fire the small weapon.
  • the intelligent small weapon targeting system can compensate for many factors such as operator or environment induced tremors, distance, relative movement of the small weapon or target, and other environmental factors.
  • the small weapon will perform better even in low visibility conditions and can better compensate for optical effects such as parallax and for physical effects such as bullet drop, wind, and other effects.
  • the intelligent small weapon targeting system may also account for a configurable "confidence" threshold, which can speed or delay the actual firing of the small weapon to a point when the system has concluded, to a determined confidence threshold, that the intended target will be hit.
  • Some embodiments of the intelligent small weapon targeting system will have a manual but "assisted" mode. In such embodiments, rather than firing automatically when ready, the system will provide the operator with visual cues, auditory cues, tactile cues, or some other indication that the system has identified a reasonable time to fire.
  • the intelligent small weapon targeting system can further accommodate different and variable tactical requirements. That is, in some cases, parameters that direct the operation of the system are determined, pre-loaded, and unchangeable, and in other cases, some or all of the parameters may be changed by an operator of the small weapon. Many benefits are derived from the small weapon equipped with the intelligent small weapon targeting system described herein. Traditionally, a small weapon is considered to have the capability to strike targets of particular size, moving at a particular speed, and located within a particular distance based on the type, size and construction of the particular small weapon.
  • the performance of a small weapon equipped with the intelligent small weapon targeting system described herein can be contrasted with the performance of even a skilled operator using the same small weapon while shooting at small or distant moving targets.
  • when a small weapon is equipped with the intelligent small weapon targeting system, nearly any target within the weapon's specification limits of target size, speed, and distance can be hit quickly with substantial accuracy.
  • targets previously not thought to be within the range of a particular small weapon based on their distance, movement, or size can also be hit with substantial accuracy.
  • a novice operator of a small weapon equipped with the intelligent small weapon targeting system can be just as quick and accurate as a seasoned veteran, or even more so.
  • a soldier newly graduated from boot camp can shoot as accurately as a seasoned special operations warrior because the difference in skill between the two fighters is evened out by the capability of the intelligent small weapon targeting system.
  • the intelligent small weapon targeting system can reduce the threshold of human skill necessary for effective use in military, paramilitary, and non-military situations. Human reflexes can only be pushed so far, training is expensive, and human resources are precious and difficult to replace. For example, small weapons are often used in firefight situations where, under extreme time pressure, accuracy of a shot makes a life saving difference. In such situations, the effective use of an intelligent small weapon targeting system can improve the outcome regardless of the amount of training or experience that the operator has received.
  • the present embodiments address several problems of traditional targeting of small weapons by using new procedures and devices that provide more accurate targeting information to a controllable firing mechanism.
  • Targeting is improved with methods and apparatus that electronically recognize and track potential targets.
  • Visual feedback corresponding to the recognition and tracking is provided to the operator of the small weapon.
  • audio feedback is provided to the operator, and in some cases, tactile feedback may also be provided to the operator.
  • An intelligent small weapon targeting system may be summarized as including an imaging system coupled to a small weapon, a memory configured to store a program to target the small weapon, and a processor.
  • the processor is operable to execute instructions of the program, which direct the processor to process a first set of imaging data generated by the imaging system to produce processed image data, identify a target in the processed image data, predict whether a projectile fireable from the small weapon will hit the target, and track the target in a second set of imaging data.
  • a method of targeting a small weapon may be summarized as including acquiring a first set of image input data produced by one or more cameras; acquiring motion data produced by one or more accelerometers; acquiring environment data produced by one or more environment sensors; correlating the motion data with the first set of image input data to produce processed image data; identifying a target within the processed image data; tracking the target in a second set of image input data; calculating a location in the second set of image input data where a projectile fired from the small weapon would strike if the small weapon were fired, the calculating being performed based on the motion data, the environment data, and performance information related to the small weapon; determining, based on the location in the second set of image input data, a time instant for firing the weapon to hit the target; and presenting a firing decision signal representative of the time instant.
  • Computer-readable medium embodiments storing a program to target a small weapon may be summarized as comprising logic configured to perform the steps of: enabling an imaging system coupled to the small weapon; processing a first set of imaging data generated by the imaging system; identifying a target in the first set of imaging data, the identified target located in a projectile's path, the projectile being fireable from the small weapon; and tracking the target in a second set of imaging data.
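To make the ordering of the summarized method steps concrete, here is a minimal, self-contained sketch in Python; every function, field, and constant below is hypothetical, and the "frames" are reduced to (x, y) target detections rather than real image data.

```python
from dataclasses import dataclass

@dataclass
class FiringDecision:
    target_locked: bool
    fire_delay_s: float  # suggested delay from "now" until the firing instant

def correlate(frames, motion_samples):
    """Placeholder: subtract the average measured shake from each detection."""
    shake = sum(motion_samples) / len(motion_samples) if motion_samples else 0.0
    return [(x - shake, y - shake) for (x, y) in frames]

def target_small_weapon(frames_1, motion, environment, frames_2, weapon):
    """Run the summarized method steps in order on toy (x, y) detections."""
    processed = correlate(frames_1, motion)          # produce processed image data
    target = processed[-1]                           # identify a target
    track = correlate(frames_2, motion)[-1]          # track it in the second frame set
    # Calculate where a projectile would strike: the tracked position shifted by a
    # wind- and distance-dependent offset (hypothetical flat-fire approximation).
    drift = environment["wind_mps"] * environment["distance_m"] / weapon["muzzle_velocity_mps"]
    impact = (track[0] + drift, track[1])
    miss = abs(impact[0] - target[0])                # compare impact point to the target
    locked = miss < weapon["hit_radius"]
    # Present a firing decision signal representative of the chosen time instant.
    return FiringDecision(target_locked=locked, fire_delay_s=0.0 if locked else 0.25)

if __name__ == "__main__":
    decision = target_small_weapon(
        frames_1=[(10.0, 5.0), (10.5, 5.2)], motion=[0.2, 0.1],
        environment={"wind_mps": 2.0, "distance_m": 200.0},
        frames_2=[(10.6, 5.1)],
        weapon={"muzzle_velocity_mps": 850.0, "hit_radius": 1.0})
    print(decision)
```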
  • Figure 2 illustrates an operator targeting a grenade launcher with an optical scope in a conventional way
  • Figure 3 illustrates an operator targeting a rifle with a laser sight in a conventional way
  • FIG. 4 illustrates components of an intelligent small weapon targeting system
  • FIG. 5A illustrates components of an intelligent small weapon targeting system in more detail
  • Figure 5B illustrates an intelligent small weapon targeting system embodied in an electronic-optical scope
  • FIG. 6 illustrates an automatic projectile release (APR) feature flowchart
  • Figure 7 illustrates a system architecture including non-limiting embodiments of hardware modules, software modules, and signals
  • Figure 8 illustrates an activity diagram of an embodiment of the intelligent small weapon targeting system architecture
  • Figure 9 illustrates a functional flow sequence diagram for a round robin scheduler.
  • improvements are provided to targeting small weapons that reduce the reliance on human and situational factors.
  • the intelligent small weapon targeting system tracks the path of a target in real time and presents a firing decision upon identifying the target with a threshold level of accuracy.
  • the system in the present embodiment includes an electro-mechanical trigger and a computer vision system with one or more cameras aimed in a target direction and one or more CPUs to process the acquired images. Additional environment sensors may also be included.
  • When the trigger of the small weapon is squeezed, and when the intelligent small weapon targeting system is operational (i.e., powered ON and active), the computer vision system is engaged to track a target. After successfully identifying and locking on a target, the intelligent small weapon targeting system will automatically present a firing determination. In a fully automatic mode, the firing determination results in a firing command to the weapon output. In a semi-automatic mode, the firing determination results in an indication to the weapon operator. If the intelligent small weapon targeting system is not operational (e.g., turned OFF or malfunctioning), the small weapon operates according to its conventional, mechanically operated firing scheme.
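The mode-dependent handling of a firing determination described above could be organized roughly as follows; the mode names and callback signatures are hypothetical, not the patent's actual interfaces.

```python
from enum import Enum, auto

class Mode(Enum):
    OFF = auto()              # conventional, mechanically operated firing only
    SEMI_AUTOMATIC = auto()   # firing determination becomes an indication to the operator
    FULLY_AUTOMATIC = auto()  # firing determination becomes a firing command

def handle_firing_determination(mode: Mode, target_locked: bool,
                                fire_weapon, notify_operator) -> None:
    """Dispatch a positive firing determination according to the operating mode."""
    if mode is Mode.OFF or not target_locked:
        return                      # nothing for the targeting system to do
    if mode is Mode.FULLY_AUTOMATIC:
        fire_weapon()               # command the weapon output directly
    else:
        notify_operator()           # cue the operator instead of firing

if __name__ == "__main__":
    handle_firing_determination(Mode.SEMI_AUTOMATIC, target_locked=True,
                                fire_weapon=lambda: print("fire command"),
                                notify_operator=lambda: print("cue: ready to fire"))
```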
  • Figure 4 illustrates components of an intelligent small weapon targeting system 122 that includes an electronic scope 124, an electromechanical trigger 160 and a trigger actuator 158.
  • an electronic scope 124 has integrated electronic components.
  • the electronic scope 124 may be formed and packaged in a conventional electronic scope housing or some other electronic scope housing.
  • the electronic scope and the electronic components may be formed and packaged in an arrangement that is cooperatively coupled to the electronic scope.
  • the electro-mechanical trigger 160 and trigger actuator 158 illustrated in Figure 4 interact with the electronic scope to fire the weapon.
  • the electro-mechanical trigger 160 has two positions, Engage and Fire.
  • the electro-mechanical trigger 160 may also have a third position, which can be described as inactive or not engaged.
  • the third position is active when the operator is holding the weapon but has not yet aimed at a target and the system is not yet in operation. This third position may actually be the resting position of the trigger, and it is considered inactive.
  • When the operator identifies a target, he can cause the weapon to enter the engaged position. In one example, he will enter the engaged position by touching the trigger lightly with his finger.
  • the trigger might be slightly depressed, or it may remain stationary and sense the presence of his finger based on heat, blood flow, fingerprint data along the length of his finger, or other biometric data; sensing such data without requiring the trigger to move may cause the system to enter the engaged position.
  • Once the trigger enters the engaged position, it will send a signal on engage line 159 and the system will begin to interact with the electronic scope as described herein.
  • When the operator decides to fire, he may cause the trigger 160 to enter the fire mode. In one embodiment, he will do this by applying more pressure to the electro-mechanical trigger 160, and the system will enter the fire mode.
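A sketch of the three-position trigger behaviour (inactive, engaged, fire) as a small state machine follows; the pressure thresholds and transition rules are hypothetical choices made only for illustration.

```python
from enum import Enum, auto

class TriggerState(Enum):
    INACTIVE = auto()  # resting position; system not yet in operation
    ENGAGED = auto()   # light touch or partial press; tracking begins
    FIRE = auto()      # further pressure; weapon may fire if a target is locked

# Allowed transitions for the illustrative three-position trigger.
_ALLOWED = {
    TriggerState.INACTIVE: {TriggerState.ENGAGED},
    TriggerState.ENGAGED: {TriggerState.FIRE, TriggerState.INACTIVE},
    TriggerState.FIRE: {TriggerState.ENGAGED, TriggerState.INACTIVE},
}

def advance(state: TriggerState, pressure: float) -> TriggerState:
    """Map a normalized trigger pressure (0.0 to 1.0) onto the three positions."""
    if pressure >= 0.8:
        wanted = TriggerState.FIRE
    elif pressure >= 0.2:
        wanted = TriggerState.ENGAGED
    else:
        wanted = TriggerState.INACTIVE
    return wanted if wanted is state or wanted in _ALLOWED[state] else state

if __name__ == "__main__":
    state = TriggerState.INACTIVE
    for p in (0.0, 0.3, 0.9, 0.1):
        state = advance(state, p)
        print(f"pressure={p:.1f} -> {state.name}")
```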
  • electro-mechanical triggers will have three or more operating positions and in other embodiments, electro-mechanical triggers will have only a single operating position.
  • electromechanical trigger 160 may be substantially an electronic component with very little or no mechanical structure. It may be composed solely of electronic circuitry that forms the trigger mechanism. Thus, in some cases, electromechanical trigger 160 may be operated by a user manually displacing the trigger 160 to enter the different modes of operation, and in other cases, the electro-mechanical trigger 160 may enter the different modes of operation by receiving various electrical signal inputs.
  • the electro-mechanical trigger 160 is formed in a single device, visually resembling a conventional small weapon trigger. In other embodiments, the electro-mechanical trigger 160 may have a different physical shape or even no exposed physical structure at all. For example, in some cases the electro-mechanical trigger 160 is configured to receive electronic input. It might also be a simple bar or sensor on the stock, below the barrel, or some other location. When the weapon is held in an intended position for use, this might be sufficient to enter the engaged mode.
  • the electro-mechanical trigger 160 is formed from two or more distinct and separate devices.
  • a first device of the electro-mechanical trigger 160 can provide an indication of a first position
  • a second device of the electro-mechanical trigger 160 can provide an indication of a second position
  • the different position-input-producing devices of the electro- mechanical trigger 160 can be cooperatively configured into any number of separate devices.
  • An electrical signal input to the electro-mechanical trigger 160 or to other devices of the intelligent small weapon targeting system 122 includes any known electrical signaling methods.
  • the signaling can be electrical, optical, or some other structure.
  • the signals can be asserted by a determined presence of a voltage or an absence of a voltage thereof.
  • the electronic scope 124 of Figure 4 includes a target acquisition and tracking module 126 and a firing decision engine 128.
  • the target acquisition and tracking module 126 operates on imaging data input produced by active components of the electronic scope 124 (e.g., a CCD or other imaging device). From the imaging data, a particular target can be identified (i.e., acquired), and the target can be tracked as additional imaging data is captured and processed.
  • a duck hunter may notice a flurry of activity when a flock of ducks takes flight.
  • the hunter can raise her rifle, which is equipped with an intelligent small weapon targeting system 122 coupled to an electronic scope 124, and point the rifle toward the flock.
  • the hunter may place her eye so as to see the flock through the electronic scope 124, or she may merely point her weapon with scope pointed in the direction of the flock.
  • the small weapon targeting system 122 is able to identify one particular duck in the center of the viewing window of her electronic scope 124 and consider this a potential target.
  • a partial press of the electromechanical trigger 160 to a first position will engage the intelligent small weapon targeting system 122 so as to enable the electronics of the electronic scope 124.
  • the target acquisition and tracking module 126 can retrieve a first set of imaging data frames representing a scene in the target direction that is viewed through the electronic scope 124.
  • the target acquisition and tracking module 126 can further identify a representation of the target (e.g., a specific duck) in the imaging data.
  • the target (e.g., the identified duck) can be tracked.
  • the target acquisition and tracking module 126 provides image data output to the firing decision engine 128.
  • based on the image data output and the operational parameters (e.g., user settings, environmental data, etc.), a real time, dynamic determination is made as to the destination of a projectile if the projectile were fired "now," and whether or not the projectile would hit the target (i.e., the specific duck) being tracked by the target acquisition and tracking module 126.
  • the firing decision engine 128 will assert a "locked" indication.
  • the firing decision engine 128 provides particular predictive feedback data to the target acquisition and tracking module 126.
  • the feedback data, which relates to a predictive position of the target, may be used by the target acquisition and tracking module 126 to enhance the viewing, tracking, and aiming characteristics for the weapon's operator. For example, audible or visual enhancements to beep, colorize, flicker, blink, apply a border around, or provide some other real time indications of the acquired, tracked, and locked target can be presented to the operator through the electronic scope 124.
  • a "Fire" determination signal is output on line 161 and presented to the electro-mechanical trigger 160. If the electro- mechanical trigger 160 of Figure 4 is also in the fire position, the combination of the trigger being in the fire position, for example, a full trigger press, and the firing determination from the electronic scope 122 on line 161 will permit the intelligent small weapon targeting system 122 to present an affirmative firing directive on line 157 to the trigger actuator 158.
  • the intelligent small weapon targeting system 122 of Figure 4 is used to accurately acquire and track a specific duck as a desirable target when the hunter partially presses the electro-mechanical trigger 160 to a first position.
  • the firing decision engine 128 evaluates environmental conditions, system parameters, user parameters, and other parameters and determines whether or not an immediate firing of the weapon will result in hitting the duck with a substantial likelihood of success that exceeds a determined threshold level. If so, then a "Fire" (e.g., "locked") determination is output on line 161 to the electromechanical trigger 160.
  • the locked determination may also result in an indication to the hunter through the electronic scope 124, such as a beep or flashing icon initiated via feedback to the target acquisition and tracking module 126.
  • If the hunter has previously fully pressed the electro-mechanical trigger 160 to the second position (fire) when a "Fire" signal arrives, the trigger 160 will output a fire signal on line 157 at the instant of receiving the signal from the electronic scope 124. The signal will be presented to the trigger actuator 158, and her rifle will fire with a substantial likelihood that the duck will be hit and brought to the ground. On the other hand, if the fire signal is already present on line 161 at the time the trigger is put into the fire mode by the operator, then this will cause the trigger 160 to immediately output a signal on line 157 to fire the weapon.
  • the electro-mechanical trigger 160 has an override position that permits the weapon operator to override the intelligent small weapon targeting system 122 and fire the weapon according to the will of the operator.
  • the override can occur as the result of a third position in the electro- mechanical trigger 160, a separate trigger, or by some other mechanism.
  • the override can occur as the result of the intelligent small weapon targeting system 122 or some part thereof being deactivated, disengaged, or otherwise determined to be malfunctioning. If the trigger 160 is in override mode, then upon the operator placing the trigger in the fire position, this will output the fire signal on line 157 and cause the weapon to immediately fire.
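The gating just described (a "Fire" determination on line 161 combined with the trigger's fire position, plus the override path) can be captured in a few lines; the function and parameter names below are hypothetical.

```python
def firing_directive(scope_fire_signal: bool, trigger_in_fire_position: bool,
                     override_active: bool = False) -> bool:
    """Return True when a fire directive should be presented to the trigger actuator.

    The actuator is driven either when the operator holds (or reaches) the fire
    position while the scope's "Fire" determination is present, or when the
    override is active and the operator manually puts the trigger in the fire
    position, bypassing the targeting system.
    """
    if override_active:
        return trigger_in_fire_position
    return scope_fire_signal and trigger_in_fire_position

if __name__ == "__main__":
    print(firing_directive(scope_fire_signal=True, trigger_in_fire_position=True))   # True
    print(firing_directive(scope_fire_signal=False, trigger_in_fire_position=True))  # False
    print(firing_directive(False, True, override_active=True))                       # True
```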
  • Figure 5A illustrates components of an intelligent small weapon targeting system 122 in more detail.
  • One or more central processing units (CPU) 134 are coupled to one or more internal and/or external memories 136 via a bus 138.
  • Bus 138 includes one or more wired or wireless data paths that communicatively couple some or all of the internal and/or external components of the intelligent small weapon targeting system 122.
  • the various components shown in Figure 5A may be part of the target acquisition and tracking system 126, part of the firing decision engine 128, or separate components from either one of these.
  • the components of Figure 5A might be considered part of the electronic scope 124 or separate components from the scope 124.
  • I/O ports 140 permit the intelligent small weapon targeting system 122 to output data and receive data from external sources.
  • the I/O ports 140 are cooperatively coupled to components of the targeting system and other components not shown in Figure 5A.
  • the I/O ports 140 are coupled to keypads, touchscreens, displays, feedback devices, alarms, aiming controllers, gyroscopes, computing devices, distance measuring devices (e.g., Light Detection and Ranging (LIDAR) devices), and the like.
  • the I/O ports 140 facilitate the input of new software programs, initialization parameters, control data, and the like.
  • the I/O ports 140 of Figure 5A include general purpose I/O ports that are configured for determined operation with devices of the intelligent small weapon targeting system 122.
  • the I/O ports 140 also include communication ports that may follow one or more standardized or custom protocols such as RS-232 or RS-485 serial communications, universal serial bus (USB), parallel port, IEEE-1394 communication, and the like.
  • the I/O ports 140 facilitate the output of recorded data or other parameters of the intelligent small weapon targeting system 122.
  • the intelligent small weapon targeting system 122 may capture image data in memory 136 for some period of time before a projectile is fired from the small weapon. Additional image data may also be captured during a projectile firing and for a time period after a projectile has been fired.
  • the time period of data capture before firing and the time period of data capture after firing may each be 10 seconds, 5 minutes, or some other time period.
  • during these capture windows, the image data may be captured at a higher resolution than at other times, and in other cases, the resolution may be lower. Additional data related to the targeting system such as environmental parameters, user parameters, and the like may also be captured.
  • a video showing events before, during, and after a projectile is fired may include high or low resolution images, moving images, and audio.
  • Additional data related to the video may include settings and recorded values related to environmental sensors, accelerometers, and the like. The additional data may be synchronized in time with the video.
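One plausible way to implement the pre- and post-firing capture described above is a rolling buffer that is frozen when the shot is taken; the class below is a hypothetical sketch, with illustrative durations and a trivial sensor payload standing in for real frames and synchronized sensor data.

```python
from collections import deque
import time

class ShotRecorder:
    """Keep a rolling pre-trigger buffer and capture a post-trigger window."""

    def __init__(self, pre_seconds: float = 10.0, post_seconds: float = 10.0,
                 fps: float = 30.0):
        self._pre = deque(maxlen=int(pre_seconds * fps))   # rolling pre-fire history
        self._post_frames_left = 0
        self._post_budget = int(post_seconds * fps)
        self.recorded = []

    def add_frame(self, frame, sensor_sample):
        entry = (time.time(), frame, sensor_sample)         # keep sensors time-synchronized
        if self._post_frames_left > 0:
            self.recorded.append(entry)
            self._post_frames_left -= 1
        else:
            self._pre.append(entry)

    def on_fire(self):
        """Freeze the pre-trigger history and start the post-trigger window."""
        self.recorded = list(self._pre)
        self._post_frames_left = self._post_budget

if __name__ == "__main__":
    rec = ShotRecorder(pre_seconds=1.0, post_seconds=1.0, fps=5.0)
    for i in range(10):
        rec.add_frame(frame=f"img{i}", sensor_sample={"accel": 0.01 * i})
    rec.on_fire()
    for i in range(10, 13):
        rec.add_frame(frame=f"img{i}", sensor_sample={"accel": 0.01 * i})
    print(len(rec.recorded), "frames captured around the shot")
```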
  • the intelligent small weapon targeting system 122 may include audio devices 142.
  • the audio devices may be speakers or other devices capable of reproducing a wide range of frequencies in the audio spectrum.
  • the audio device 142 may project the voice commands.
  • the audio devices 142 are piezo or like devices that project tones of one or more frequencies to alert the small weapon operator to determined events.
  • the tones may be single beeps, interval beeps, or solid tones.
  • a beeping series of tones may be sounded as a target is acquired and locked. The beeps may rise in volume and/or frequency to indicate increasing confidence, and as the target confidence threshold is met, the beep may change to a solid tone.
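The audio cues described above amount to a mapping from target confidence to a beep pattern; a hypothetical version of that mapping is sketched below, with illustrative frequencies and rates.

```python
def aiming_tone(confidence: float, lock_threshold: float = 0.9):
    """Map target confidence (0.0 to 1.0) to a simple audio cue description.

    Below the lock threshold the beep rate and pitch rise with confidence; at
    or above the threshold the cue becomes a solid tone. The specific numbers
    are illustrative only.
    """
    if confidence >= lock_threshold:
        return {"pattern": "solid", "pitch_hz": 1600}
    return {"pattern": "beep",
            "rate_hz": 1 + 9 * confidence,          # 1 to 10 beeps per second
            "pitch_hz": 400 + int(1000 * confidence)}

if __name__ == "__main__":
    for c in (0.2, 0.6, 0.95):
        print(c, aiming_tone(c))
```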
  • Some embodiments of the intelligent small weapon targeting system 122 include a display or other visual output device 144.
  • the display 144 may be a mono-color, gray-scale (e.g., black and white), or multi-color display.
  • the display 144 may have an integrated touch-screen or a touchscreen capable device may be cooperatively coupled.
  • selected icons may be integrated with the display.
  • display icons may include a battery indicator, a target outline delineator (e.g., a circle, a square, a rectangular box icon, etc., to emphasize a target), a day or night ambient light sensor indicator, operating mode indicators, and others.
  • the display 144 may be a liquid crystal display (LCD), light emitting diode (LED), or some other technology.
  • the display 144 may have a portrait, landscape, square, or another orientation.
  • the display may have quarter video graphics array (QVGA), half VGA, or full VGA resolution, or some other high resolution or low resolution configuration.
  • the display may be a substantially transparent display only having particular icons affixed over the optical viewing area of an electronic or optical scope 112, 124.
  • the embodiment of FIG. 5A includes one or more camera devices 146a-n, for example, to support the computer-vision technology described herein.
  • the cameras 146a-n are typically aimed in a target direction and configurable to provide image data to the targeting system.
  • the cameras 146a-n and/or the display 144 are mounted in a straight line or in parallel. In other cases, the cameras 146a-n and/or the display 144 are configurably mounted such that the cameras 146a-n can be aimed in a first target direction while a display 144 is aimed in a second, different direction. For example, if a small weapon is positioned around a corner or above an obstacle, the cameras 146a-n positioned in a first direction can provide image data while the small weapon operator views the display 144, which is remotely mounted in a second, different direction, from a safer position.
  • the cameras 146a-n may include charge-coupled devices (CCD), complementary metal-oxide semiconductor (CMOS) devices, or some other image sensor technology.
  • the imaging sensors may be arranged as an array of pixels or in some other configuration.
  • the imaging sensors are configurable to provide a single image data frame or a plurality of data frames (e.g., a series of sequential images).
  • the number of pixels in a camera 146a-n array may determine that the camera 146a-n is configured as a high resolution camera, a low resolution camera, or some other resolution.
  • Embodiments of cameras 146a-n may also include night vision configurations.
  • the night-vision capabilities may be operative as active or passive configurations or with a different technology that permits the capture of image data in low light or dark conditions.
  • Embodiments of cameras 146a-n may include thermal imaging devices.
  • Embodiments of cameras 146a-n may include image enhancing lenses.
  • the intelligent small weapon targeting system 122 may include one or more environment sensors 148a-n configured to produce environment data samples.
  • the environment sensors 148a-n may sense temperature, wind, humidity, altitude, air density, air pressure, ambient light, flashing light, or other environmental conditions.
  • the sensors 148a-n provide analog or digital environmental data to the targeting system 122.
  • the environmental data may be cooperatively used with other data by the CPU 134 to calculate target acquisition, target lock, projectile trajectory, target distance, confidence threshold, and many other parameters.
  • Embodiments of the intelligent small weapon targeting system 122 may include an accelerometer module having one or more accelerometers 150a-n. Accelerometers 150a-n may be configured to provide motion data information samples related to the motion of the small weapon. For example if the weapon operator is moving the weapon intentionally while tracking a target or shaking the weapon in reaction to his current predicament, the accelerometers can provide motion data to the targeting system 122. The targeting system can cooperatively use the motion data to acquire, track, and lock on a target. In some cases, the small weapon is being aimed from a moving platform. The motion data from accelerometers 150a-n, along with environmental data from environment sensors 148a-n in some cases, can be used in acquiring, tracking, and locking on a target. Further, the motion and/or environment data can be used to determine a projectile's calculated destination if the small weapon were fired immediately.
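As a simplified illustration of how accelerometer samples might be folded into tracking, the sketch below double-integrates recent readings into an estimated aim-point displacement that a tracker could subtract; the sampling interval, axes, and values are hypothetical, and a real system would also filter noise and fuse additional sensors.

```python
def compensated_aim_offset(accel_samples, dt: float):
    """Integrate recent accelerometer samples into an estimated aim-point shift.

    accel_samples: list of (ax, ay) readings in m/s^2 over the last few frames.
    Double integration of acceleration gives the displacement that weapon
    shake has added since the first sample; the tracker can subtract this
    shift from the target position in the image.
    """
    vx = vy = 0.0   # velocity accumulated from the samples
    dx = dy = 0.0   # displacement accumulated from the velocity
    for ax, ay in accel_samples:
        vx += ax * dt
        vy += ay * dt
        dx += vx * dt
        dy += vy * dt
    return dx, dy

if __name__ == "__main__":
    shake = [(0.5, -0.2), (0.4, -0.1), (-0.3, 0.2)]   # hypothetical tremor readings
    print(compensated_aim_offset(shake, dt=1 / 60))
```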
  • Embodiments of the intelligent small weapon targeting system 122 may include one or more system outputs 152a-n.
  • System outputs 152a-n include outputs to control the small weapon, configure the small weapon, and inform the small weapon operator.
  • the CPU 134 can direct nozzles (e.g., solenoids, actuators, etc.) to move the barrel in order to maintain tracking of a target.
  • micro electro-mechanical system (MEMS) devices such as gyroscopic devices are directed to help track a target.
  • System outputs 152a-n also may include devices to assist the image data producing cameras 146a-n such as infrared beam producing devices for night vision, thermal imaging, autofocus, etc., ultrasound devices, and other devices.
  • Tactile outputs 154 are also included in some embodiments of the intelligent small weapon targeting system 122. Tactile outputs 154 can provide cues to the small weapon operator related to the acquisition, tracking, and locking on of a target. Tactile outputs 154 can provide cues as to the confidence level of a target being tracked. Vibrations, pulses, or other tactile outputs that change in frequency or intensity may indicate that a target is acquired, tracked, and that a determined confidence threshold has been met.
  • the intelligent small weapon targeting system 122 includes one or more user input devices 156a-n.
  • the user input devices may include sliding switches, push buttons, scroll wheels, rotating knobs, and the like connected to electronic input devices such as switches, potentiometers, capacitive input devices, and other components.
  • the input devices 156a-n may be configured by a small weapon operator to provide particular determined parameters for use by the targeting system. For example, a small weapon operator can manipulate input devices 156a-n to provide an override to the targeting system, a safety lockout of the small weapon, a threshold of confidence level, an operating mode, and many others.
  • the input devices 156a-n may be used to provide data inputs to calibrate environment sensors 148a-n, accelerometers 150a-n, cameras 146a- n, and other devices. Additionally still, the input devices 156a-n may be used to power on/off the system, enable/disable the system, change operating modes, change display views on the display 144, store, review, and/or delete data from memory, and perform many other functions.
  • a trigger input device 160 provides a control input.
  • the trigger input device includes a rounded lever arranged to conform to a human finger.
  • part of the trigger input device 160 will be observable and operable on a small weapon.
  • the trigger input device 160 will appear to resemble the trigger of a conventional small weapon.
  • the trigger input device 160 is spring-biased in such a way as to resistively oppose the lateral motion of the trigger input device 160 when pulled with a finger closing into the palm.
  • the trigger input device 160 of the intelligent small weapon targeting system 122 may operate in a way that is nearly indistinguishable from the trigger operation of a conventional small weapon.
  • the intelligent small weapon targeting system 122 will provide feedback to an operator to indicate the recognized position of the trigger input device 160.
  • visual feedback may be provided.
  • different or additional feedback may be provided.
  • the trigger input device 160 provides one or more mechanical or electronic tactile feedback outputs during the time the trigger input device 160 moves laterally.
  • as the trigger input device 160 moves, one or more mechanically or electronically produced tactile outputs may be felt by the small weapon operator.
  • audio and/or video outputs may also be produced when the trigger input device 160 moves.
  • as the trigger input device 160 advances further, one or more additional outputs may be produced. The additional outputs may be the same or similar to the outputs produced when the trigger input device 160 first begins to move.
  • the trigger input device 160 is an electromechanical device.
  • the trigger input device 160 provides a trigger input to a small weapon in a conventional manner when the intelligent small weapon targeting system 122 is appropriately configured, powered off, disabled, or not functioning. In such cases, for example, a small weapon operator will pull the trigger input device 160 and the small weapon will fire a projectile.
  • when the intelligent small weapon targeting system 122 is appropriately configured, the trigger input device 160 will cause a feedback output to the operator when the trigger input device 160 is advanced to a first position. Subsequently, the trigger input device 160 may also cause feedback outputs to the operator when it is advanced to further positions, for example to a second position, third position, or other position.
  • the feedback output may include a recognizable mechanical output felt through the operator's finger.
  • the feedback output may include a beep or other audio output.
  • the feedback output may include an indication on a visual display such as the illumination of an icon or shaped indicator (e.g., a square shape located so as to surround a target viewable on an optical or electronic scope), a fading or intensifying on the visual display, or some other visual indicator.
  • the feedback output of the trigger input device 160 may be produced mechanically or electronically.
  • the trigger input device 160 is configured with electrical contacts. When the trigger input device 160 moves laterally, one or more sets of electrical contacts may be coupled to produce a detectable input signal.
  • the input signal from the trigger input device 160 can be analyzed and processed by components of the intelligent small weapon targeting system 122, and in response to the input signal, the feedback output may be produced and other processing by the CPU 134 may be initiated or advanced.
  • the trigger input device 160 receives one or more electronic signals as inputs.
  • the electronic signals may be provided by a program executing on the intelligent small weapon targeting system 122 or the electronic signals may be provided by some other device (i.e., via ports 140).
  • a first signal input, for example, may cause a feedback output to the operator similar to the feedback produced when the electro-mechanical trigger input device 160 is advanced to a first position.
  • Second, third, and other electronic signals may also be input via the trigger input device 160, and particular feedback outputs and operations may be caused.
  • the CPU 134 will execute software instructions that direct the operations of the intelligent small weapon targeting system 122.
  • the operations of the CPU 134 cause the small weapon to receive, analyze, and act on input from the trigger input device 160.
  • the operations may include a determination to fire the small weapon.
  • an electronic or mechanical signal will be provided to a weapon output 158.
  • the weapon output 158 typically includes the structure and/or electronic circuits that initiate a projectile firing sequence. Once the projectile firing sequence begins, it is generally irreversible. That is, once the intelligent small weapon targeting system 122 makes the decision to fire a projectile, determined signaling is presented to the weapon output 158, and the projectile is fired.
  • the weapon output 158 includes an electronic circuit that provides an electronic signal voltage to the projectile. In another embodiment, the weapon output 158 includes an electro-mechanical structure that receives an electronic signal voltage that causes a mechanical pin device to strike the firing point of a projectile.
  • the weapon output 158 may further include one or more safety and/or override inputs. The assertion of a signal on a safety input will prevent the weapon output 158 from firing a projectile.
  • the safety input may be electronic, mechanical, or some combination thereof.
  • the assertion of a signal on the override input will permit the weapon output 158 to fire a projectile if a mechanical trigger on the small weapon (e.g., the trigger input device 160) is manually squeezed.
  • the override input may be electronic, mechanical, or some combination thereof.
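A hypothetical sketch of the weapon output stage with its safety and override inputs follows; the class and method names are illustrative and not the patent's interfaces.

```python
class WeaponOutput:
    """Sketch of the weapon output stage described above (hypothetical API).

    The safety input blocks firing entirely; the override input lets a manual
    trigger squeeze fire the projectile even without a system fire command.
    Once started, the firing sequence is treated as irreversible.
    """
    def __init__(self):
        self.safety_asserted = False
        self.override_asserted = False
        self.fired = False

    def request_fire(self, system_fire_command: bool, manual_trigger: bool) -> bool:
        if self.fired or self.safety_asserted:
            return self.fired                  # safety lockout, or already fired
        if system_fire_command or (self.override_asserted and manual_trigger):
            self.fired = True                  # irreversible firing sequence begins
        return self.fired

if __name__ == "__main__":
    out = WeaponOutput()
    out.safety_asserted = True
    print(out.request_fire(system_fire_command=True, manual_trigger=True))   # False
    out.safety_asserted = False
    out.override_asserted = True
    print(out.request_fire(system_fire_command=False, manual_trigger=True))  # True
```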
  • the intelligent small weapon targeting system 122 includes a power source 162.
  • the power source 162 includes a disposable or rechargeable battery.
  • the power source 162 provides electrical power to the CPU 134, memory 136, environmental sensors, and other components of the intelligent small weapon targeting system 122.
  • FIG. 5B illustrates an intelligent small weapon targeting system 122 embodied in an electronic-optical scope 188.
  • the electronic-optical scope 188 of Figure 5B includes a housing 190 having a substantially cylindrical shape and various tapers, knurls, mountings, user controls, and other features. It is recognized that the housing 190 may have other shapes and features in any suitable combination.
  • the housing 190 may be formed of a metal, plastic, composite substance, or combination of any such suitable materials.
  • the housing may include mounting structures 196 suitable to removably or irremovably attach the housing to the small weapon. Components of the intelligent small weapon targeting system 122 may be integrated with the housing 190 or the mounting structures 196.
  • the electronic-optical scope 188 has one or more optical lenses 192, but in other cases, the electronic-optical scope 188 has only electronic imaging sensors such as cameras 146a-n and visual output display devices such as display 144.
  • the electronic-optical scope 188 may have transparent, translucent, or opaque screen/lens markings 194a-e.
  • screen/lens markings provide information to the operator of the small weapon. Some of the screen/lens markings 194a-e may be etched, painted, or otherwise implemented. In some cases, the screen/lens markings 194a-e are generated and enabled electronically.
  • One screen/lens marking 194a includes a plurality of segmented indicators formed around a central portion of a lens or screen.
  • one or more of the plurality of segmented indicators may be progressively enabled. For example, if the screen/lens markings 194a are formed in a horizontal line as shown in Figure 5B, segments of the screen/lens markings 194a can be progressively enabled, beginning from the segments farthest from the center and continuing to enable segments sequentially closer to the center so as to draw the attention of the small weapon operator to the center.
  • the segments of the screen/lens markings 194a can be flashed, left illuminated, or enabled and disabled in any suitable pattern.
  • Another screen/lens marking 194b includes a cross-hairs pattern.
  • the cross-hairs pattern may be used to help a small weapon operator aim the small weapon at a suitable target. In some cases, progressive confidence in the accuracy of an acquired and locked target will cause the screen/lens markings 194b to become more brightly illuminated.
  • Another screen/lens marking 194c includes a shaped pattern around a particular area of the lens or screen.
  • the screen/lens marking 194c shape can be a circle, square, or any other suitable shape.
  • the screen/lens marking 194c shape can be formed in the central area of the lens/screen or in some other area. In some cases, the screen/lens marking 194c shape is enabled in an area where the intelligent small weapon targeting system 122 determines a projectile would hit if fired at that time.
  • a screen/lens marking 194c shape may faintly encircle a target where the system believes a projectile would travel if fired immediately. As time passes, the system may acquire and process more input data and increase the confidence level of where a projectile would strike. As the confidence level increases, the screen/lens marking 194c shape may illuminate more brightly, flash more vigorously, or provide some other indication to the operator of the increased confidence.
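The confidence-driven behaviour of the screen/lens marking 194c could be expressed as a simple mapping from confidence to brightness and flash rate; the function below is an illustrative sketch with made-up constants.

```python
def marking_appearance(confidence: float):
    """Choose how the target-outline marking might be drawn on the display.

    Brightness ramps with confidence and the marking flashes faster as the
    system becomes more certain of the predicted impact point; at full
    confidence the marking is held steady. Values are illustrative only.
    """
    brightness = min(1.0, 0.2 + 0.8 * confidence)       # faint outline at first
    if confidence >= 0.9:
        return {"brightness": brightness, "flash_hz": 0.0, "style": "steady"}
    return {"brightness": brightness,
            "flash_hz": 1.0 + 4.0 * confidence,          # flash more vigorously
            "style": "flashing"}

if __name__ == "__main__":
    for c in (0.1, 0.5, 0.95):
        print(c, marking_appearance(c))
```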
  • Still another screen/lens marking 194d includes a point-target indicator.
  • the screen/lens marking 194d may include a single point formed at the center of view in the lens system.
  • the screen/lens marking 194d point-target indicator may include a single point where the intelligent small weapon targeting system 122 determines a projectile would travel or strike if fired immediately.
  • the screen/lens marking 194d point- target indicator is used in some embodiments in a manner similar to how a laser aiming device might be used with a conventional small weapon.
  • the screen/lens marking 194d point-target indicator has several advantages. For example, the screen/lens marking 194d point-target indicator may be used without providing any visual cues that may be observed at the target. That is, since the screen/lens marking 194d point-target indicator is derived by the intelligent small weapon targeting system 122, the point-target indicator may be entirely formed within the system. This is an advantage over traditional laser aiming devices wherein a point of red light or another color is visible on the target that is being aimed at.
  • Another advantage of the screen/lens marking 194d point-target indicator is that additional input to the intelligent small weapon targeting system 122 may be used to determine where the screen/lens marking 194d point-target indicator is formed.
  • additional data such as motion data, environmental data, distance data, and the like can be used to determine where the screen/lens marking 194d point-target indicator is formed.
  • a traditional laser point is formed in a straight line without any appreciable respect for, or compensation for, outside conditions. The laser pointer may be accurately aimed at a target, but a fired projectile would be subject to the outside conditions and miss the target.
  • the intelligent small weapon targeting system 122 may improve accuracy by using input from accelerometers 150, environmental sensors 148, and the like to compensate for outside factors when the placement position of the screen/lens marking 194d point-target indicator is determined.
  • Another screen/lens marking 194e includes particular icons to provide system information to the small weapon operator.
  • the icons may indicate power source level, day/night conditions, mode of system operation, or any other suitable information.
  • Embodiments of the electronic-optical scope 188 include some or all of the components of the intelligent small weapon targeting system 122.
  • a CPU 134 and memory 136 may be included, along with other circuits coupled by a bus 138.
  • Communication ports 140, audio outputs 142, system outputs 152a-n, tactile outputs 154, user inputs 156a-n, weapon output 158, trigger input 160, and a power source 162 may also be included.
  • the intelligent small weapon targeting system may include or exclude a wide variety of features described herein.
  • the intelligent small weapon targeting system is substantially contained in a portable housing; however, the level of portability may be measured differently for different applications.
  • some SMITH AND WESSON .357 magnum models are less than 170 mm (6.5 inches) long and weigh about 400 g (about 14 oz.)
  • a U.S. Military REMINGTON M24 sniper rifle is over 1000 mm (nearly four feet) long and, fully equipped, will weigh almost 7.5 kg (about 16 lbs).
  • a military XM307 automatic grenade launcher is over 1300 mm long and can weigh more than 22 kg (over 50 lbs). Accordingly, a portable intelligent small weapon targeting system for one system may not be portable for another. Nevertheless, the embodiments described herein may vary a feature set to include, exclude, or adjust the size and/or capability of features to accommodate a desired level of portability.
  • the targeting system embodiments When compared to the small weapon to which they are attached, the targeting system embodiments will be relatively lightweight, compact, maintenance free, energy efficient, and substantially devoid of undesirable electromagnetic emissions. As a result, most existing small weapons may be outfitted with an intelligent small weapon targeting system at the time of manufacture or retrofitted at some time after manufacture. The addition of the intelligent small weapon targeting system to a small weapon can thus be accomplished quickly and at a favorable cost, which will lead to more positive mission outcomes and efficient resource utilization.
  • TgtAcq - Target acquisition under different illumination conditions.
  • TgtEnv - Compensated target tracking for environmental parameters.
  • FailSafe - Automatic failsafe mode which overrides the system upon detection of conditions such as low power, high input noise, or detected malfunction.
  • CfgSys - Configurable system modes feature setup.
  • an intelligent small weapon targeting system has an automatic projectile release (APR) feature as identified in Table 1.
  • the APR feature makes a determination based on target acquisition, tracking, and recognition, and presents the determination to one or more modules of the intelligent small weapon targeting system.
  • the intelligent small weapon targeting system will acquire a target from image data input by one or more electronic imaging devices ⁇ e.g., cameras).
  • the image data will be processed so as to acquire, track, and lock on the desired target.
  • the intelligent small weapon targeting system can calculate, derive, or otherwise, identify the timing characteristics relevant to a firing determination.
  • FIG. 6 illustrates an automatic projectile release (APR) feature flowchart 600.
  • the target tracking and acquisition module 126 is powered on, and if the intelligent small weapon targeting system 122 is configured in a Manual-Assisted mode, the cues will be presented to the weapon operator (e.g., firing readiness state will be shown on the UI), but firing the weapon to release the projectile will be accomplished by fully pulling the trigger.
  • the electro-mechanical trigger 160 will operate as a two-level switch cooperatively integrated with the electronic scope 124 and the trigger actuator 158. In such operation, partially pulling the electro-mechanical trigger 160 to a first position will engage acquisition and tracking of the target; and fully pulling the electro-mechanical trigger 160 while the determined Fire signal is present will fire the small weapon.
  • during acquisition, tracking, and locking of the target, particular aiming cues may be displayed on a user interface, particular sounds may emanate from an audio interface, and particular feedback may be provided through a tactile interface.
  • the electro-mechanical trigger 160 will operate as a single-level switch cooperatively integrated with the electronic scope 124 and the trigger actuator 158. Pulling the electromechanical trigger 160 will engage acquisition, tracking, and locking on a target. When the intelligent small weapon targeting system 122 determines that a confidence threshold is met, a determined Fire signal will be presented to the trigger actuator 158, and the small weapon will be fired.
  • the operating mode can be controlled in many ways, for example, by physical switches or electro-mechanical inputs, an interactive user interface, software variables loaded by a host system, or other means. The choice of operating mode may be made in consideration of the expected use for the small weapon or the real time situation in which the small weapon is engaged.
  • the automatic projectile release (APR) feature flowchart 600 of Figure 6 begins at 602 where the electronic scope 124 and the electro-mechanical trigger 160 are powered off. At such time, the small weapon remains operable in the Manual-Unassisted mode. At 604, the weapon operator may fully pull the electro-mechanical trigger 160 on the weapon, and at 628, the small weapon will fire.
  • the weapon operator may enable the electronic scope 124 and the electro-mechanical trigger 160. Powering on or otherwise enabling (e.g., waking from a sleep mode) the electronic circuits will initialize the intelligent small weapon targeting system 122 to a known state.
  • the initialization process includes powering the target acquisition and tracking module 126 and the firing decision engine 128 and placing the circuitry in a known state.
  • the operator may partially pull the electro-mechanical trigger 160 to a first position.
  • the act of transitioning the electro-mechanical trigger 160 to the first position enables the electronic scope 124 at 610 to render and process image data.
  • the electronic scope 124 may be operating and capturing image data; however at 610, the image data is actively used in additional processes of the intelligent small weapon targeting system 122.
  • the target acquisition and tracking module 126 of the electronic scope 124 captures and processes image data
  • particular cues may be presented to the weapon operator at 612 via a user interface (UI).
  • the particular cues may include some or all of visual, audio, and tactile cues.
  • the target acquisition and tracking module 126 and the firing decision engine 128 cooperate to acquire and lock a target of interest that is represented in the image data captured by the electronic scope 124.
  • the firing decision engine 128 evaluates particular parameters of the intelligent small weapon targeting system 122. Such parameters assist the firing decision engine 128 in deriving or calculating whether or not a determined or user- configurable "confidence" threshold is met or exceeded. In some cases, the determination of whether or not the threshold is met serves to speed up the aiming and firing of the small weapon, and in other cases, the determination serves to delay the aiming and firing of the small weapon. Until the confidence threshold is met, and in some cases even after the threshold is met, at 610 the target acquisition and tracking module 126 of the electronic scope 124 will continue to process image data along with cooperative use of feedback from the firing decision engine 128.
  • the current operating mode of the intelligent small weapon targeting system 122 may be evaluated at 616.
  • the intelligent small weapon targeting system 122 may enter a "Ready-To-Fire" state at 618. At this state, the operator can use visual or audio or tactile feedback cues, or some combination thereof, to assess their confidence level in the target and use the information to decide whether to fire the small weapon. From the ready to fire state at 618, if the operator, at 620, fully pulls the electro-mechanical trigger 160 on the weapon to the second position, the small weapon will fire at 628.
  • At 616, the intelligent small weapon targeting system 122 will determine if the system is configured for an assisted mode or an unassisted mode. In an Automatic-Unassisted operation mode, a "Fire" determination may be produced without further operator input: the operator has configured the small weapon to fire automatically after a target has been acquired, tracked, and locked. Once a sufficient confidence threshold has been met, the weapon will automatically transition to 628, where the weapon will fire.
  • In some cases, configuring the system in Automatic-Unassisted mode will permit the small weapon to fire when the electro-mechanical trigger 160 is in the first position. In other cases, the system will not transition to state 628 (where the weapon will fire) until the operator further advances the electro-mechanical trigger 160 to a second position.
  • the intelligent small weapon targeting system 122 may enter a "Ready-To-Fire" state.
  • the target acquisition and tracking module 126 and firing decision engine 128 are dynamically monitored in real time to determine if the ready to fire condition can be maintained (e.g., the target remains tracked, acquired, and locked).
  • a "Fire" determination may be presented to the electromechanical trigger 160 and a further fire command may be presented to the trigger actuator 158, and the small weapon will fire a projectile at 628.
  • In Automatic-Assisted operation modes, particular indications of target tracking, acquisition, locking, and the confidence threshold being met can be presented to the operator at 612. Such indications may also be presented in other modes of the intelligent small weapon targeting system 122 whenever the system is enabled and operating.
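  • The two-stage trigger and the operating modes described above can be summarized as a small decision routine. The sketch below is illustrative only and is not the disclosed implementation; the mode and trigger enumerations, the confidence input, and their names are assumptions.

```cpp
#include <cstdio>

// Hypothetical operating modes and trigger positions (names are illustrative).
enum class Mode { ManualUnassisted, ManualAssisted, AutomaticUnassisted, AutomaticAssisted };
enum class Trigger { Released, FirstPosition, SecondPosition };

// Returns true when the weapon should fire, given the mode, trigger state,
// and whether the firing decision engine reports a locked target whose
// confidence meets the configured threshold.
bool shouldFire(Mode mode, Trigger trigger, bool targetLockedAtThreshold) {
    switch (mode) {
        case Mode::ManualUnassisted:
            // Scope electronics may be off; a full trigger pull always fires.
            return trigger == Trigger::SecondPosition;
        case Mode::ManualAssisted:
            // Operator sees "Ready-To-Fire" cues but still decides; full pull fires.
            return trigger == Trigger::SecondPosition;
        case Mode::AutomaticUnassisted:
            // System fires on its own once the confidence threshold is met
            // (some configurations additionally require the second position).
            return targetLockedAtThreshold && trigger != Trigger::Released;
        case Mode::AutomaticAssisted:
            // Operator holds the trigger; the system releases the shot only
            // while the ready-to-fire condition is maintained.
            return targetLockedAtThreshold && trigger == Trigger::SecondPosition;
    }
    return false;
}

int main() {
    std::printf("%d\n", shouldFire(Mode::AutomaticAssisted, Trigger::SecondPosition, true));  // 1
    std::printf("%d\n", shouldFire(Mode::AutomaticUnassisted, Trigger::FirstPosition, false)); // 0
}
```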
  • some non-limiting embodiments of the intelligent small weapon targeting system 122 have a feature that compensates target tracking (TgtTrk) for motion. According to the TgtTrk feature, motion compensation may operate when the small weapon moves, which may occur when the weapon operator is shaking or by other causes. Motion compensation may also be enabled to account for motion of the target.
  • the intelligent small weapon targeting system 122 will iteratively attempt to identify a desirable target in a scene of image data captured by an electronic scope 124. If the small weapon is configured in an Automatic-Assisted operating mode, then acquisition of a target to a selected confidence threshold will automatically fire the weapon. If the small weapon is configured in an Automatic-Unassisted or Manual-Assisted operating mode, then particular indications can be presented to the weapon operator via the user interface. For example, during and after acquisition and locking of a target, visual indications of confidence level, motion, and other relevant information can be presented to the weapon operator. In some cases, the information suggests to the weapon operator in real time the best direction to track the target and a confidence level in potentially hitting the target at that moment.
  • aiming cues may be expressed as superimposed arrows or other icons 194a-e.
  • confidence cues may be presented as super-imposed circles of decreasing diameter. Such visual cues may be provided relative to a single shot or even relative to multiple shots.
  • embodiments of the intelligent small weapon targeting system 122 have the ability to "acquire" a target.
  • the target acquisition and tracking module 126 of Figure 4 is able to identify a target from image data that is captured by a camera system 146a-n.
  • additional features may be integrated and/or cooperative with the target acquisition and tracking module 126.
  • some embodiments include features wherein multiple image inputs are used independently or together.
  • the image input sources include optical inputs sourced by natural light in the visible spectrum and inputs sourced by light in the infrared spectrum.
  • the image input source can be configurable or determined automatically. Additional features, as disclosed in Table 1, may be included in embodiments of the intelligent small weapon targeting system 122. For example, consideration of environmental parameters may be used to increase the accuracy of the system.
  • the environmental inputs may include, among other things, humidity, illumination, wind speed, altitude, air density, and the like.
  • a determination of the environmental conditions may be provided by sensor inputs (e.g., environmental sensors 148a-n) or by other inputs (e.g., user input 156a-n) such as user configuration.
  • the environment data can be used in either or both of the target acquisition and tracking module 126 and firing decision engine 128.
  • the confidence threshold is met after the intelligent small weapon targeting system 122 has acquired and tracked a target.
  • the target acquisition and tracking module 126 and firing decision engine 128 cooperate to identify a particular target in a stream of image data and maintain the identification of the target as the image data is updated.
  • variable levels of confidence threshold settings may permit the intelligent small weapon targeting system 122 to maintain tracking and/or locked indications or lose tracking and/or locked indications.
  • the weapon operator can define different confidence thresholds for an indication that a target is locked.
  • the firing of the projectile occurs after presentation of a Fire signal, and the Fire signal is presented after the target is locked.
  • the different confidence thresholds may be set with a sliding scale, numerical input, or another like mechanism.
  • a weapon operator sets the confidence threshold via a user interface 156a-n.
  • the confidence threshold may be set by a host computer, an initialization setting in the system, a learning algorithm, or by some other process.
  • a "high” or first threshold indicates that the intelligent small weapon targeting system 122 has determined with a substantial likelihood that an identified target would be hit if the projectile was fired at that time.
  • a "low” or second threshold indicates that the intelligent small weapon targeting system 122 has determined with a less than substantial likelihood that an identified target would be hit if the projectile was fired at that time.
  • In other embodiments, the distinction between a "high" threshold and a "low" threshold is reversed. Accordingly, it is understood that even though the embodiments described herein use a high threshold to indicate a higher level of confidence in identification of a target, the determination of confidence is not limited to any particular choice of convention.
  • the difference between a "high" threshold and a "low” threshold of confidence can be measured, evaluated, or calculated based on one or more pieces of information.
  • the confidence level can be calculated with relation to the length of time that an acquired target is tracked (e.g., 1 second, 5 seconds, 10 seconds, etc.), the number of sequential sets of image data wherein the target was identified, the number of data points related to the target that are iteratively identified, or by some other measure.
  • the confidence level is calculated on a scale of one to one hundred, and a target is locked if the confidence level is at least seventy-five.
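  • As a rough illustration of such a 0-100 confidence scale with a lock threshold of 75, the sketch below derives a score from how long and how consistently a target has been tracked and from a stored-pattern match. The inputs and weights are assumptions for illustration and are not taken from the disclosure.

```cpp
#include <algorithm>
#include <cstdio>

// Hypothetical tracking statistics maintained by a target tracking module.
struct TrackStats {
    double secondsTracked;        // wall-clock time the target has been tracked
    int    consecutiveFrameHits;  // sequential frames in which the target was identified
    double patternMatchScore;     // 0..1 similarity to a stored target pattern
};

// Map the statistics onto a 0..100 confidence level. The weights are illustrative.
int confidenceLevel(const TrackStats& s) {
    double timeTerm    = std::min(s.secondsTracked / 5.0, 1.0);        // saturate at 5 s
    double frameTerm   = std::min(s.consecutiveFrameHits / 30.0, 1.0); // saturate at 30 frames
    double patternTerm = std::clamp(s.patternMatchScore, 0.0, 1.0);
    double score = 100.0 * (0.3 * timeTerm + 0.3 * frameTerm + 0.4 * patternTerm);
    return static_cast<int>(score + 0.5);
}

bool targetLocked(const TrackStats& s, int threshold = 75) {
    return confidenceLevel(s) >= threshold;
}

int main() {
    TrackStats s{4.0, 28, 0.9};
    std::printf("confidence=%d locked=%d\n", confidenceLevel(s), targetLocked(s));
}
```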
  • one or more target "patterns" are stored in a memory of the intelligent small weapon targeting system 122, and the confidence level is related to the identification of a current target as a likely match to a stored pattern.
  • a high confidence threshold indicates that the targeting system 122 will present a locked indication or firing decision only when the firing decision engine 128 has a high confidence that the target pattern identified by the target acquisition and tracking module 126 is in fact a legitimate target. That is, in some embodiments, for example, the system is configured to only lock onto a specific instance or certain class of target (e.g., via face recognition, pose recognition, plate recognition for vehicles, or other like pattern analysis).
  • a low confidence threshold indicates that the targeting system 122 may present a locked indication or firing decision even when it has less confidence that the target has been properly identified. For example, if intelligent small weapon targeting system 122 has set a low threshold, the locked indication may be presented if the determined projectile trajectory is merely compatible with the current position of the weapon operator and the direction the weapon is being pointed.
  • Additional embodiments of an intelligent small weapon targeting system may present the weapon operator with multiple "candidate” targets that match a selected pattern (e.g., "person"). Some targets may be “friendly,” some targets may be “hostile,” and still other targets may be undetermined.
  • the weapon operator may select a specific target via a user interface (e.g., touch screen) from a presentation of multiple candidates.
  • a scroll wheel or some other input mechanism may present the multiple candidates for selection by the weapon operator.
  • the lock indication will be presented when the specified target is acquired and determined to be hit with substantial likelihood according to the currently selected confidence threshold.
  • the intelligent small weapon targeting system may track multiple targets, and the multiple targets may be tracked by two or more intelligent small weapon targeting systems operating in relatively close proximity to each other.
  • external input may be provided through an I/O port to an intelligent small weapon targeting system to select a particular target.
  • the external input may be provided by another intelligent small weapon targeting system.
  • an intelligent small weapon targeting system may use its I/O ports in a networked configuration to provide information about one or more targets recognized by the system.
  • Figure 7 illustrates an intelligent small weapon targeting system architecture 122a including non-limiting embodiments of hardware modules, software modules, and signals.
  • the intelligent small weapon targeting system architecture 122a of Figure 7 illustrates the cooperative data sharing, data transfer, and executive control between modules of the system.
  • the embodiment of Figure 7 illustrates an architectural representation of the components of an intelligent small weapon targeting system 122 illustrated in Figures 4, 5A, and 5B.
  • the intelligent small weapon targeting system architecture 122a includes hardware and software modules having similar or overlapping hardware and software features to the intelligent small weapon targeting system 122.
  • the intelligent small weapon targeting system architecture 122a includes software modules that are stored in memory 136 and executed with CPU 134.
  • the intelligent small weapon targeting system architecture 122a includes inputs and outputs provided by and to environmental sensors 148a-n, accelerometers 150a-n, user inputs 156a-n, electro-mechanical trigger inputs 160, I/O ports 140, audio outputs 142, visual outputs 144, system outputs 152a-n, tactile outputs 154, and the like.
  • Table 3 presents a non-limiting list of categorized signals input into the intelligent small weapon targeting system architecture 122a of Figure 7.
  • the input signals are managed by the system architecture, and the input signal categories are further described herein.
  • a camera module includes one or more cameras 170.
  • One embodiment has one camera designated for daytime operation, and another embodiment has one camera designated for nocturnal or other low light operation.
  • a single camera 170 can be used in both day and night-time operations.
  • a Decision Engine Module 164 will activate cameras based on environmental conditions (e.g., day or night conditions).
  • the camera module 170 will provide image data to an image acquisition module 176.
  • the image acquisition module 176 may be in whole or in part a software module. In one embodiment, the image acquisition module 176 is integrated in an electronic scope 124.
  • the image data may be compressed, encoded, and formatted according to a particular standard or the image data may be raw.
  • the image acquisition module 176 filters and extracts features from the image data supplied by the camera modules.
  • the image acquisition module 176 cooperatively communicates with a target tracking module 178.
  • Image data from image acquisition module 176 is made available to the target tracking module 178.
  • the target tracking module 178 may be in whole or in part a software module.
  • the image data may be compressed, encoded, and formatted according to a particular standard or the image data may be raw.
  • the target tracking module 178 computes image evolution data from a sequence of filtered images and features. That is, a particular image feature in one or more images may be identified and tracked in a sequence of images.
  • the target tracking module 178 communicates image feature data to a decision engine module 164.
  • the target tracking module 178 and decision engine module 164 are integrated in a single module.
  • the modules may share software and hardware circuitry, or they may have dedicated software and hardware circuitry.
  • the target tracking module 178 and decision engine module 164 collaborate to provide feedback to the image acquisition module 176 such as image position and/or sight position prediction.
  • accelerometer data from sensors 174 can provide approximate information about the small weapon platform's movement.
  • the decision engine module 164 receives inputs from various sensors 174 in the intelligent small weapon targeting system architecture of Figure 7.
  • the sensors include environment sensors (e.g., wind, humidity, ambient light, location data, and the like), accelerometers, operational sensors, target distance sensors (e.g., LIDAR), and the like.
  • the sensors 174 typically include electronic circuitry and software (e.g., drivers) to configure and operate the sensors.
  • the decision engine module 164 evaluates the environmental and operational inputs and provides feedback to the image acquisition module 176 and the target tracking module 178.
  • the decision engine module 164 can generate aiming cues and make or recommend a projectile release decision. If a tracked target is acquired and locked, and if determined user input is present, the decision engine module 164 can present a trigger release signal to a platform 168. The trigger release signal can be made when a confidence level exceeds a threshold.
  • the platform 168 includes the projectile firing mechanisms of a small weapon (e.g., electro-mechanical trigger, trigger actuator, etc.).
  • the decision engine module 164 computes a confidence level across a sequence of image data frames generated by the image acquisition module 176.
  • a confidence level is determined from operations performed on three types of data: a classification of the target and/or target features, a position prediction for the target, and a position prediction for the platform 168.
  • a confidence level is determined for each of the three types of data, and an overall confidence level for a target can be calculated from a combination of the various individual confidence levels.
  • a determined quality value related to motion information for the platform 168 derived from sensors (e.g., accelerometer) applied to the platform 168 can be used.
  • Approximation values representing predictions at the moment of firing can be used.
  • other factors can also be used to determine a confidence level.
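  • One plausible way to combine the classification, target-position, and platform-position estimates into a single target confidence is sketched below; the multiplicative combination, the field names, and the sensor-quality factor are assumptions, not the disclosed formula.

```cpp
#include <cstdio>

// Hypothetical per-component confidences, each normalized to 0..1.
struct ComponentConfidence {
    double classification;    // how sure the classifier is about the target class
    double targetPosition;    // quality of the predicted target position at fire time
    double platformPosition;  // quality of the predicted weapon platform position
    double sensorQuality;     // e.g., accelerometer data quality applied to the platform
};

// Combine the components into one overall confidence. A product is a
// conservative choice: any weak component pulls the overall value down.
double overallConfidence(const ComponentConfidence& c) {
    return c.classification * c.targetPosition * (c.platformPosition * c.sensorQuality);
}

int main() {
    ComponentConfidence c{0.8, 0.9, 0.95, 0.9};
    std::printf("overall=%.3f\n", overallConfidence(c)); // ~0.616
}
```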
  • the system may employ one or more modules to "classify" several features of the particular target in various ways.
  • the target features that can be classified include moments, contour, texture, pose, bounding box, and the like.
  • One way that each feature may be classified includes a system determination of the quality of the features as extracted from the image data.
  • Other ways to classify features may include determinations related to the nature of the intended target (e.g., whether the target is determined to be an animate or inanimate object) and determinations related to the pose of the target (e.g., whether the target is a standing or sitting human).
  • the modules that perform the classifications may provide an output as a level of confidence for the feature.
  • one module output may be a value representing an 80% confidence level that a target is human.
  • the system may predict the position of the target and the position of the platform 168 at substantially the moment the bullet is fired.
  • the positions may be computed using target and platform motion information across a sequence of image frames used to classify the target.
  • the system can calculate or store as a parameter a delay between a firing decision and an actual firing action.
  • the system can further use information such as the target's motion, the determined target pose, the platform's motion, and the like to compute a firing time that is likely to be successful.
  • the precision of the predicted position values can be increased using feedback including actual measurements in subsequent sampled image data.
  • the predicted target position at any given time can be adjusted in the decision engine 164 based on the target's actual position when the target is successfully classified from the sequence of frames.
  • the approximation of the prediction for the target position may be related to the classification of the target while the approximation of the prediction for the platform 168 position may be related to the quality of raw sensor measurements and the sampling frequency of the sensors.
  • the target feature classification information and confidence level is combined with the target position prediction information and platform position prediction information to create a confidence level in the target.
  • the confidence level in the target can be matched in the decision engine 164 to a configurable threshold, and a firing decision can be determined.
  • the target tracking module 178 and decision module 164 send and receive data signals to and from a user interface 166.
  • the user interface 166 can communicate real time and predetermined user settings to the target tracking module 178 and decision module 164.
  • the target tracking module 178 and decision module 164 can provide aiming cues and other signals to the user interface 166.
  • the communicated data can include trigger pressure, distance measurement calculations or settings, location information, radio/satellite feed data, security data, and the like.
  • the communication of image or other data between modules involves an actual transfer of data from one hardware or software module to another.
  • data is communicated by way of shared memory space wherein one module locates processed data in a determined area of memory and another module accesses the data from the same area of memory.
  • the communication of data may be unidirectional or bidirectional.
  • the software modules of intelligent small weapon targeting system architecture 122a include one or more sub-modules in some embodiments. The particular tasks of the image acquisition module 176, the target tracking module 178, and the decision engine module 164 can be designed in many ways, including as a hierarchical system of software with deterministic response times and predictable resource consumption.
  • the system typically operates under tight computational budgets. Inputs can be analyzed and processed in real time, which means that an operator of the small weapon has the perception of substantially instantaneous operation. That is, to an operator of a properly configured small weapon, the intelligent small weapon targeting system architecture 122a can appear to provide nearly transparent operation.
  • An embodiment of the intelligent small weapon targeting system architecture 122a has a single thread of execution.
  • the single execution thread typically uses less system resources than a multi-tasking operating system.
  • Each subsystem executes a specific module (e.g., image acquisition, target tracking, decision engine) to implement its functionality and to keep track of the input and output data signals.
  • a scheduler interface is implemented by each module to permit the subsystem tasks to execute cooperatively. The scheduler may permit tasks to execute asynchronously or with the appearance to an operator of asynchronous execution.
  • each subsystem module can acquire data and post computational results as instances of a workitem() type to a first-in, first-out (FIFO) queue.
  • a module can access an instance of the queue type for input and an instance of the queue type for output.
  • the output queue of one subsystem typically is an input queue of a following subsystem.
  • Each module is typically responsible for publishing and keeping track of its own execution parameters, such as the number of pending work items.
  • a scheduler can select a highest pending priority task based on the module's execution parameters.
  • each task/module can be associated with an execution cost based, for example, on a moving average of the task/module's historical execution time.
  • Each task can be placed on an execution list based on its expected execution time, the size of its input parameters (i.e., the number of pending work items), and other parameters.
  • a scheduler assesses the outstanding parameters, evaluates task priorities, and directs execution control to each task.
  • After a task executes, the scheduler can update an expected execution time for that task and place the task back in the execution list in an appropriate spot.
  • the number of outstanding work items and the historical execution time can be used to determine the load of a module.
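  • The load figure described above could be derived by multiplying a module's pending work items by a moving average of its historical execution time, as in the sketch below; the class name, the exponential smoothing, and the constants are assumptions.

```cpp
#include <cstdio>

// Tracks a module's execution cost as an exponential moving average and
// exposes a load figure the scheduler can compare across modules.
class ModuleLoad {
public:
    explicit ModuleLoad(double smoothing = 0.2) : alpha_(smoothing) {}

    void recordExecution(double milliseconds) {
        avgCostMs_ = (avgCostMs_ == 0.0) ? milliseconds
                                         : alpha_ * milliseconds + (1.0 - alpha_) * avgCostMs_;
    }

    void workItemQueued()   { ++pendingItems_; }
    void workItemConsumed() { if (pendingItems_ > 0) --pendingItems_; }

    // Expected time needed to drain the queue; used as the scheduling priority.
    double load() const { return pendingItems_ * avgCostMs_; }

private:
    double alpha_;
    double avgCostMs_ = 0.0;
    int    pendingItems_ = 0;
};

int main() {
    ModuleLoad tracking;
    tracking.recordExecution(4.0);
    tracking.recordExecution(6.0);
    tracking.workItemQueued();
    tracking.workItemQueued();
    std::printf("load=%.2f ms\n", tracking.load());
}
```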
  • Embodiments of the intelligent small weapon targeting system architecture 122a can implement two types of schedulers, and other scheduler types may also be used.
  • a single scheduler module is typically selected, compiled, loaded into the targeting system, and configured for deployment.
  • multiple scheduler configurations can be stored in memory and one particular scheduler can be chosen at runtime.
  • user selections or environment conditions will direct the use of one particular scheduler.
  • One type of scheduler used in some embodiments is a round robin scheduler.
  • a round robin scheduler executes one or more tasks from each module according to a determined allocated time.
  • a round robin scheduler embodiment is described in more detail below.
  • a fixed priority scheduler executes one or more tasks from a module according to highest outstanding load.
  • In a fixed priority scheduler embodiment, for example, the intelligent small weapon targeting system architecture 122a operates in a closed execution system. That is, the system will not use a general purpose real time operating system (RTOS) with threading support or synchronization calls. Instead, the system will use a dedicated task scheduler configured to avoid task starvation and increase throughput of the overall system.
  • the dedicated task scheduler in such an embodiment will configure a single thread of execution to spool active tasks from a list in a determined priority order.
  • Tasks associated with hardware modules may be implemented by actual threads of execution when running in an emulation environment, but the hardware module tasks will be treated in the same co- operative fashion by the scheduler.
  • the system may implement abstraction interfaces for hardware modules (e.g., accelerometers, cameras, user interface, projectile release, etc).
  • Hardware subsystems, such as accelerometers, may be configured to provide interrupting alerts in system hardware, and the hardware subsystems can also be abstracted and/or emulated in different operating modes of the system.
  • Abstracted hardware interfaces can be configured as tasks or data inputs that are processed by tasks, and the tasks can be managed by the scheduler.
  • a "scene" data structure tracks global properties of a universe identified in a stream of image data provided by a camera.
  • global properties include particular identifications of foreground and background features, the number of identified targets, the number of actively tracked targets, sights position, and the like.
  • scene background extraction is a useful component of the scene data structure.
  • a scene background can be calculated in a variety of ways, including but not limited to, frame differencing, temporal and spatio-temporal averaging, median extraction, background tessellation reconstruction, and others.
  • scene background extraction methods work on a variable and configurable frame depth.
  • a frame differencing background extraction method works on two frames only.
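  • Frame differencing of the kind mentioned above can be sketched without any imaging library as a per-pixel absolute difference followed by a threshold; pixels whose difference falls below the threshold are treated as background. The buffer layout and the threshold value are assumptions.

```cpp
#include <cstdint>
#include <cstdio>
#include <cstdlib>
#include <vector>

// Marks a pixel as foreground (255) when two consecutive grayscale frames
// differ by more than `threshold`; everything else is treated as background.
std::vector<uint8_t> frameDifference(const std::vector<uint8_t>& prev,
                                     const std::vector<uint8_t>& curr,
                                     uint8_t threshold = 25) {
    std::vector<uint8_t> mask(curr.size(), 0);
    for (std::size_t i = 0; i < curr.size(); ++i) {
        int diff = std::abs(static_cast<int>(curr[i]) - static_cast<int>(prev[i]));
        mask[i] = diff > threshold ? 255 : 0;
    }
    return mask;
}

int main() {
    std::vector<uint8_t> prev{10, 10, 200, 10};
    std::vector<uint8_t> curr{12, 90, 198, 10};
    auto mask = frameDifference(prev, curr);
    for (uint8_t m : mask) std::printf("%u ", m); // 0 255 0 0
    std::printf("\n");
}
```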
  • work items are instantiated as transparent containers for data extrapolated from a frame.
  • Work items include connected components, target and sight position, and the like. Work items are linked to each other across one or more queues, and the work items can be used to implement a feedback mechanism across successive frames.
  • a module can acquire contextual data from a previous work item by walking backwards up a work item chain. In the same way, newly discovered contextual data can be pushed to a successive work item.
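  • The work-item chaining described above might be organized as in the sketch below, where each work item keeps a link to its predecessor so a module can walk backwards for contextual data and push newly discovered data forward; the field names and the use of a standard deque are assumptions.

```cpp
#include <cstdio>
#include <deque>
#include <memory>
#include <string>

// A transparent container for data extracted from one frame. Items are linked
// across queues so later stages can recover context from earlier frames.
struct WorkItem {
    int frameIndex = 0;
    std::string note;                    // e.g., connected components, sight position
    std::shared_ptr<WorkItem> previous;  // link to the work item of the prior frame
};

using WorkQueue = std::deque<std::shared_ptr<WorkItem>>;  // FIFO between modules

int main() {
    WorkQueue acquisitionOut;  // output of image acquisition == input of target tracking

    std::shared_ptr<WorkItem> prev;
    for (int frame = 0; frame < 3; ++frame) {
        auto item = std::make_shared<WorkItem>();
        item->frameIndex = frame;
        item->note = "components for frame " + std::to_string(frame);
        item->previous = prev;           // chain to the previous frame's item
        acquisitionOut.push_back(item);
        prev = item;
    }

    // A downstream module walks backwards up the chain for contextual data.
    for (auto ctx = acquisitionOut.back(); ctx; ctx = ctx->previous)
        std::printf("frame %d: %s\n", ctx->frameIndex, ctx->note.c_str());
}
```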
  • Image data having at least one potential target, or image data of global interest such as an identified target or a predicted sights position, will be tracked in the scene data structure.
  • the scene data structure can be accessed by the various modules and acts as a global property holder.
  • Particular tasks and work items can retrieve information from the scene data structure.
  • Relevant background scene information is typically available as global data to other modules in the intelligent small weapon targeting system. For example, if a target is identified and actively tracked, then that area of a scene can be so identified such that the particular computational area is restricted to the region around the tracked target.
  • a system of one or more accelerometers provides motion data that is used to help track the small weapon platform's position.
  • the accelerometer inputs can be used to establish a baseline reference position and a direction of platform motion.
  • accelerometers can be used to bootstrap a frame synchronization process.
  • a background can be distinguished and, in some cases, extracted from the scene.
  • a single accelerometer device provides the motion data.
  • motion of the small weapon platform can be refined using techniques such as optical flow, CAM shift, or other processes.
  • One advantage of using motion data provided by accelerometers is that motion recognition and compensation can be achieved with a coarse synchronization process. Subsequently, motion data can be used to restrict a search space of processing algorithms employed at a second level fine synchronization process. Using a two-level synchronization process provides more accurate target tracking results with fewer computational resources, as sketched below.
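  • The two-level synchronization idea can be illustrated as follows: an accelerometer-derived coarse shift estimate centers a small search window, and the fine synchronization step searches only inside that window. The conversion factor from motion to pixels and the window size below are invented for the example.

```cpp
#include <cstdio>

// Hypothetical conversion from integrated accelerometer motion to image pixels.
struct CoarseShift { int dxPixels; int dyPixels; };

CoarseShift coarseShiftFromMotion(double dxMeters, double dyMeters, double pixelsPerMeter) {
    return { static_cast<int>(dxMeters * pixelsPerMeter),
             static_cast<int>(dyMeters * pixelsPerMeter) };
}

// The fine synchronization step (e.g., optical flow or correlation) only has to
// search a small window centered on the coarse estimate, not the whole frame.
struct SearchWindow { int xMin, xMax, yMin, yMax; };

SearchWindow restrictSearch(const CoarseShift& coarse, int halfWindow = 8) {
    return { coarse.dxPixels - halfWindow, coarse.dxPixels + halfWindow,
             coarse.dyPixels - halfWindow, coarse.dyPixels + halfWindow };
}

int main() {
    CoarseShift shift = coarseShiftFromMotion(0.002, -0.001, 5000.0); // 10, -5 px
    SearchWindow w = restrictSearch(shift);
    std::printf("search x:[%d,%d] y:[%d,%d]\n", w.xMin, w.xMax, w.yMin, w.yMax);
}
```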
  • An activity diagram of an embodiment of the intelligent small weapon targeting system architecture 122a is illustrated in Figure 8.
  • the targeting system 122a of Figure 8 shows components for the targeting system embodiment, and illustrates processing of an initial capture of an image, tracking a potential target, recognizing and acquiring a target, and making a firing decision.
  • the abstracted scheduler 180 of Figure 8 directs processing across five modules.
  • the modules typically comprise software code, data structures, and storage space.
  • the software modules typically interact with electronic circuitry including decoders, timers, interrupts, arithmetic units, and the like.
  • the modules illustrated in Figure 8 include a sensor module 172, an image acquisition module 176, a target tracking module 178, a decision engine module 164, and a user interface module 166.
  • the scheduler 180 works according to its particular architecture, several of which embodiments have been identified herein. Depending on the nature of the scheduler's architecture, the scheduler 180 directs modules to execute tasks. In one embodiment, the scheduler presents a determined time budget to an identified module. The identified module is given execution control and can continue execution until the determined time budget is consumed. In some cases, the module will cyclically continue processing for the fully budgeted time, and in other cases, the module will process through a full queue cycle and return execution control back to the scheduler even if time remains in the time budget.
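  • The time-budget handshake between the scheduler and its modules could be organized as a single cooperative loop like the one sketched below; the Module interface and the budget values are assumptions rather than the disclosed API.

```cpp
#include <chrono>
#include <cstdio>
#include <vector>

using Clock  = std::chrono::steady_clock;
using Millis = std::chrono::milliseconds;

// Minimal cooperative module interface: process one queued work item and
// report whether more work remains.
struct Module {
    const char* name;
    int pending;
    bool step() { if (pending > 0) --pending; return pending > 0; }
};

// Give each module a time budget; the module returns control either when the
// budget is consumed or when its queue cycle is finished.
void runOnce(std::vector<Module>& modules, Millis budget) {
    for (auto& m : modules) {
        auto start = Clock::now();
        while (m.step()) {
            if (Clock::now() - start >= budget) break;  // budget consumed
        }
        std::printf("%s returned control (pending=%d)\n", m.name, m.pending);
    }
}

int main() {
    std::vector<Module> modules{{"sensor", 3}, {"acquisition", 5}, {"tracking", 2}};
    runOnce(modules, Millis(2));
}
```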
  • the scheduler 180 directs a sensor module 172 to commence processing by allotting a particular time budget to the sensor module.
  • the sensor module 172 can control processing of one or more cameras and acquire frame image data.
  • the sensor module 172 creates a frame processing task to process the frame image data.
  • the frame image data can be tagged inside the task with the identified sights and image position as global coordinates.
  • Frame image data can additionally be preprocessed to eliminate noise and to enhance determined features.
  • the sensor module 172 can store or use pointer techniques to make the frame image data available to a scene background extraction queue.
  • the frame processing task can itself be queued for operations by other modules.
  • the other modules can cooperatively engage the frame processing task to perform additional image processing operations.
  • After the sensor module 172 queues the frame processing task, if there is any time left from the budget granted by the scheduler 180, the sensor module 172 can execute another task (e.g., process additional image data). Alternatively, the sensor module 172 can return execution control to the scheduler 180.
  • the scheduler 180 of Figure 8 further directs an image acquisition module 176 to commence processing by allotting a particular time budget to the acquisition module 176.
  • the image acquisition module 176 extracts and processes tasks (e.g., frame processing tasks) queued by the sensor module 172.
  • the image acquisition module 176 uses the frame processing tasks to extract a background.
  • the successful extraction of a background is possible only after a determined number of frames have been acquired.
  • the determined number of frames can be adjustable.
  • In some cases, a successful background extraction can be determined from a small number of frames (e.g., 32 acquired frames, n frames acquired in 500 msec, n frames of substantially identical image data, etc.); in other cases, a successful background extraction can be determined only after a larger number of frames have been acquired.
  • image frame data can be coarsely synchronized using motion data provided by one or more accelerometers.
  • the synchronization can further be refined with optical flow computation algorithms.
  • the background image data can be used to calculate and filter a foreground image or target of interest.
  • the image acquisition module 176 performs particular calculation acts to connect components of the background image with components of the foreground image.
  • the image acquisition module 176 re-queues the tasks for further processing by target tracking module 178.
  • the target tracking module 178 of Figure 8 operates to extract queued tasks and perform particular operations. For example, the target tracking module 178 will begin operations to label and segment the connected background and foreground components using determined acts (e.g., clustering, classifying, and other methods). The target tracking module 178 can then identify a target to track. After performing acts to extract features from the identified target, the target tracking module 178 can then queue the tasks to other modules for additional processing. In some cases, the image acquisition module 176 picks up the queued tasks for additional processing. In some cases, a decision engine module 164 picks up the queued tasks for processing.
  • the decision engine module 164 can operate on queued tasks to perform particular classifications. For example, the decision engine module 164 can present particular target features to a classifier task. The classifier task can recalculate identified sights and target position based on information acquired from the corresponding frames. The decision engine module 164 can further evaluate particular configuration settings, confidence level threshold settings, determined targeting information, and the like, and make a firing decision.
  • target and sight position data are local properties of a work item associated with a specific frame.
  • Target and sight and position data and tracking data is typically analyzed and processed in conjunction with a determined confidence level.
  • the sight and target position data can be propagated to particular scene data structures when the data is determined to be of global interest (e.g., an identified target or a predicted sights position).
  • the global properties of a scene are available to each task and module that uses the data for processing. Thus, when a frame processing task is re-queued by the decision engine module 164, other modules have access to the data.
  • a user interface module 166 is also provided with a time budget and execution control by the scheduler 180.
  • the user interface module 166 can extract queued frame processing tasks and use calculated image data to provide aiming cues and other information to an operator of the intelligent small weapon targeting system architecture 122a.
  • the user interface module 166 can operate according to the granted time budget or the number of queued tasks. In some cases, if the user interface module 166 determines that sufficient useful information has been processed, the operational data can be effectively cleared or deleted, and the particular tasks can be terminated. Execution control is returned to the scheduler 180.
  • Figure 9 illustrates a functional flow sequence diagram for a round robin scheduler 182.
  • the round robin scheduler 182 can be found in the intelligent small weapon targeting systems 122, 122a described herein.
  • the intelligent small weapon targeting system processes images sequentially. That is, one or more sets of image data are processed in an uninterrupted flow from beginning in a first module and ending in a last module.
  • the system begins the acquisition and analysis processing of a scene upon activation of the small weapon trigger (e.g., a trigger squeeze to a first, engaged position).
  • Upon activation of the trigger, the sensors 170, 174 begin providing data.
  • the data includes first image data.
  • a sensor module 172 may initiate the process of data acquisition from the sensors 170, 174.
  • Prior to queuing the first image data to an image acquisition module 176, the sensor module 172 may further query data from the sensors 174 (e.g., accelerometer motion data samples).
  • the queried data may be attached (i.e., tagged) to the first image data.
  • first image data is designated as target visual n-1
  • the sensor module 172 may tag the target visual n-1 image data with sensor data by linking particular data structures together.
  • the image acquisition module 176 processes the first image data.
  • the processing by the image acquisition module 176 may include filtering out noise, enhancing image features, and segmenting image elements in a process that produces a set of connected components and features.
  • the image acquisition module 176 may engage other modules or sub-tasks to perform processing on the first image data (e.g. , target visual n-1 ) and subsequent image data (e.g., target visual n).
  • a filtering and enhancement module 184 may process the image data by known imaging techniques to smooth edges, extrapolate motion, reduce jitter, and the like.
  • the tagged sensor data may be employed to further filter and enhance the image data.
  • the image acquisition module 176 may include a connection and segmentation module 186.
  • the connection and segmentation module 186 identifies and segments particular features in the first image data and subsequent image data. The segmented features can then be associated or connected. The levels and strength of particular connections may be cumulative through iterative processing of the first and subsequent image data as particular data structures are created and filtered.
  • the image acquisition module 176 may place greater weight to data that is determined to be at or near the location of a target identified in the scene.
  • the image acquisition module 176 initializes first image data including an assumption that the operator is likely to be aiming the small weapon in the general direction of a desired target.
  • the system can select the object that is at the target point in the scope as the desired target.
  • a heuristic based on "pointing time" can be used to disambiguate which target the operator is determined to be pointing to. For example, the operator can point first to a target, and the system can lock and track it. The system will consider this the target unless changes are made. Subsequently, if the system determines that the operator is pointing to a different target, the system can use a determined amount of time to disambiguate whether pointing to the different target is determined to be intentional or un-intentional. During the disambiguation period, the system may track both targets or drop the first target. Later, if the system has maintained tracking of both targets, the system may continue tracking the first target or switch to the new target after a time threshold is exceeded.
  • the operator can also put the system into different target acquisition modes. If the operator is hunting deer or elk, the system can be configured in a "deer or elk" mode to consider any deer or elk in the image as the intended target and not some other object that happens to be in the frame. If the operator is a police officer, the system can be configured in a human mode, which can include either torso or face recognition or both. Then, if the system identifies a human in the image, it will consider this the target and lock on the human to assist the police officer. Similar modes can be entered for flying birds or other game that might be hunted. Also, other image modes can be entered if the operator is using the weapon against other identifiably shaped or posed targets.
  • the user may put the system into "safe use” mode in which it will refuse to fire if a human is within the image and within the target area.
  • In the safe use mode, the system can quickly identify and recognize any people in an image. If the scope is placed in safe use mode, then if a human image is within the range, the system will refuse to let the weapon fire.
  • a deer hunter can place the system in deer mode and into the safe use mode while hunting. If the hunter's buddy or some other person happens to be identified in the image frame when the trigger is pulled, the fire signal will not be sent and the gun will not fire. Even if the intended target, such as deer or elk is in the image, the presence of a human in the image will prevent the weapon from firing.
  • the safe use mode will avoid many accidents that might otherwise happen.
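  • The safe use interlock described above amounts to suppressing the fire signal whenever a human is detected in the frame, even when the intended target class is also present. The sketch below assumes a hypothetical per-frame detection list and confidence cutoff; it is not the disclosed implementation.

```cpp
#include <cstdio>
#include <string>
#include <vector>

// Hypothetical per-frame detections produced by the classifier.
struct Detection {
    std::string label;  // e.g., "human", "deer", "elk"
    double confidence;  // 0..1
};

// In safe use mode the fire signal is withheld if any human is detected with
// sufficient confidence, even when the intended target class is also present.
bool fireAllowed(const std::vector<Detection>& detections,
                 bool safeUseMode,
                 double humanConfidenceCutoff = 0.5) {
    if (!safeUseMode) return true;
    for (const auto& d : detections)
        if (d.label == "human" && d.confidence >= humanConfidenceCutoff)
            return false;  // refuse to fire
    return true;
}

int main() {
    std::vector<Detection> frame{{"deer", 0.92}, {"human", 0.71}};
    std::printf("fire allowed: %d\n", fireAllowed(frame, /*safeUseMode=*/true)); // 0
}
```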
  • the processed image data from first image data or subsequent image data is made available to a target tracking module 178 and a decision engine 164.
  • the processed image data, designated image n, has been filtered and enhanced by the image acquisition module.
  • image n exists as a data structure that is operated on by several modules.
  • the target tracking module 178 performs processing to extract identified background information and segment connected components of an identified foreground.
  • background extraction is performed with two or more frames of image data.
  • Particular configuration settings can variably direct which extraction method is chosen and further direct a determined accuracy level.
  • the target tracking module 178 will discard image data having determined insignificant feature sets or image data tagged with inconsistent sensor data.
  • the target tracking module 178 will make segmented, connected (e.g., clustered), component data structures available to the decision engine 164.
  • the decision engine 164 will perform processing to try and identify a desired target.
  • the decision engine 164 may further provide feedback related to both an identified target and the sight position to the image acquisition module 176 and the target tracking module 178.
  • the feedback allows for performance optimizing techniques to be conducted.
  • the feedback further reduces the computational resources utilized by the system.
  • the target and the sight position can be predicted using techniques such as Kalman filter, condensation filter, or the like.
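  • A full Kalman or condensation filter is beyond a short sketch; the alpha-beta filter below is a simplified constant-velocity predictor in the same spirit, with invented gains and with position expressed in pixels and time in frames.

```cpp
#include <cstdio>

// Simplified constant-velocity predictor (alpha-beta filter), standing in for
// the Kalman or condensation filter mentioned in the text. Gains, units
// (pixels and frames), and the noise model are illustrative assumptions.
struct AlphaBeta {
    double position = 0.0, velocity = 0.0;  // pixels, pixels per frame
    double alpha = 0.85, beta = 0.1;

    void update(double measuredPosition) {
        double predicted = position + velocity;           // predict one frame ahead
        double residual  = measuredPosition - predicted;  // innovation
        position = predicted + alpha * residual;          // correct position estimate
        velocity = velocity  + beta  * residual;          // correct velocity estimate
    }

    // Position predicted `frames` ahead, e.g., to cover the latency between a
    // firing decision and the projectile actually leaving the barrel.
    double predictAhead(double frames) const { return position + velocity * frames; }
};

int main() {
    AlphaBeta target;
    double truth = 100.0;
    for (int frame = 0; frame < 60; ++frame) {
        truth += 3.0;                                       // target drifts right
        target.update(truth + ((frame % 2) ? 0.5 : -0.5));  // small measurement noise
    }
    std::printf("estimated position %.1f, predicted 2 frames ahead %.1f\n",
                target.position, target.predictAhead(2.0));
}
```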
  • the image acquisition module 176 can use the feedback information from the decision engine 164 to adjust the processing of the first image data or subsequent image data in an acquisition phase. For example, by adjusting filtering parameters in determined areas of image data where an object presence has been predicted, the image acquisition module 176 can extract crisper features in subsequent processing iterations.
  • the target tracking module 178 may use stored information from multiple processing iterations to calculate which elements in a set of image data are connected and which ones are independent. Such connections, or clustering, may be later used by the decision engine module 164 to identify prospective targets that can be hit by releasing a projectile at a determined moment in time.
  • the decision engine 164 operates on a set of image data information including raw shape data and macro data. Evolution data from multiple images and sensor data tagged to the image data from multiple images is further associated to raw image data.
  • the decision engine 164 operates to associate the evolution data with both the raw data and the motion data to lock on an identified target.
  • the association of multiple sets of data and types of data from various sensors can be used by the decision engine 164 to provide aiming cues to the operator. The association further can produce a projectile (e.g., bullet) release decision with a higher level of confidence.
  • the decision engine 164 uses artificial intelligence (AI) methods to improve its operation.
  • AI techniques include configuring a neural network to determine what target the operator is aiming at and/or the nature of the target the operator is aiming at. Accordingly, while the decision engine 164 can make a projectile release decision based on raw data from an image acquisition module 176, other available sensor and system input data may also be taken into account. In one example, operator-configured settings are taken into account: if an operator-configured setting so directs, the system can present aiming cues to the operator via the user interface (e.g., move 10 degrees to the left).
  • In the round-robin scheduler 182 of Figure 9, work items can be organized and managed in a queuing system.
  • the queuing system is implemented by creating abstract data structure objects and pointers to the objects.
  • the abstractions, which are possible with high-level software architectures, reduce the amount of data copying and replicating often found in embedded computing systems. Instead of creating copies of data for each task module, the same data within the queue structures can be accessed by multiple task modules. Thus, the queue structures increase the efficiency of the small weapon targeting system by conserving memory and reducing CPU processing cycles.
  • the scheduler updates an execution list and services a determined highest priority work item.
  • the scheduler processing and efficient data sharing of the work item flow provides time and resources used to keep the system operating within acceptable real time limits. Accelerometer-assisted motion compensation and image position prediction further allow the system to compensate for computational latency. That is, each frame of image data can be processed quickly so that target acquisition and projectile release decisions can be made within an acceptable level of real time operation.
  • the efficient scheduler architectures described herein implement a self-balanced computing system.
  • the camera module begins producing image data.
  • the image data does not accumulate and overwhelm the system because the level of processing for each image frame is balanced with processes that implement target acquisition, target locking, decision making, and user interface updating.
  • Some modules process image data in higher volume. Those modules are granted longer time budgets and perform streamlined, dedicated processing. Other modules are designed to process data more quickly, so even with higher volumes, image frames are processed and passed along quickly.
  • accelerometer assisted motion compensation processing operates on sequences of images. Motion is detected through comparisons of temporal images aptly spaced in time. Accordingly, it is expected that in some embodiments, target tracking modules and decision engine modules operate on input queues that are fuller than the input queue of an image acquisition module.
  • One mechanism used by the scheduler to self-balance is a configuration of relative computational balance of the subsystems. That is, each subsystem is configured to process information in a timeframe that is compatible with the other subsystems computation times.
  • the scheduler can be configured to provide time budgets commensurate with the level of computing resources expended to pass one or more sets of image data to a task.
  • the computational latency can be calculated from the time a first frame of image data is received until the time a decision engine projectile release signal or an aiming cue is produced. That is, the time taken by each work item to flow through the whole system across all subsystem task modules.
  • the real time performance of the system can be configured so that desired steady-state equilibrium is reached between the subsystem task modules.
  • the system can recognize and process momentary variations of execution time, motion in the image, relative motion of an image to the small weapon platform, and the execution time entropy of each task. The result of such processing is used by the scheduler to adjust the relative priorities of the task queues and subsystem task module scheduling.
  • the adjustments will include directed degradation of computations provided by the subsystem task modules. That is, particular configurations may allow computations to be completed more quickly. For example, in circumstances where substantial user input or sensor input is present (e.g., heavy shaking may produce a high volume of disparate accelerometer data), full subsystem task module processing may not be compatible with real time operation. In such circumstances, parameters, such as the number of motion compensation iterative calculations, can be adjusted by the scheduler.
  • a "small weapon,” as used herein, includes small firearms, light weapons, non-lethal kinetic systems, and other devices.
  • a small weapon may also be any weapon that fires any of bullets, shot, arrows, darts, sound, light or other electromagnetic energy, water, or some other projectile.
  • the propellant used to fire the projectile may include systems that employ some or all of combustion, chemical reaction, electronic operation, pressure, or any other system.
  • Small firearms are weapons generally designed for individual use. Small firearms include, among other things, revolvers; pistols, whether semi-automatic or fully automatic; rifles, whether single shot, semi-automatic, or fully automatic; sub-machine guns; assault rifles; and light machine guns.
  • Light weapons are designed for use by two or three persons serving as a crew, although some light weapons may be carried and used by a single person or four or more persons.
  • Light weapons generally include, among other things, heavy machine guns, hand-held under-barrel and mounted grenade launchers, portable anti-aircraft guns, portable anti-tank guns, recoilless rifles, portable launchers of anti-tank missile and rocket systems, portable launchers of anti-aircraft missile systems, and mortars of a small or mid-size caliber, for example of less than 100 millimeters.
  • Small weapons may fire a single shot or multiple shots. Small weapons may be loaded and/or fired mechanically, manually, electronically, or the like in a non-automatic, semi-automatic, or fully automatic way.
  • Embodiments of the intelligent small weapon targeting system can be implemented, fully or in part, with embedded electronic systems.
  • the embedded electronic systems are cooperatively coupled to mechanical and/or electro-mechanical parts.
  • non-limiting references may be made to central processing units (CPU), microcontrollers (MCU), digital signal processors (DSP), application specific integrated circuits (ASIC), input/output (I/O) ports, network connectivity ports, memory, logic, circuits, and the like.
  • embodiments describe a CPU and a memory having software.
  • the software is executable by the CPU and operable to execute the method acts.
  • CPU's, MCU's, and other processors/controllers as used herein interchangeably refer to any type of electronic control circuitry configured to execute programmed software instructions.
  • the programmed instructions may be high-level software instructions, compiled software instructions, assembly-language software instructions, object code, binary code, micro-code, or the like.
  • the programmed instructions may reside in internal or external memory or may be hard-coded as a state machine or set of control signals.
  • the internal and external memory may be volatile or non-volatile.
  • Volatile memory includes random access memory (RAM) that may be static, dynamic, or any other type.
  • Non-volatile memory is any non-volatile computer-readable media including, for example, flash memory, phase-change memory, magnetic media such as a hard-disk, an optical disk, a flexible diskette, a CD-ROM, and/or the like.
  • I/O ports include serial, parallel, or combined serial and parallel I/O circuitry compliant with various standards and protocols.
  • the protocols may be proprietary or they may follow an accepted, published standard.
  • Embodiments of the intelligent small weapon targeting system may include computer vision technology.
  • Computer-vision technology typically includes high-resolution camera modules.
  • the camera modules may be low-cost.
  • the camera modules can be developed for integration with many types of devices, including optical scopes for small weapons.
  • advanced memory, computing, and battery technologies have now made it efficient and affordable to store many images and mine, match, and analyze visual data contained in the images.
  • digital camera technology can even be used to recognize determined targets, gestures, and motions.
  • the computer-vision technology described herein in singular or plural as "cameras" includes one or more object detectors. In some embodiments, an object detector includes a charge coupled device (CCD) formed as an array of pixels, although other imaging technologies may be used.
  • the object detector may further employ an optical mechanism for focusing a target on the plurality of object detectors.
  • an infrared projection or other device may be used, separately or in cooperation with the object detectors, to provide "night vision" capabilities.
  • Various lenses and mirrors may be employed.
  • Various focusing and other camera technologies may be employed.
  • Reference throughout this specification to "one embodiment" or "an embodiment" and variations thereof means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment.
  • the appearances of the phrases “in one embodiment” or “in an embodiment” in various places throughout this specification are not necessarily all referring to the same embodiment.

Landscapes

  • Engineering & Computer Science (AREA)
  • General Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Optics & Photonics (AREA)
  • Aiming, Guidance, Guns With A Light Source, Armor, Camouflage, And Targets (AREA)

Abstract

An intelligent small weapon targeting embedded system is sized for portability and coupleable to a small weapon. The system includes a trigger device to produce one or more outputs and a weapon output module. The system includes a camera module to produce image data frames representing a scene in the target direction, an accelerometer to produce motion samples, and a user interface including a visual output device. The system operates in real time to process a first set of image data frames based on a trigger device output, produce processed image data in association with motion samples and the first set of image data frames, identify a target representation in the processed image data, track the representation of the target in a second set of image data frames, cause the visual output device to present an indication of tracking the representation of the target, and assist or make firing decisions.

Description

APPARATUS AND METHOD OF TARGETING SMALL WEAPONS
BACKGROUND
Technical Field
The present disclosure generally relates to targeting of firearms and more particularly but not exclusively relates to an electronic assisted targeting system for firearms.
Description of the Related Art
Traditionally, small weapons have been aimed, or targeted, according to the skill of the operator. The likelihood that the fired projectile successfully struck the target was determined by the skill of the operator as he aimed the firearm.
Figure 1 illustrates an operator 100 targeting a handgun 102 with a front gun sight 104 and a rear gun sight 106 according to the prior art. In Figure 1 , the operator 100 is aiming the handgun 102 at a distant target 108. The proper operation of the gun sights 104, 106 allows the operator 100 to create a line of sight along the barrel of the handgun 102. The line of sight is substantially parallel to barrel. Basically, the operator 100 forms a line in the target direction between his eye, through the nadir of the rear sight 106, through the front sight 104, and to the target 108. If handgun 102 is aimed along such a line of sight, then, but for distance, wind, and other such factors, the operator 100 can reasonably expect that firing the weapon 102 will result in a bullet striking the target 108.
Figure 2 illustrates an operator 100 targeting a grenade launcher 1 10 with an optical scope 1 12. The operator has taken a prone position in order to aim and fire the weapon 1 10. A support 1 14 assists the operator 100 with holding the grenade launcher 1 10 so as to more accurately aim at the target 108; the window of a building in the distance. In this case, the operator or operator 100 of the weapon 1 10 creates a substantially parallel line of sight in the target direction along the barrel of the grenade launcher 1 10. Through the optical scope 1 12, the operator 100 is able to see the target window of the building 108. The optical scope may include reticle marks for aiming and distance measurements, magnifying lenses, parallax reducing adjustments, illuminating light sources, night vision, or other features. If the grenade launcher 1 10 is aimed at the target 108 according the image viewed through the optical scope 1 12, then the operator 100 can reasonably expect that firing the weapon 1 10 will result in a grenade striking the target 108.
Figure 3 illustrates an operator 100 targeting a rifle 1 16 with a laser sight 1 18 according to the prior art. The laser sight 1 18 operates according to similar principles as the mechanical iron sights 104, 106 of Figure 1 and the optical scope 1 12 of Figure 2. The laser sight 1 18 generates a beam of generally non-diverging laser light in the target direction that is substantially parallel to the barrel of the weapon 1 16. The operator 100 points the weapon 1 16 in the desired direction and looks for the point 120 where the laser beam strikes the target. In Figure 3, if the rifle 1 16 is aimed such that the laser point 120 strikes the target 108 as shown, then, but for wind, distance, or other such factors, firing the rifle 1 16 will result in the bullet striking the target.
The conventional targeting systems and improvements to them rely on human factors such as vision, stability, speed, and judgment along with situational factors such as ambient or other light, urgency, risk, and the like. Even with the improvements in the prior art, as described in Figures 1-3, human factors are required to decide the exact instant to fire and then to perform the act of firing, which, depending on the skill of the operator, may occur at a moment quite different from the decision to fire.
BRIEF SUMMARY
Embodiments of the intelligent small weapon targeting system disclosed in the present application illustrate and describe electronic and electro-mechanical devices and methods to improve targeting of small weapons. That is, in some cases, the small weapon becomes an intelligent system that automatically locks on a target and fires when ready. Fewer bullets need be spent while targets are hit faster and more precisely.
The intelligent small weapon targeting system provides an artificial intelligence (AI) layer to participate in the interaction between the small weapon targeting system (e.g., scope), the input (e.g., trigger), the small weapon operator, and the small weapon output (e.g., firing pin strike, combustion, bullet release). The intelligent small weapon targeting system interprets the operator's intent and can aid the operator in timing the firing action or even automatically releasing the bullet when a target is acquired and a shot is deemed effective. In some embodiments the AI layer is partially or completely transparent to the small weapon operator.
Generally speaking, and described in more detail herein, the intelligent small weapon targeting system augments the effective operation of a conventional small weapon. The operator typically does not need additional training, and in some cases, even reduced training can be provided. On a small weapon equipped with an embodiment of the intelligent small weapon targeting system, the operator may pull the trigger to fire the small weapon in a conventional manner, and many of the other functions of the small weapon continue to operate just as before; however, the aiming and, in some embodiments, the instant of firing, are assisted by the AI system.
Embodiments of the intelligent small weapon targeting system include an AI layer running in an embedded electronics system coupled to or integrated with the small weapon. The intelligent small weapon targeting system can acquire input data via one or more cameras, accelerometers, environment sensors, and the like. It can use a series of frames of image input data from the cameras to identify and track a target. It can also correlate the image input data with the motion input data from the accelerometers to determine particular information such as the relative motion of the background, the designated target, and characteristics of the weapon itself. In addition, the motion data, environment data, and performance information related to the small weapon can be used to calculate a trajectory and location in the image data where a fired projectile would travel and strike. The intelligent small weapon targeting system can also assess the information, determine a particular instant or range of time during which to fire the weapon so as to hit the target, and present a firing decision signal to the operator. In some cases, the intelligent small weapon targeting system will automatically fire the small weapon.
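By way of a non-limiting illustration, the following Python sketch outlines one possible shape of the processing loop just described: image frames are correlated with motion samples, an impact point is predicted, and a firing decision is asserted when the predicted miss distance is small enough. The function names, the pixel-based geometry, and the numeric values are illustrative assumptions only and are not the disclosed implementation.

```python
"""Illustrative sketch only: simplified stand-ins for the camera, accelerometer,
and ballistic calculations described above; not the disclosed implementation."""
import random
import time

def read_frame():
    # Stand-in for a processed camera frame: the (x, y) pixel position of the
    # tracked target that a real vision pipeline would produce.
    return (320.0 + random.uniform(-4, 4), 240.0 + random.uniform(-4, 4))

def read_motion():
    # Stand-in for accelerometer samples: weapon jitter expressed in pixels/frame.
    return (random.uniform(-2, 2), random.uniform(-2, 2))

def predicted_impact(motion, wind_drift_px=1.5):
    # Highly simplified "trajectory": aim point offset by weapon motion and wind.
    return (320.0 + motion[0] + wind_drift_px, 240.0 + motion[1])

def targeting_loop(max_frames=200, lock_radius_px=3.0):
    """Acquire/track the target, predict the impact point, and assert a
    firing decision when the predicted miss distance is small enough."""
    for _ in range(max_frames):
        target = read_frame()              # image data in the target direction
        motion = read_motion()             # motion samples correlated with the frame
        impact = predicted_impact(motion)  # where a projectile fired "now" would strike
        miss = ((impact[0] - target[0]) ** 2 + (impact[1] - target[1]) ** 2) ** 0.5
        if miss <= lock_radius_px:
            print("target locked: firing decision asserted")
            return True
        time.sleep(0.01)                   # ~100 Hz frame cadence
    return False

if __name__ == "__main__":
    targeting_loop()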
The intelligent small weapon targeting system can compensate for many factors such as operator or environment induced tremors, distance, relative movement of the small weapon or target, and other environmental factors.
With the intelligent small weapon targeting system, the small weapon will perform better even in low visibility conditions and can better compensate for optical effects like parallax, physical effects, such as bullet drop, wind, and other effects.
In some embodiments, the intelligent small weapon targeting system may also account for a configurable "confidence" threshold, which can speed or delay the actual firing of the small weapon to a point when the system has concluded, to a determined confidence threshold, that the intended target will be hit. As a result, the intelligent small weapon targeting system can remove substantial variability of human capabilities from the proper and accurate operation of the small weapon.
Some embodiments of the intelligent small weapon targeting system will have a manual but "assisted" mode. In such embodiments, rather than firing automatically when ready, the system will provide the operator with visual cues, auditory cues, tactile cues, or some other indication that the system has identified a reasonable time to fire.
The intelligent small weapon targeting system can further accommodate different and variable tactical requirements. That is, in some cases, parameters that direct the operation of the system are determined, pre-loaded, and unchangeable, and in other cases, some or all of the parameters may be changed by an operator of the small weapon.
Many benefits are derived from the small weapon equipped with the intelligent small weapon targeting system described herein. Traditionally, a small weapon is considered to have the capability to strike targets of particular size, moving at a particular speed, and located within a particular distance based on the type, size and construction of the particular small weapon.
Accordingly, as the target's size, speed, or distance approaches the upper limits of the small weapon's capability with traditional targeting systems, even a highly skilled operator may miss, take a long time to aim and fire, or wish to switch to a different weapon. With the inventive targeting system, however, a particular small weapon can be used with a greater assurance of achieving the desired result. Further, weapons with this system can now be used in situations in which they were previously considered too small or not accurate enough. In addition, the time taken by a highly skilled operator to accurately aim and fire the small weapon is greatly reduced while achieving an even higher success rate.
A small weapon equipped with the intelligent small weapon targeting system described herein can be contrasted with the performance of even a skilled operator using the same small weapon while shooting at small or distant moving targets. When a small weapon is equipped with the intelligent small weapon targeting system, nearly any target within the weapon's specification limits of target size, speed, and distance can be hit quickly with substantial accuracy. In addition, targets previously not thought to be within the range of a particular small weapon based on their distance, movement, or size can also be hit with substantial accuracy.
Beneficially, a novice operator of a small weapon equipped with the intelligent small weapon targeting system can be just as quick and accurate as a seasoned veteran, or even more so. For example, when military small arms are equipped with the intelligent small weapon targeting system, a soldier newly graduated from boot camp can shoot as accurately as a seasoned special operations warrior because the difference in skill between the two fighters is evened out by the capability of the intelligent small weapon targeting system.
The intelligent small weapon targeting system can reduce the threshold of human skill necessary for effective use in military, paramilitary, and non-military situations. Human reflexes can only be pushed so far, training is expensive, and human resources are precious and difficult to replace. For example, small weapons are often used in firefight situations where, under extreme time pressure, accuracy of a shot makes a life saving difference. In such situations, the effective use of an intelligent small weapon targeting system can improve the outcome regardless of the amount of training or experience that the operator has received.
The present embodiments address several problems of traditional targeting of small weapons by using new procedures and devices that provide more accurate targeting information to a controllable firing mechanism.
Targeting is improved with methods and apparatus that electronically recognize and track potential targets. Visual feedback corresponding to the recognition and tracking is provided to the operator of the small weapon. In some cases, audio feedback is provided to the operator, and in some cases, tactile feedback may also be provided to the operator.
An intelligent small weapon targeting system may be summarized as including an imaging system coupled to a small weapon, a memory configured to store a program to target the small weapon, and a processor. The processor is operable to execute instructions of the program, which direct the processor to process a first set of imaging data generated by the imaging system to produce processed image data, identify a target in the processed image data, predict whether a projectile fireable from the small weapon will hit the target, and track the target in a second set of imaging data.
In some embodiments, a method of targeting a small weapon may be summarized as including acquiring a first set of image input data produced by one or more cameras; acquiring motion data produced by one or more accelerometers; acquiring environment data produced by one or more environment sensors; correlating the motion data with the first set of image input data to produce processed image data; identifying a target within the processed image data; tracking the target in a second set of image input data; calculating a location in the second set of image input data where a projectile fired from the small weapon would strike if the small weapon were fired, the calculating being performed based on the motion data, the environment data, and performance information related to the small weapon; determining, based on the location in the second set of image input data, a time instant for firing the weapon to hit the target; and presenting a firing decision signal representative of the time instant.
Computer-readable medium embodiments having a program to target a small weapon may be summarized as the program comprising logic configured to perform the steps of: enabling an imaging system coupled to the small weapon; processing a first set of imaging data generated by the imaging system; identifying a target in the first set of imaging data, the identified target located in a projectile's path, the projectile being fireable from the small weapon; and tracking the target in a second set of imaging data.
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
Non-limiting and non-exhaustive embodiments are described with reference to the following drawings, wherein like labels refer to like parts throughout the various views unless otherwise specified. The sizes and relative positions of elements in the drawings are not necessarily drawn to scale. For example, the shapes of various elements and angles are not drawn to scale, and some of these elements are enlarged and positioned to improve drawing legibility. Further, the particular shapes of the elements as drawn are not intended to convey any information regarding the actual shape of the particular elements and have been solely selected for ease of recognition in the drawings. One or more embodiments are described hereinafter with reference to the accompanying drawings in which:
Figure 1 illustrates an operator targeting a handgun with front and rear gun sights in a conventional way;
Figure 2 illustrates an operator targeting a grenade launcher with an optical scope in a conventional way;
Figure 3 illustrates an operator targeting a rifle with a laser sight in a conventional way;
Figure 4 illustrates components of an intelligent small weapon targeting system;
Figure 5A illustrates components of an intelligent small weapon targeting system in more detail;
Figure 5B illustrates an intelligent small weapon targeting system embodied in an electronic-optical scope;
Figure 6 illustrates an automatic projectile release (APR) feature flowchart;
Figure 7 illustrates a system architecture including non-limiting embodiments of hardware modules, software modules, and signals;
Figure 8 illustrates an activity diagram of an embodiment of the intelligent small weapon targeting system architecture; and
Figure 9 illustrates a functional flow sequence diagram for a round robin scheduler.
DETAILED DESCRIPTION
According to various embodiments as described herein, improvements are provided to targeting small weapons that reduce the reliance on human and situational factors.
In the embodiments now described, the intelligent small weapon targeting system tracks the path of a target in real time and presents a firing decision upon identifying the target with a threshold level of accuracy. The system in the present embodiment includes an electro-mechanical trigger and a computer vision system with one or more cameras aimed in a target direction and one or more CPUs to process the acquired images. Additional environment sensors may also be included.
When the trigger of the small weapon is squeezed, and when the intelligent small weapon targeting system is operational (i.e., powered ON and active), the computer vision system is engaged to track a target. After successfully identifying and locking on a target, the intelligent small weapon targeting system will automatically present a firing determination. In a fully automatic mode, the firing determination results in a firing command to the weapon output. In a semi-automatic mode, the firing determination results in an indication to the weapon operator. If the intelligent small weapon targeting system is not operational (e.g., turned OFF or malfunctioning), the small weapon operates according to its conventional, mechanically operated firing scheme.
Figure 4 illustrates components of an intelligent small weapon targeting system 122 that includes an electronic scope 124, an electromechanical trigger 160 and a trigger actuator 158. In the embodiment of Figure 4, an electronic scope 124 has integrated electronic components. The electronic scope 124 may be formed and packaged in a conventional electronic scope housing or some other electronic scope housing. Alternatively, the electronic scope and the electronic components may be formed and packaged in an arrangement that is cooperatively coupled to the electronic scope.
The electro-mechanical trigger 160 and trigger actuator 158 illustrated in Figure 4 interact with the electronic scope to fire the weapon. In one embodiment, the electro-mechanical trigger 160 has two positions, Engage and Fire. The electro-mechanical trigger 160 may also have a third position, which can be described as inactive or not engaged. The third position applies when the operator is holding the weapon but has not yet aimed at a target and the system is not yet in operation. This third position may actually be the resting position of the trigger, and it is considered inactive. Once the operator identifies a target, he can cause the weapon to enter the engaged position. In one example, he will enter the engaged position by touching the trigger lightly with his finger. The trigger might be slightly depressed, or it may remain stationary and sense the presence of his finger based on heat, blood flow, fingerprint data along the length of his finger, or other biometric data, causing the system to enter the engaged position without requiring the trigger to move. Once the trigger enters the engaged position, it will send a signal on engage line 159 and the system will begin to interact with the electronic scope as described herein. When the operator decides to fire, he may cause the trigger 160 to enter the fire mode. In one embodiment, he will do this by applying more pressure to the electro-mechanical trigger 160, and the system will enter the fire mode.
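By way of a non-limiting illustration, the trigger positions described above can be thought of as a small state machine. The following Python sketch shows one possible mapping from finger pressure to the inactive, engaged, and fire positions; the class, callbacks, and pressure thresholds are hypothetical assumptions rather than the disclosed design.

```python
# Illustrative sketch only: hypothetical class, callbacks, and thresholds.

class TriggerState:
    INACTIVE = "inactive"   # resting position; system not yet engaged
    ENGAGED = "engaged"     # light touch or partial pull; engage line asserted
    FIRE = "fire"           # further pressure; request to fire

class ElectroMechanicalTrigger:
    def __init__(self, on_engage, on_fire):
        self.state = TriggerState.INACTIVE
        self._on_engage = on_engage   # e.g., enable the electronic scope
        self._on_fire = on_fire       # e.g., present the fire request

    def update(self, pressure):
        """Map a normalized finger pressure (0.0-1.0) onto the trigger positions."""
        if pressure >= 0.8 and self.state != TriggerState.FIRE:
            self.state = TriggerState.FIRE
            self._on_fire()
        elif 0.2 <= pressure < 0.8 and self.state == TriggerState.INACTIVE:
            self.state = TriggerState.ENGAGED
            self._on_engage()
        elif pressure < 0.2:
            self.state = TriggerState.INACTIVE

# Example: a partial pull engages the system, a full pull requests firing.
trigger = ElectroMechanicalTrigger(lambda: print("engage line 159 asserted"),
                                   lambda: print("fire requested"))
trigger.update(0.5)
trigger.update(0.9)
```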
In some embodiments, electro-mechanical triggers will have three or more operating positions and in other embodiments, electro-mechanical triggers will have only a single operating position. In addition, electromechanical trigger 160 may be substantially an electronic component with very little or no mechanical structure. It may be composed solely of electronic circuitry that forms the trigger mechanism. Thus, in some cases, electromechanical trigger 160 may be operated by a user manually displacing the trigger 160 to enter the different modes of operation, and in other cases, the electro-mechanical trigger 160 may enter the different modes of operation by receiving various electrical signal inputs.
In some embodiments, the electro-mechanical trigger 160 is formed in a single device, visually resembling a conventional small weapon trigger. In other embodiments, the electro-mechanical trigger 160 may have a different physical shape or even no exposed physical structure at all. For example, in some cases the electro-mechanical trigger 160 is configured to receive electronic input. It might also be a simple bar or sensor on the stock, below the barrel, or some other location. When the weapon is held in an intended position for use, this might be sufficient to enter the engaged mode.
In other embodiments, the electro-mechanical trigger 160 is formed from two or more distinct and separate devices. In such embodiments, a first device of the electro-mechanical trigger 160 can provide an indication of a first position, a second device of the electro-mechanical trigger 160 can provide an indication of a second position, and so forth for each trigger position implemented in the small weapon targeting system 122. In still other embodiments, the different position-input-producing devices of the electro-mechanical trigger 160 can be cooperatively configured into any number of separate devices.
An electrical signal input to the electro-mechanical trigger 160 or to other devices of the intelligent small weapon targeting system 122 includes any known electrical signaling methods. The signaling can be electrical, optical, or take some other form. The signals can be asserted by the presence or absence of a determined voltage.
The electronic scope 124 of Figure 4 includes a target acquisition and tracking module 126 and a firing decision engine 128. The target acquisition and tracking module 126 operates on imaging data input produced by active components of the electronic scope 124 (e.g., a CCD or other imaging device). From the imaging data, a particular target can be identified (i.e., acquired), and the target can be tracked as additional imaging data is captured and processed.
In one example of acquiring a particular target from a set, a duck hunter may notice a flurry of activity when a flock of ducks takes flight. The hunter can raise her rifle, which is equipped with an intelligent small weapon targeting system 122 coupled to an electronic scope 124, and point the rifle toward the flock. The hunter may place her eye so as to see the flock through the electronic scope 124, or she may merely point her weapon with scope pointed in the direction of the flock. The small weapon targeting system 122 is able to identify one particular duck in the center of the viewing window of her electronic scope 124 and consider this a potential target.
In one embodiment of Figure 4, a partial press of the electromechanical trigger 160 to a first position will engage the intelligent small weapon targeting system 122 so as to enable the electronics of the electronic scope 124. Subsequently, the target acquisition and tracking module 126 can retrieve a first set of imaging data frames representing a scene in the target direction that is viewed through the electronic scope 124. The target acquisition and tracking module 126 can further identify a representation of the target (e.g., a specific duck) in the imaging data. As a second set of imaging data frames from the electronic scope 124 are input, and as additional subsequent imaging data is input, the target (e.g., the identified duck) can be tracked.
The target acquisition and tracking module 126 provides image data output to the firing decision engine 128. Based on the operational parameters (e.g., user settings, environmental data, etc.) of the intelligent small weapon targeting system 122, which are accessible by the firing decision engine 128, a real time, dynamic determination is made as to the destination of a projectile if the projectile were fired "now," and whether or not the projectile would hit the target (i.e., the specific duck) being tracked by the target acquisition and tracking module 126. When the determination is made by the firing decision engine 128 that the target would be hit if the projectile was immediately launched, then the firing decision engine 128 will assert a "locked" indication.
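As a non-limiting sketch of the kind of determination described above, the Python fragment below asserts a "locked" indication when, over a window of recent frames, the fraction of predicted hits meets a confidence threshold. The target radius, window contents, and threshold value are illustrative assumptions, not the disclosed logic.

```python
# Illustrative sketch only: hypothetical names and simplified image-plane geometry.

def would_hit(impact_xy, target_xy, target_radius_px):
    """True if the predicted impact point falls within the target outline."""
    dx = impact_xy[0] - target_xy[0]
    dy = impact_xy[1] - target_xy[1]
    return (dx * dx + dy * dy) ** 0.5 <= target_radius_px

def firing_decision(history, target_radius_px=5.0, confidence_threshold=0.9):
    """history: recent (predicted_impact, tracked_target) pixel pairs.
    Assert 'locked' (return True) when the fraction of recent frames that
    would have hit meets or exceeds the configured confidence threshold."""
    if not history:
        return False
    hits = sum(1 for impact, target in history
               if would_hit(impact, target, target_radius_px))
    return hits / len(history) >= confidence_threshold

# Example: nine of the last ten frames predict a hit, so the lock is asserted.
recent = [((100, 100), (101, 99))] * 9 + [((100, 100), (140, 90))]
print(firing_decision(recent))   # True
```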
In some embodiments, the firing decision engine 128 provides particular predictive feedback data to the target acquisition and tracking module 126. The feedback data, which relates to a predictive position of the target, may be used by the target acquisition and tracking module 126 to enhance the viewing, tracking, and aiming characteristics for the weapon's operator. For example, audible or visual enhancements to beep, colorize, flicker, blink, apply a border around the target, or provide some other real time indication of the acquired, tracked, and locked target can be presented to the operator through the electronic scope 124.
Once the determination is made by the firing decision engine 128 that the selected target would be hit, a "Fire" determination signal is output on line 161 and presented to the electro-mechanical trigger 160. If the electro-mechanical trigger 160 of Figure 4 is also in the fire position, the combination of the trigger being in the fire position, for example, a full trigger press, and the firing determination from the electronic scope 124 on line 161 will permit the intelligent small weapon targeting system 122 to present an affirmative firing directive on line 157 to the trigger actuator 158.
Referring further to the duck hunter example, the intelligent small weapon targeting system 122 of Figure 4 is used to accurately acquire and track a specific duck as a desirable target when the hunter partially presses the electro-mechanical trigger 160 to a first position. Upon acquisition and tracking of the duck targeted by the target acquisition and tracking module 126, the firing decision engine 128 evaluates environmental conditions, system parameters, user parameters, and other parameters and determines whether or not an immediate firing of the weapon will result in hitting the duck with a substantial likelihood of success that exceeds a determined threshold level. If so, then a "Fire" (e.g., "locked") determination is output on line 161 to the electromechanical trigger 160. The locked determination may also result in an indication to the hunter through the electronic scope 124, such as a beep or flashing icon initiated via feedback to the target acquisition and tracking module 126.
If the hunter has previously pressed the electro-mechanical trigger 160 fully to the second, fire position when the "Fire" signal arrives, the trigger will output a fire signal on line 157 at the instant it receives the signal from the electronic scope 124. The signal will be presented to the trigger actuator 158, and her rifle will fire with a substantial likelihood that the duck will be hit and brought to the ground. On the other hand, if the fire signal is present on line 161 at the time the trigger is put into the fire mode by the operator, then this will cause the trigger 160 to immediately output a signal on line 157 to fire the weapon.
In some cases, the electro-mechanical trigger 160 has an override position that permits the weapon operator to override the intelligent small weapon targeting system 122 and fire the weapon according to the will of the operator. The override can occur as the result of a third position in the electro-mechanical trigger 160, a separate trigger, or by some other mechanism.
Additionally, the override can occur as the result of the intelligent small weapon targeting system 122 or some part thereof being deactivated, disengaged, or otherwise determined to be malfunctioning. If the trigger 160 is in override mode, then upon the operator placing the trigger in the fire position, this will output the fire signal on line 157 and cause the weapon to immediately fire.
Figure 5A illustrates components of an intelligent small weapon targeting system 122 in more detail. One or more central processing units (CPU) 134 are coupled to one or more internal and/or external memories 136 via a bus 138. Bus 138 includes one or more wired or wireless data paths that communicatively couple some or all of the internal and/or external components of the intelligent small weapon targeting system 122. The various components shown in Figure 5A may be part of the target acquisition and tracking system 126, part of the firing decision engine 128, or separate components from either one of these. The components of Figure 5A might be considered part of the electronic scope 124 or separate components from the scope 124.
Input/output (I/O) ports 140 permit the intelligent small weapon targeting system 122 to output data and receive data from external sources. In some cases, the I/O ports 140 are cooperatively coupled to components of the targeting system and other components not shown in Figure 5A. For example, in some cases, the I/O ports 140 are coupled to keypads, touchscreens, displays, feedback devices, alarms, aiming controllers, gyroscopes, computing devices, distance measuring devices (e.g., Light Detection And Ranging (LIDAR) devices), or other components operative within the targeting system. In some cases, the I/O ports 140 facilitate the input of new software programs, initialization parameters, control data, and the like.
The I/O ports 140 of Figure 5A include general purpose I/O ports that are configured for determined operation with devices of the intelligent small weapon targeting system 122. The I/O ports 140 also include communication ports that may follow one or more standardized or custom protocols such as RS-232 or RS-485 serial communications, universal serial bus (USB), parallel port, IEEE-1394 communication, and the like. In some embodiments, the I/O ports 140 facilitate the output of recorded data or other parameters of the intelligent small weapon targeting system 122. For example, the intelligent small weapon targeting system 122 may capture image data in memory 136 for some period of time before a projectile is fired from the small weapon. Additional image data may also be captured during a projectile firing and for a time period after a projectile has been fired. The time period before data capture and the time period after data capture may be 10 seconds, 5 minutes, or some other time period. In some cases, the image data may be captured at a higher resolution than when data is not being captured, and in other cases, the resolution may be lower. Additional data related to the targeting system such as environmental parameters, user parameters, and the like may also be captured.
After the stored data is exported from the targeting system via I/O ports 140, the data may be analyzed. A full re-creation or post-event study of a projectile firing event can subsequently be created. For example, in one embodiment, a video showing events before, during, and after a projectile is fired may include high or low resolution images, moving images, and audio. Additional data related to the video may include settings and recorded values related to environmental sensors, accelerometers, and the like. The additional data may be synchronized in time with the video.
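One way such pre-event and post-event capture could be organized, shown purely as an illustrative sketch with assumed buffer sizes and field names, is a ring buffer that retains the most recent samples until a firing event marks the start of an append-only record:

```python
# Illustrative sketch only: assumed buffer sizes, field names, and JSON export.
from collections import deque
import json
import time

class EventRecorder:
    def __init__(self, pre_seconds=10, sample_rate_hz=30):
        # Ring buffer holding roughly the last `pre_seconds` of samples.
        self._pre = deque(maxlen=pre_seconds * sample_rate_hz)
        self._post = []
        self._firing = False

    def record(self, frame_id, sensor_values):
        sample = {"t": time.time(), "frame": frame_id, "sensors": sensor_values}
        (self._post if self._firing else self._pre).append(sample)

    def mark_firing(self):
        # Keep everything from the firing event onward in the append-only list.
        self._firing = True

    def export(self):
        # Data handed to the I/O ports 140 for post-event re-creation or study.
        return json.dumps({"before": list(self._pre), "after": self._post})

recorder = EventRecorder()
recorder.record(1, {"wind_mps": 3.2, "accel_g": 0.1})
recorder.mark_firing()
recorder.record(2, {"wind_mps": 3.1, "accel_g": 0.4})
print(recorder.export())
```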
The intelligent small weapon targeting system 122 may include audio devices 142. The audio devices may be speakers or other devices capable of reproducing a wide range of frequencies in the audio spectrum. For example, if the intelligent small weapon targeting system 122 includes a feature for voice commands to direct a small weapon operator, the audio device 142 may project the voice commands.
In other embodiments, the audio devices 142 are piezo or like devices that project tones of one or more frequencies to alert the small weapon operator to determined events. The tones may be single beeps, interval beeps, or solid tones. In one embodiment, a beeping series of tones is sounded as a target is acquired and locked. The beeps may rise in volume and/or frequency to indicate increasing confidence as the target confidence approaches a determined threshold level. When the confidence threshold is met, the beep may change to a solid tone.
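A minimal sketch of such a confidence-to-tone mapping, assuming a hypothetical audio interface in which a beep interval of zero denotes a continuous tone, might look like the following:

```python
# Illustrative sketch only: hypothetical audio interface and numeric mapping.

def tone_for_confidence(confidence, threshold=0.9):
    """Return (beep_interval_s, volume) for the audio device 142.
    A beep interval of 0 is treated as a continuous (solid) tone."""
    if confidence >= threshold:
        return 0.0, 1.0                   # solid tone: target locked
    interval = 1.0 - 0.8 * confidence     # beeps speed up as confidence rises
    volume = 0.3 + 0.7 * confidence       # and grow louder
    return interval, volume

for c in (0.2, 0.6, 0.95):
    print(c, tone_for_confidence(c))
```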
Many embodiments of the intelligent small weapon targeting system 122 include a display or other visual output device 144. The display 144 may be a mono-color, gray-scale (e.g., black and white), or multi-color display. The display 144 may have an integrated touch-screen, or a touchscreen-capable device may be cooperatively coupled. In some cases, selected icons may be integrated with the display. For example, display icons may include a battery indicator, a target outline delineator (e.g., a circle, a square, a rectangular box icon, etc., to emphasize a target), a day or night ambient light sensor indicator, operating mode indicators, and others. The display 144 may be a liquid crystal display (LCD), light emitting diode (LED), or some other technology. The display 144 may have a portrait, landscape, square, or another orientation. The display may have quarter video graphics array (QVGA), half VGA, or full VGA resolution, or some other high resolution or low resolution configuration. In some cases, the display may be a substantially transparent display only having particular icons affixed over the optical viewing area of an electronic or optical scope 112, 124.
Embodiments of the intelligent small weapon targeting system 122 of Figure 5A include one or more camera devices 146a-n to support, for example, the computer-vision technology described herein. The cameras 146a-n are typically aimed in a target direction and configurable to provide image data to the targeting system.
In some cases, the cameras 146a-n and/or the display 144 are mounted in a straight line or in parallel. In other cases, the cameras 146a-n and/or the display 144 are configurably mounted such that the cameras 146a-n can be aimed in a first target direction while a display 144 is aimed in a second, different direction. For example, if a small weapon is positioned around a corner or above an obstacle, the cameras 146a-n positioned in a first direction can provide image data while the small weapon operator, from a safer position, views the display 144, which is remotely mounted in a second, different direction.
The cameras 146a-n may include charge coupled devices (CCD), complementary metal-oxide semiconductor devices (CMOS), or some other image sensor technology. The imaging sensors may be arranged as an array of pixels or in some other configuration. The imaging sensors are configurable to provide a single image data frame or a plurality of data frames (e.g., a series of sequential images). The number of pixels in a camera 146a-n array may determine that the camera 146a-n is configured as a high resolution camera, a low resolution camera, or some other resolution.
Embodiments of cameras 146a-n may also include night vision configurations. The night-vision capabilities may be operative as active or passive configurations or with a different technology that permits the capture of image data in low light or dark conditions. Embodiments of cameras 146a-n may include thermal imaging devices. Embodiments of cameras 146a-n may include image enhancing lenses.
The intelligent small weapon targeting system 122 may include one or more environment sensors 148a-n configured to produce environment data samples. The environment sensors 148a-n may sense temperature, wind, humidity, altitude, air density, air pressure, ambient light, flashing light, or other environmental conditions. The sensors 148a-n provide analog or digital environmental data to the targeting system 122. The environmental data may be cooperatively used with other data by the CPU 134 to calculate target acquisition, target lock, projectile trajectory, target distance, confidence threshold, and many other parameters.
Embodiments of the intelligent small weapon targeting system 122 may include an accelerometer module having one or more accelerometers 150a-n. Accelerometers 150a-n may be configured to provide motion data information samples related to the motion of the small weapon. For example, if the weapon operator is moving the weapon intentionally while tracking a target or shaking the weapon in reaction to his current predicament, the accelerometers can provide motion data to the targeting system 122. The targeting system can cooperatively use the motion data to acquire, track, and lock on a target. In some cases, the small weapon is being aimed from a moving platform. The motion data from accelerometers 150a-n, along with environmental data from environment sensors 148a-n in some cases, can be used in acquiring, tracking, and locking on a target. Further, the motion and/or environment data can be used to determine a projectile's calculated destination if the small weapon were fired immediately.
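Purely as an illustrative sketch, the fragment below shows one simplified way accelerometer samples could be folded into the image-based tracking, by double-integrating acceleration over a frame interval and shifting the tracked position to cancel weapon-induced motion. The sampling interval, pixel scaling, and function names are assumptions, not the disclosed method.

```python
# Illustrative sketch only: assumed sampling interval, pixel scaling, and names.

def motion_offset_px(accel_samples, dt, pixels_per_meter=500.0):
    """Estimate how far the scene shifted (in pixels) over one frame interval
    by double-integrating accelerometer samples given in m/s^2."""
    velocity = 0.0
    displacement = 0.0
    for a in accel_samples:
        velocity += a * dt
        displacement += velocity * dt
    return displacement * pixels_per_meter

def compensate(target_px, accel_x, accel_y, dt=0.001):
    """Shift the tracked target position to cancel weapon-induced motion."""
    dx = motion_offset_px(accel_x, dt)
    dy = motion_offset_px(accel_y, dt)
    return target_px[0] - dx, target_px[1] - dy

# Example: a small tremor along x shifts the image; the track is corrected.
print(compensate((320.0, 240.0), accel_x=[0.5] * 10, accel_y=[0.0] * 10))
```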
Embodiments of the intelligent small weapon targeting system 122 may include one or more system outputs 152a-n. System outputs 152a-n include outputs to control the small weapon, configure the small weapon, and inform the small weapon operator. For example, in some embodiments, the CPU 134 can direct nozzles (e.g., solenoids, actuators, etc.) to move the barrel in order to maintain tracking of a target. In other embodiments, micro electro-mechanical system (MEMS) devices such as gyroscopic devices are directed to help track a target. System outputs 152a-n also may include devices to assist the image data producing cameras 146a-n such as infrared beam producing devices for night vision, thermal imaging, autofocus, etc., ultrasound devices, and other devices.
Tactile outputs 154 are also included in some embodiments of the intelligent small weapon targeting system 122. Tactile outputs 154 can provide cues to the small weapon operator related to the acquisition, tracking, and locking on of a target. Tactile outputs 154 can provide cues as to the confidence level of a target being tracked. Vibrations, pulses, or other tactile outputs that change in frequency or intensity may indicate that a target is acquired, tracked, and that a determined confidence threshold has been met.
In some embodiments, the intelligent small weapon targeting system 122 includes one or more user input devices 156a-n. The user input devices may include sliding switches, push buttons, scroll wheels, rotating knobs, and the like connected to electronic input devices such as switches, potentiometers, capacitive input devices, and other components. The input devices 156a-n may be configured by a small weapon operator to provide particular determined parameters for use by the targeting system. For example, a small weapon operator can manipulate input devices 156a-n to provide an override to the targeting system, a safety lockout of the small weapon, a threshold of confidence level, an operating mode, and many others. Additionally, the input devices 156a-n may be used to provide data inputs to calibrate environment sensors 148a-n, accelerometers 150a-n, cameras 146a-n, and other devices. Additionally still, the input devices 156a-n may be used to power on/off the system, enable/disable the system, change operating modes, change display views on the display 144, store, review, and/or delete data from memory, and perform many other functions.
In embodiments of the intelligent small weapon targeting system 122 of Figure 5A, a trigger input device 160 provides a control input. In many cases, the trigger input device includes a rounded lever arranged to conform to a human finger. In many cases, part of the trigger input device 160 will be observable and operable on a small weapon. In such cases, the trigger input device 160 will appear to resemble the trigger of a conventional small weapon. In such cases, the trigger input device 160 is spring-biased in such a way as to resistively oppose the lateral motion of the trigger input device 160 when pulled with a finger closing into the palm. In such embodiments, the trigger input device 160 of the intelligent small weapon targeting system 122 may operate in a way that is nearly indistinguishable from the trigger operation of a conventional small weapon.
In many embodiments, the intelligent small weapon targeting system 122 will provide feedback to an operator to indicate the recognized position of the trigger input device 160. In some embodiments, visual feedback may be provided. In other embodiments, different or additional feedback may be provided. For example, in some cases the trigger input device 160 provides one or more mechanical or electronic tactile feedback outputs during the time the trigger input device 160 moves laterally. For example, when the trigger input device 160 moves, one or more mechanically or electronically produced tactile outputs may be felt by the small weapon operator. In some cases, audio and/or video outputs may also be produced when the trigger input device 160 moves. When the trigger input device 160 advances to further lateral positions, one or more additional outputs may be produced. The additional outputs may be the same or similar to the outputs produced when the trigger input device 160 first begins to move.
In one embodiment, the trigger input device 160 is an electromechanical device. The trigger input device 160 provides a trigger input to a small weapon in a conventional manner when the intelligent small weapon targeting system 122 is appropriately configured, powered off, disabled, or not functioning. In such cases, for example, a small weapon operator will pull the trigger input device 160 and the small weapon will fire a projectile.
In such an embodiment, when the intelligent small weapon targeting system 122 is appropriately configured, the trigger input device 160 will cause a feedback output to the operator when the trigger input device 160 is advanced to a first position. Subsequently, the trigger input device 160 may also cause feedback outputs to the operator when the trigger input device 160 is advanced to further positions, for example, to a second position, a third position, or another position.
The feedback output may include a recognizable mechanical output felt through the operator's finger. The feedback output may include a beep or other audio output. The feedback output may include an indication on a visual display such as the illumination of an icon or shaped indicator (e.g., a square shape located so as to surround a target viewable on an optical or electronic scope), a fading or intensifying on the visual display, or some other visual indicator.
The feedback output of the trigger input device 160 may be produced mechanically or electronically. For example, in some cases, the trigger input device 160 is configured with electrical contacts. When the trigger input device 160 moves laterally, one or more sets of electrical contacts may be coupled to produce a detectable input signal. The input signal from the trigger input device 160 can be analyzed and processed by components of the intelligent small weapon targeting system 122, and in response to the input signal, the feedback output may be produced and other processing by the CPU 134 may be initiated or advanced.
In other embodiments, the trigger input device 160 receives one or more electronic signals as inputs. The electronic signals may be provided by a program executing on the intelligent small weapon targeting system 122 or the electronic signals may be provided by some other device (i.e., via ports 140). A first signal input, for example, may cause a feedback output to the operator similar to the feedback produced when the electro-mechanical trigger input device 160 is advanced to a first position. Second, third, and other electronic signals may also be input via the trigger input device 160, and particular feedback outputs and operations may be caused.
In the embodiments, the CPU 134 will execute software instructions that direct the operations of the intelligent small weapon targeting system 122. In some cases, the operations of the CPU 134 cause the small weapon to receive, analyze, and act on input from the trigger input device 160. The operations may include a determination to fire the small weapon. In such cases, an electronic or mechanical signal will be provided to a weapon output 158.
The weapon output 158 typically includes the structure and/or electronic circuits that initiate a projectile firing sequence. Once the projectile firing sequence begins, it is generally irreversible. That is, once the intelligent small weapon targeting system 122 makes the decision to fire a projectile, determined signaling is presented to the weapon output 158, and the projectile is fired.
In one embodiment, the weapon output 158 includes an electronic circuit that provides an electronic signal voltage to the projectile. In another embodiment, the weapon output 158 includes an electro-mechanical structure that receives an electronic signal voltage that causes a mechanical pin device to strike the firing point of a projectile. The weapon output 158 may further include one or more safety and/or override inputs. The assertion of a signal on a safety input will prevent the weapon output 158 from firing a projectile. The safety input may be electronic, mechanical, or some combination thereof. The assertion of a signal on the override input will permit the weapon output 158 to fire a projectile if a mechanical trigger on the small weapon (e.g., the trigger input device 160) is manually squeezed. The override input may be electronic, mechanical, or some combination thereof.
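A non-limiting sketch of the gating just described, with hypothetical signal names, is shown below; the safety input always blocks the output, the override input lets a manual trigger squeeze fire the weapon, and the normal path requires both the firing decision signal and the trigger input.

```python
# Illustrative sketch only: hypothetical signal names for the output gating.

def weapon_output(fire_signal, trigger_squeezed, safety_asserted, override_asserted):
    """Return True when the (irreversible) projectile firing sequence may start."""
    if safety_asserted:
        return False                         # the safety input always blocks firing
    if override_asserted:
        return trigger_squeezed              # manual squeeze fires regardless of the AI layer
    return fire_signal and trigger_squeezed  # normal assisted/automatic path

assert weapon_output(True, True, safety_asserted=False, override_asserted=False)
assert not weapon_output(True, True, safety_asserted=True, override_asserted=False)
assert weapon_output(False, True, safety_asserted=False, override_asserted=True)
```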
The intelligent small weapon targeting system 122 includes a power source 162. In some cases, the power source 162 includes a disposable or rechargeable battery. The power source 162 provides electrical power to the CPU 134, memory 136, environmental sensors, and other components of the intelligent small weapon targeting system 122.
Figure 5B illustrates an intelligent small weapon targeting system 122 embodied in an electronic-optical scope 188. The electronic-optical scope 188 of Figure 5B includes a housing 190 having a substantially cylindrical shape and various tapers, knurls, mountings, user controls, and other features. It is recognized that the housing 190 may have other shapes and features in any suitable combination. The housing 190 may be formed of a metal, plastic, composite substance, or combination of any such suitable materials. The housing may include mounting structures 196 suitable to removably or irremovably attach the housing to the small weapon. Components of the intelligent small weapon targeting system 122 may be integrated with the housing 190 or the mounting structures 196.
In some cases, the electronic-optical scope 188 has one or more optical lenses 192, but in other cases, the electronic-optical scope 188 has only electronic imaging sensors such as cameras 146a-n and visual output display devices such as display 144.
The electronic-optical scope 188 may have transparent, translucent, or opaque screen/lens markings 194a-e. The screen/lens markings provide information to the operator of the small weapon. Some of the screen/lens markings 194a-e may be etched, painted, or otherwise implemented. In some cases, the screen/lens markings 194a-e are electronically controlled to illuminate or change contrast so as to be substantially visible or invisible. Enabling the screen/lens markings 194a-e can cause the enabled marking to be substantially visible to draw the attention of the small weapon operator. Disabling the screen/lens markings 194a-e can cause the disabled markings to be substantially invisible and generally disregarded by the small weapon operator.
One screen/lens marking 194a includes a plurality of segmented indicators formed around a central portion of a lens or screen. In one embodiment, one or more of the plurality of segmented indicators may be progressively enabled. For example, if the screen/lens markings 194a are formed in a horizontal line as shown in Figure 5B, segments of the screen/lens markings 194a can be progressively enabled, beginning from the segments farthest from the center and continuing to enable segments sequentially closer to the center so as to draw the attention of the small weapon operator to the center. The segments of the screen/lens markings 194a can be flashed, left illuminated, or enabled and disabled in any suitable pattern.
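As an illustrative sketch only, the progressive enabling of the segmented marking 194a could be driven by a simple step counter; the segment indexing below, numbered from the outermost segment toward the center, is an assumption for the sake of the example.

```python
# Illustrative sketch only: hypothetical segment indexing and step counter.

def segments_to_enable(step, num_segments=8):
    """Segments are numbered from the outside (0) toward the center
    (num_segments - 1); each step enables one more segment, drawing the
    operator's eye inward."""
    return list(range(min(step, num_segments)))

for step in range(1, 5):
    print(step, segments_to_enable(step))
# 1 [0]
# 2 [0, 1]
# 3 [0, 1, 2]
# 4 [0, 1, 2, 3]
```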
Another screen/lens marking 194b includes a cross-hairs pattern. The cross-hairs pattern may be used to help a small weapon operator aim the small weapon at a suitable target. In some cases, progressive confidence in the accuracy of an acquired and locked target will cause the screen/lens markings 194b to become more brightly illuminated.
Another screen/lens marking 194c includes a shaped pattern around a particular area of the lens or screen. The screen/lens marking 194c shape can be a circle, square, or any other suitable shape. The screen/lens marking 194c shape can be formed in the central area of the lens/screen or in some other area. In some cases, the screen/lens marking 194c shape is enabled in an area where the intelligent small weapon targeting system 122 determines a projectile would hit if fired at that time. For example, if a small weapon operator is aiming at a target, and if the intelligent small weapon targeting system 122 is enabled, a screen/lens marking 194c shape may faintly encircle a target where the system believes a projectile would travel if fired immediately. As time passes, the system may acquire and process more input data and increase the confidence level of where a projectile would strike. As the confidence level increases, the screen/lens marking 194c shape may illuminate more brightly, flash more vigorously, or provide some other indication to the operator of the increased confidence.
Still another screen/lens marking 194d includes a point-target indicator. In an electronic-optical scope 188 having one or more optical lenses, the screen/lens marking 194d may include a single point formed at the center of view in the lens system. In an electronic-optical scope 188 having one or more imaging cameras 146a-n and/or visual output screen devices 144, the screen/lens marking 194d point-target indicator may include a single point where the intelligent small weapon targeting system 122 determines a projectile would travel or strike if fired immediately. The screen/lens marking 194d point-target indicator is used in some embodiments in a manner similar to how a laser aiming device might be used with a conventional small weapon. The screen/lens marking 194d point-target indicator, however, has several advantages over a laser aiming device.
For example, the screen/lens marking 194d point-target indicator may be used without providing any visual clues that may be observed at the target. That is, since the screen/lens marking 194d point-target indicator is derived by the intelligent small weapon targeting system 122, the point-target indicator may be entirely formed within the system. This is an advantage over traditional laser aiming devices wherein a point of red light or another color is visible on the target that is being aimed at.
Another advantage of the screen/lens marking 194d point-target indicator is that additional input to the intelligent small weapon targeting system 122 may be used to determine where the screen/lens marking 194d point-target indicator is formed. Thus, additional data, such as motion data, environmental data, distance data, and the like can be used to determine where the screen/lens marking 194d point-target indicator is formed. A traditional laser point is formed in a straight line without any appreciable compensation for outside factors. Thus, from a long distance or on a windy day, the laser pointer may be accurately aimed at a target, but a fired projectile would be subject to the outside conditions and miss the target.
Advantageously, the intelligent small weapon targeting system 122 may improve accuracy by using input from accelerometers 150, environmental sensors 148, and the like to compensate for outside factors when the placement position of the screen/lens marking 194d point-target indicator is determined.
Another screen/lens marking 194e includes particular icons to provide system information to the small weapon operator. The icons may indicate power source level, day/night conditions, mode of system operation, or any other suitable information.
Embodiments of the electronic-optical scope 188 include some or all of the components of the intelligent small weapon targeting system 122. A CPU 134 and memory 136 may be included, along with other circuits coupled by a bus 138. Communication ports 140, audio outputs 142, system outputs 152a-n, tactile outputs 154, user inputs 156a-n, weapon output 158, trigger input 160, and a power source 162 may also be included.
Particular embodiments of the intelligent small weapon targeting system, such as those illustrated in Figures 4, 5A, and 5B, may include or exclude a wide variety of features described herein. Typically, the intelligent small weapon targeting system is substantially contained in a portable housing; however, the level of portability may be measured differently for different applications. For example, some SMITH AND WESSON .357 magnum models are less than 170 mm (6.5 inches) and 400 g (about 14 oz.), and a U.S. Military REMINGTON M24 sniper rifle is over 1000 mm (nearly four feet) long and, fully equipped, will weigh almost 7.5 kg (about 16 lbs). The U.S. Military XM307 automatic grenade launcher is over 1300mm long and can weigh more than 22 kg (over 50 lbs). Accordingly, a portable intelligent small weapon targeting system for one system may not be portable for another. Nevertheless, the embodiments described herein may vary a feature set to include, exclude, or adjust the size and/or capability of features to accommodate a desired level of portability.
A non-limiting variety of features of the intelligent small weapon targeting system is now described. When compared to the small weapon to which they are attached, the targeting system embodiments will be relatively lightweight, compact, maintenance free, energy efficient, and substantially devoid of undesirable electromagnetic emissions. As a result, most existing small weapons may be outfitted with an intelligent small weapon targeting system at the time of manufacture or retrofitted at some time after manufacture. The addition of the intelligent small weapon targeting system to a small weapon can thus be accomplished quickly and at a favorable cost, which will lead to more positive mission outcomes and efficient resource utilization.
A non-limiting list of features of the intelligent small weapon targeting system is presented in Table 1. The features are further described herein.
Feature | Feature Description
TgtAcq • Target acquisition under different illumination conditions such as:
o Visible spectrum for daylight conditions, and
o Infrared spectrum for night time, low light, or indoor use.
TgtEnv • Compensated target tracking for environmental parameters such as:
o Wind,
o Humidity,
o Altitude, and
o Air density.
DblFire • Double-firing action of the small weapon.
CfgOp • Configurable operation modes such as:
o Automatic-Unassisted operation wherein the system will fire a projectile upon derivation above a determined threshold of a locked target,
o Automatic-Assisted operation wherein the system will provide aiming cues and prompt a small weapon operator when the target is locked; and the small weapon may also be fired if a determined confidence threshold is met,
o Override operation wherein an operator may override the system and fire at will through a full squeeze of the trigger,
o Manual-Assisted wherein the system will track a target and provide aiming cues to an operator such that the operator will manually fire the small weapon when ready, and
o Manual-Unassisted wherein the system is fully disabled and does not interact with the "native" operation of the small weapon.
FailSafe • Automatic failsafe mode which overrides the system upon detection of conditions such as low power, high input noise, or detected malfunction.
CfgSys • Configurable system modes feature setup, including:
o Target tracking confidence,
o Rate of fire,
o Selective target locking,
o General target locking,
o Weapon characteristics,
o Environment characteristics, and
o Electronic configurations (detachable display, wired headset, configuration parameter memory card, etc.).
Table 1. Features of the Intelligent Small Weapon Targeting System

Some embodiments of an intelligent small weapon targeting system have an automatic projectile release (APR) feature as identified in Table 1. Generally speaking, the APR feature makes a determination based on target acquisition, tracking, and recognition, and presents the determination to one or more modules of the intelligent small weapon targeting system.
In such embodiments, the intelligent small weapon targeting system will acquire a target from image data input by one or more electronic imaging devices (e.g., cameras). The image data will be processed so as to acquire, track, and lock on the desired target. Subsequently, the intelligent small weapon targeting system can calculate, derive, or otherwise identify the timing characteristics relevant to a firing determination.
Figure 6 illustrates an automatic projectile release (APR) feature flowchart 600. In the APR feature flowchart 600 of Figure 6, and also with respect to Figures 4 and 5, several modes are illustrated. For example, if the target tracking and acquisition module 126 is powered off or otherwise disengaged, the small weapon will operate in a Manual-Unassisted mode. That is, the weapon will operate as a conventional firearm. The operator will aim the weapon, fully pull the trigger, and the weapon will fire.
If the target tracking and acquisition module 126 is powered on, and if the intelligent small weapon targeting system 122 is configured in a Manual-Assisted mode, the cues will be presented to the weapon operator (e.g., firing readiness state will be shown on the UI), but firing the weapon to release the projectile will be accomplished by fully pulling the trigger.
Alternatively, if the target tracking and acquisition module 126 is powered on, and if the intelligent small weapon targeting system 122 is configured in an Automatic-Assisted mode, then the electro-mechanical trigger 160 will operate as a two-level switch cooperatively integrated with the electronic scope 124 and the trigger actuator 158. In such operation, partially pulling the electro-mechanical trigger 160 to a first position will engage acquisition and tracking of the target; and fully pulling the electro-mechanical trigger 160 while the determined Fire signal is present will fire the small weapon. During acquisition, tracking, and locking of the target, particular aiming cues may be displayed on a user interface, particular sounds may emanate from an audio interface, and particular feedback may be provided through a tactile interface.
Finally, if the intelligent small weapon targeting system 122 is configured in an Automatic-Unassisted mode, then the electro-mechanical trigger 160 will operate as a single-level switch cooperatively integrated with the electronic scope 124 and the trigger actuator 158. Pulling the electromechanical trigger 160 will engage acquisition, tracking, and locking on a target. When the intelligent small weapon targeting system 122 determines that a confidence threshold is met, a determined Fire signal will be presented to the trigger actuator 158, and the small weapon will be fired.
A summary of operating modes of embodiments of the intelligent small weapon targeting system 122 is illustrated in Table 2. The operating mode can be controlled in many ways. For example, physical switches or electro-mechanical inputs, an interactive user interface, software variables loaded by a host system, or other ways. The choice of operating mode may be made in consideration of the expected use for the small weapon or the real time situation in which the small weapon is engaged.
Operation: Assisted
• Automatic: Two-position trigger. Partially pulling the trigger engages target acquisition and tracking; fully pulling the trigger while the Fire signal is present fires the weapon.
• Manual: Aiming cues and firing readiness are presented on the user interface; the weapon fires when the operator fully pulls the trigger.

Operation: Unassisted
• Automatic: One position trigger. Weapon fires automatically when confidence threshold is met. Preferred operation mode for instinctive shooting.
• Manual: One position trigger. Native operation mode of the weapon. System defaults to this mode in case of malfunction.

Table 2. Operating Modes of the Intelligent Small Weapon Targeting System
The automatic projectile release (APR) feature flowchart 600 of Figure 6 begins at 602 where the electronic scope 124 and the electro-mechanical trigger 160 are powered off. At such time, the small weapon remains operable in the Manual-Unassisted mode. At 604, the weapon operator may fully pull the electro-mechanical trigger 160 on the weapon, and at 628, the small weapon will fire.
Alternatively, instead of fully pulling the electro-mechanical trigger 160, and in transition from 602 to 606, the weapon operator may enable the electronic scope 124 and the electro-mechanical trigger 160. Powering on or otherwise enabling (e.g., waking from a sleep mode) the electronic circuits will initialize the intelligent small weapon targeting system 122 to a known state. The initialization process includes powering the target acquisition and tracking module 126 and the firing decision engine 128 and placing the circuitry in a known state.
At 608, the operator may partially pull the electro-mechanical trigger 160 to a first position. The act of transitioning the electro-mechanical trigger 160 to the first position enables the electronic scope 124 at 610 to render and process image data. Prior to the acts of 610, the electronic scope 124 may be operating and capturing image data; however, at 610, the image data is actively used in additional processes of the intelligent small weapon targeting system 122. As the target acquisition and tracking module 126 of the electronic scope 124 captures and processes image data, particular cues may be presented to the weapon operator at 612 via a user interface (UI). The particular cues may include some or all of visual, audio, and tactile cues. At 614, the target acquisition and tracking module 126 and the firing decision engine 128 cooperate to acquire and lock a target of interest that is represented in the image data captured by the electronic scope 124. The firing decision engine 128 evaluates particular parameters of the intelligent small weapon targeting system 122. Such parameters assist the firing decision engine 128 in deriving or calculating whether or not a determined or user-configurable "confidence" threshold is met or exceeded. In some cases, the determination of whether or not the threshold is met serves to speed up the aiming and firing of the small weapon, and in other cases, the determination serves to delay the aiming and firing of the small weapon. Until the confidence threshold is met, and in some cases even after the threshold is met, at 610 the target acquisition and tracking module 126 of the electronic scope 124 will continue to process image data along with cooperative use of feedback from the firing decision engine 128.
Upon a determination that a determined or user-configurable "confidence" threshold is met or exceeded at 614, the current operating mode of the intelligent small weapon targeting system 122 may be evaluated at 616.
If the intelligent small weapon targeting system 122 is not operating in an automatic mode, it is determined to be operating in a Manual-Assisted mode at 618. Since the confidence threshold has been exceeded at this point, the intelligent small weapon targeting system 122 may enter a "Ready-To-Fire" state at 618. In this state, the operator can use visual, audio, or tactile feedback cues, or some combination thereof, to assess their confidence level in the target and use the information to decide whether to fire the small weapon. From the ready to fire state at 618, if the operator, at 620, fully pulls the electro-mechanical trigger 160 on the weapon to the second position, the small weapon will fire at 628.
At 622, the intelligent small weapon targeting system 122 will determine if the system is configured for an assisted mode or an unassisted mode. In an Automatic-Unassisted operation mode, a "Fire" determination may be presented to the electro-mechanical trigger 160 at 624. At this point, the operator has configured the small weapon to fire automatically after a target has been acquired, tracked, and locked. A sufficient confidence threshold has been met, and the weapon will automatically transition to 628 where the weapon will fire. In some cases, configuring the system in Automatic-Unassisted mode will permit the small weapon to fire when the electro-mechanical trigger 160 is in the first position. In other cases, the system will not transition to state 628 (where the weapon will fire) until the operator further advances the electro-mechanical trigger 160 to a second position.
Alternatively, in an Automatic-Assisted operation mode at 626, the intelligent small weapon targeting system 122 may enter a "Ready-To-Fire" state. In the ready to fire state, the target acquisition and tracking module 126 and firing decision engine 128 are dynamically monitored in real time to determine if the ready to fire condition can be maintained (e.g., the target remains tracked, acquired, and locked). From the ready to fire state at 626, if the operator fully pulls the electro-mechanical trigger 160 on the weapon to the second position, a "Fire" determination may be presented to the electro-mechanical trigger 160 and a further fire command may be presented to the trigger actuator 158, and the small weapon will fire a projectile at 628.
In any of the Manual-Assisted, Automatic-Unassisted, or Automatic-Assisted operation modes, particular indications of target tracking, acquisition, locking, and the confidence threshold being met can be presented to the operator at 612. Such indications may also be presented in other modes of the intelligent small weapon targeting system 122 whenever the system is enabled and operating.
As expressed in Table 1, some non-limiting embodiments of the intelligent small weapon targeting system 122 have a target tracking (TgtTrk) compensation feature. According to the TgtTrk feature, motion compensation may operate when the small weapon moves, which may occur because the weapon operator is shaking or for other reasons. Motion compensation may also be enabled to account for motion of the target.
In embodiments having the TgtTrk feature, the intelligent small weapon targeting system 122 will iteratively attempt to identify a desirable target in a scene of image data captured by an electronic scope 124. If the small weapon is configured in an Automatic-Unassisted operating mode, then acquisition of a target to a selected confidence threshold will automatically fire the weapon. If the small weapon is configured in an Automatic-Assisted or Manual-Assisted operating mode, then particular indications can be presented to the weapon operator via the user interface. For example, during and after acquisition and locking of a target, visual indications of confidence level, motion, and other relevant information can be presented to the weapon operator. In some cases, the information suggests to the weapon operator in real time the best direction to track the target and a confidence level in potentially hitting the target at that moment.
In some embodiments, aiming cues may be expressed as superimposed arrows or other icons 194a-e. In some embodiments, confidence cues may be presented as superimposed circles of decreasing diameter. Such visual cues may be provided relative to a single shot or even relative to multiple shots.
As described herein, embodiments of the intelligent small weapon targeting system 122 have the ability to "acquire" a target. For example, the target acquisition and tracking module 126 of Figure 4 is able to identify a target from image data that is captured by a camera system 146a-n. In some cases, additional features may be integrated and/or cooperative with the target acquisition and tracking module 126. For example, some embodiments include features wherein multiple image inputs are used independently or together. The image input sources include optical inputs sourced by natural light in the visible spectrum and inputs sourced by light in the infrared spectrum. The image input source can be configurable or determined automatically.

Additional features, as disclosed in Table 1, may be included in embodiments of the intelligent small weapon targeting system 122. For example, consideration of environmental parameters may be used to increase the accuracy of the system. The environmental inputs may include, among other things, humidity, illumination, wind speed, altitude, air density, and temperature. A determination of the environmental conditions may be provided by sensor inputs (e.g., environmental sensors 148a-n) or by other inputs (e.g., user input 156a-n) such as user configuration. The environment data can be used in either or both of the target acquisition and tracking module 126 and firing decision engine 128.
The confidence threshold is met after the intelligent small weapon targeting system 122 has acquired and tracked a target. With respect to Figures 4, 5A, and 5B, the target acquisition and tracking module 126 and firing decision engine 128 cooperate to identify a particular target in a stream of image data and maintain the identification of the target as the image data is updated.
In some cases, the multiple sets of image data may present conflicting derivations as to the position of the target or even the identification of the target. In such cases, variable levels of confidence threshold settings may permit the intelligent small weapon targeting system 122 to maintain tracking and/or locked indications or lose tracking and/or locked indications.
In some embodiments, the weapon operator can define different confidence thresholds for an indication that a target is locked. In cases where the system is configured in an Automatic-Assisted operating mode, the firing of the projectile occurs after presentation of a Fire signal, and the Fire signal is presented after the target is locked. The different confidence thresholds may be set with a sliding scale, numerical input, or another like mechanism. In some cases, a weapon operator sets the confidence threshold via a user interface 156a-n. In other cases, the confidence threshold may be set by a host computer, an initialization setting in the system, a learning algorithm, or by some other process. With respect to the confidence threshold, in some cases, a "high" or first threshold indicates that the intelligent small weapon targeting system 122 has determined with a substantial likelihood that an identified target would be hit if the projectile was fired at that time. Conversely, a "low" or second threshold indicates that the intelligent small weapon targeting system 122 has determined with a less than substantial likelihood that an identified target would be hit if the projectile was fired at that time. In other cases, the distinction between a "high" threshold and a "low" threshold is reversed. Accordingly, it is understood that even though the embodiments described herein use a high threshold to indicate a higher level of confidence in identification of a target, the determination of confidence is not limited to any particular choice of measurement designation, point of reference, or other criteria.
In the embodiments described, the difference between a "high" threshold and a "low" threshold of confidence can be measured, evaluated, or calculated based on one or more pieces of information. For example, the confidence level can be calculated with relation to the length of time that an acquired target is tracked (e.g., 1 second, 5 seconds, 10 seconds, etc.), the number of sequential sets of image data wherein the target was identified, the number of data points related to the target that are iteratively identified, or by some other measure. In some embodiments, the confidence level is calculated on a scale of one to one hundred, and a target is locked if the confidence level is at least seventy-five. In some embodiments, one or more target "patterns" are stored in a memory of the intelligent small weapon targeting system 122, and the confidence level is related to the identification of a current target as a likely match to a stored pattern.
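As a rough illustration of a confidence level computed on a one-to-one-hundred scale from tracking duration and frame counts, consider the following sketch. The seventy-five-point lock threshold mirrors the example value above, but the blending weights and saturation points are assumptions made for illustration, not values from the disclosure.

```python
def confidence_level(tracked_seconds: float,
                     consecutive_hits: int,
                     feature_points: int) -> float:
    """Blend three tracking measures into a 0-100 confidence score.

    The weights and saturation points below are illustrative placeholders.
    """
    time_score = min(tracked_seconds / 5.0, 1.0)      # saturates at 5 s of tracking
    frame_score = min(consecutive_hits / 30.0, 1.0)   # saturates at 30 consecutive frames
    feature_score = min(feature_points / 50.0, 1.0)   # saturates at 50 matched points
    return 100.0 * (0.4 * time_score + 0.4 * frame_score + 0.2 * feature_score)

LOCK_THRESHOLD = 75.0   # target is reported "locked" at or above this level

if __name__ == "__main__":
    score = confidence_level(tracked_seconds=3.2, consecutive_hits=24, feature_points=40)
    print(f"confidence={score:.1f}, locked={score >= LOCK_THRESHOLD}")
```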
In some embodiments, a high confidence threshold indicates that the targeting system 122 will present a locked indication or firing decision only when the firing decision engine 128 has a high confidence that the target pattern identified by the target acquisition and tracking module 126 is in fact a legitimate target. That is, in some embodiments, for example, the system is configured to only lock onto a specific instance or certain class of target (e.g., via face recognition, pose recognition, plate recognition for vehicles, or other like pattern analysis).
In some embodiments, a low confidence threshold indicates that the targeting system 122 may present a locked indication or firing decision even when it has less confidence that the target has been properly identified. For example, if the intelligent small weapon targeting system 122 has set a low threshold, the locked indication may be presented if the determined projectile trajectory is merely compatible with the current position of the weapon operator and the direction the weapon is being pointed.
Additional embodiments of an intelligent small weapon targeting system may present the weapon operator with multiple "candidate" targets that match a selected pattern (e.g., "person"). Some targets may be "friendly," some targets may be "hostile," and still other targets may be undetermined. The weapon operator may select a specific target via a user interface (e.g., touch screen) from a presentation of multiple candidates. In one embodiment, a scroll wheel or some other input mechanism may present the multiple candidates for selection by the weapon operator. In some embodiments, the lock indication will be presented when the specified target is acquired and determined to be hit with substantial likelihood according to the currently selected confidence threshold.
In some cases, the intelligent small weapon targeting system may track multiple targets, and the multiple targets may be tracked by two or more intelligent small weapon targeting systems operating in relatively close proximity to each other. In such cases, external input may be provided through an I/O port to an intelligent small weapon targeting system to select a particular target. In some cases the external input may be provided by another intelligent small weapon targeting system. Accordingly, an intelligent small weapon targeting system may use its I/O ports in a networked configuration to provide information about one or more targets recognized by the system.
Figure 7 illustrates an intelligent small weapon targeting system architecture 122a including non-limiting embodiments of hardware modules, software modules, and signals. In particular, the intelligent small weapon targeting system architecture 122a of Figure 7 illustrates the cooperative data sharing, data transfer, and executive control between modules of the system. The embodiment of Figure 7 illustrates an architectural representation of the components of an intelligent small weapon targeting system 122 illustrated in Figures 4, 5A, and 5B. Accordingly, the intelligent small weapon targeting system architecture 122a includes hardware and software modules having similar or overlapping hardware and software features to the intelligent small weapon targeting system 122. For example, the intelligent small weapon targeting system architecture 122a includes software modules that are stored in memory 136 and executed with CPU 134. Additionally, the intelligent small weapon targeting system architecture 122a includes inputs and outputs provided by and to environmental sensors 148a-n, accelerometers 150a-n, user inputs 156a-n, electro-mechanical trigger inputs 160, I/O ports 140, audio outputs 142, visual outputs 144, system outputs 152a-n, tactile outputs 154, and the like.
Table 3 presents a non-limiting list of categorized signals input into the intelligent small weapon targeting system architecture 122a of Figure 7. The input signals are managed by the system architecture, and the input signal categories are further described herein.
Input signals are grouped into three categories: Optical, Environmental, and Other. The inputs include:
• Illumination Level
• Time
• Orientation
• Location (e.g., GPS based)
• Radio/Satellite feed
• User Settings
• Security Features (e.g., authentication)

Table 3. Input Categories of the Intelligent Small Weapon Targeting System
In the intelligent small weapon targeting system architecture 122a of Figure 7, a camera module includes one or more cameras 170. One embodiment has one camera designated for daytime operation, and another embodiment has one camera designated for nocturnal or other low light operation. In still other embodiments, such as for military or tactical use, a single camera 170 can be used in both day and night-time operations. A Decision Engine Module 164 will activate cameras based on environmental conditions (e.g., day or night conditions).
Once activated, the camera module 170 will provide image data to an image acquisition module 176. The image acquisition module 176 may be in whole or in part a software module. In one embodiment, the image acquisition module 176 is integrated in an electronic scope 124. The image data may be compressed, encoded, and formatted according to a particular standard or the image data may be raw. The image acquisition module 176 filters and extracts features from the image data supplied by the camera modules.
The image acquisition module 176 cooperatively communicates with a target tracking module 178. Image data from image acquisition module 176 is made available to the target tracking module 178. The target tracking module 178 may be in whole or in part a software module. The image data may be compressed, encoded, and formatted according to a particular standard or the image data may be raw.
The target tracking module 178 computes image evolution data from a sequence of filtered images and features. That is, a particular image feature in one or more images may be identified and tracked in a sequence of images. The target tracking module 178 communicates image feature data to a decision engine module 164.
In some embodiments, the target tracking module 178 and decision engine module 164 are integrated in a single module. The modules may share software and hardware circuitry, or they may have dedicated software and hardware circuitry. The target tracking module 178 and decision engine module 164 collaborate to provide feedback to the image acquisition module 176 such as image position and/or sight position prediction.
Additionally, accelerometer data from sensors 174 can provide approximate information about the small weapon platform's movement.
The decision engine module 164 receives inputs from various sensors 174 in the intelligent small weapon targeting system architecture of Figure 7. The sensors include environment sensors (e.g., wind, humidity, ambient light, location data, and the like), accelerometers, operational sensors, target distance sensors (e.g., LIDAR), and the like. The sensors 174 typically include electronic circuitry and software (e.g., drivers) to configure and operate the sensors. The decision engine module 164 evaluates the environmental and operational inputs and provides feedback to the image acquisition module 176 and the target tracking module 178.
The decision engine module 164 can generate aiming cues and make or recommend a projectile release decision. If a tracked target is acquired and locked, and if determined user input is present, the decision engine module 164 can present a trigger release signal to a platform 168. The trigger release signal can be made when a confidence level exceeds a threshold. The platform 168 includes the projectile firing mechanisms of a small weapon (e.g., electro-mechanical trigger, trigger actuator, etc.).
The decision engine module 164 computes a confidence level across a sequence of image data frames generated by the image acquisition module 176.
In one embodiment, a confidence level is determined from operations performed on three types of data: a classification of the target and/or target features, a position prediction for the target, and a position prediction for the platform 168. In some cases, a confidence level is determined for each of the three types of data, and an overall confidence level for a target can be calculated from a combination of the various individual confidence levels.
Several factors can be used in the decision engine module 164 to determine a confidence level for each type of data. For example, a determined quality value related to motion information for the platform 168, derived from sensors (e.g., accelerometers) applied to the platform, can be used. Approximation values representing predictions at the moment of firing can be used. In addition, other factors can also be used to determine a confidence level.
With respect to the first of the three types of data used to determine a confidence level in a particular target, the system may employ one or more modules to "classify" several features of the particular target in various ways. For example, the target features that can be classified include moments, contour, texture, pose, bounding box, and the like. One way that each feature may be classified includes a system determination of the quality of the features as extracted from the image data. Other ways to classify features may include determinations related to the nature of the intended target (e.g., whether the target is determined to be an animate or inanimate object) and determinations related to the pose of the target (e.g., whether the target is a standing or sitting human). The modules that perform the classifications may provide an output as a level of confidence for the feature. For example, one module output may be a value representing an 80% confidence level that a target is human.

With respect to the second and third of the three types of data used in the decision engine module 164 in further processing to determine a confidence level in a particular target, the system may predict the position of the target and the position of the platform 168 at substantially the moment the bullet is fired. The positions may be computed using target and platform motion information across a sequence of image frames used to classify the target. The system can calculate or store as a parameter a delay between a firing decision and an actual firing action. The system can further use information such as the target's motion, the determined target pose, the platform's motion, and the like to compute a firing time that is likely to be successful.
In some cases, the precision of the predicted position values can be increased using feedback including actual measurements in subsequent sampled image data. For example, the predicted target position at any given time can be adjusted in the decision engine 164 based on the target's actual position when the target is successfully classified from the sequence of frames. The approximation of the prediction for the target position may be related to the classification of the target while the approximation of the prediction for the platform 168 position may be related to the quality of raw sensor measurements and the sampling frequency of the sensors.
In the decision engine 164, the target feature classification information and confidence level is combined with the target position prediction information and platform position prediction information to create a confidence level in the target. The confidence level in the target can be matched in the decision engine 164 to a configurable threshold, and a firing decision can be determined.
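A minimal sketch of how the three per-component confidences might be combined and compared against a configurable threshold is shown below. The `TargetEstimate` fields and the multiplicative combination are illustrative assumptions; the disclosure does not specify a particular combining formula.

```python
from dataclasses import dataclass

@dataclass
class TargetEstimate:
    classification_confidence: float    # e.g., 0.8 = 80% confidence the target is human
    target_position_confidence: float   # quality of the predicted target position at firing time
    platform_position_confidence: float # quality of the predicted weapon position at firing time

def firing_decision(estimate: TargetEstimate, threshold: float) -> bool:
    """Combine the three per-component confidences and compare to a threshold.

    A simple product is used here so that weakness in any one component pulls
    the overall confidence down; other combinations are possible.
    """
    overall = (estimate.classification_confidence
               * estimate.target_position_confidence
               * estimate.platform_position_confidence)
    return overall >= threshold

# Example: strong classification but a shaky platform prediction blocks the shot.
print(firing_decision(TargetEstimate(0.9, 0.85, 0.5), threshold=0.6))   # False
print(firing_decision(TargetEstimate(0.9, 0.9, 0.9), threshold=0.6))    # True
```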
The target tracking module 178 and decision module 164 send and receive data signals to and from a user interface 166. For example, the user interface 166 can communicate real time and predetermined user settings to the target tracking module 178 and decision module 164. The target tracking module 178 and decision module 164 can provide aiming cues and other signals to the user interface 166. The communicated data can include trigger pressure, distance measurement calculations or settings, location information, radio/satellite feed data, security data, and the like.
In some embodiments, the communication of image or other data between modules involves an actual transfer of data from one hardware or software module to another. In other embodiments, data is communicated by way of shared memory space wherein one module locates processed data in a determined area of memory and another module accesses the data from the same area of memory. The communication of data may be unidirectional or bidirectional.
The software modules of the intelligent small weapon targeting system architecture 122a include one or more sub-modules in some embodiments. That is, particular tasks of the image acquisition module 176, the target tracking module 178, and the decision engine module 164 can be designed in many ways, including as a hierarchical system of software with deterministic response times and predictable resource consumption.
With particular respect to software modules of the intelligent small weapon targeting system architecture 122a, the system typically operates under tight computational budgets. Inputs can be analyzed and processed in real time, which means that an operator of the small weapon has the perception of substantially instantaneous operation. That is, to an operator of a properly configured small weapon, the intelligent small weapon targeting system architecture 122a can appear to provide nearly transparent operation.
A non-limiting embodiment of software modules and data structures of an intelligent small weapon targeting system architecture 122a is now described.
An embodiment of the intelligent small weapon targeting system architecture 122a has a single thread of execution. The single execution thread typically uses fewer system resources than a multi-tasking operating system. Each subsystem executes a specific module (e.g., image acquisition, target tracking, decision engine) to implement its functionality and to keep track of the input and output data signals. A scheduler interface is implemented by each module to permit the subsystem tasks to execute cooperatively. The scheduler may permit tasks to execute asynchronously or with the appearance to an operator of asynchronous execution.
The software subsystems do not need to communicate directly with each other; instead, each subsystem module can acquire data and post computational results as instances of a workitem() type to a first-in, first-out (FIFO) queue. A module can access an instance of the queue type for input and an instance of the queue type for output. The output queue of one subsystem typically is an input queue of a following subsystem.
Each module is typically responsible for publishing and keeping track of its own execution parameters, such as the number of pending work items. A scheduler can select a highest pending priority task based on the module's execution parameters. To avoid CPU cycle starvation, each task/module can be associated with an execution cost based, for example, on a moving average of the task/module's historical execution time. Each task can be placed on an execution list based on its expected execution time, the size of its input parameters (i.e., the number of pending work items), and other parameters. A scheduler assesses the outstanding parameters, evaluates task priorities, and directs execution control to each task.
After executing a task, the scheduler can update an expected execution time for that task and place the task back in the execution list in an appropriate spot. The number of outstanding work items and the historical execution time can be used to determine the load of a module.
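The cooperative, single-threaded scheduling scheme described above can be sketched as follows. The `Module`, `load`, and `schedule` names are hypothetical, and the exponential moving average used for the historical execution cost is one plausible choice among several.

```python
import time
from collections import deque

class Module:
    """One subsystem (image acquisition, target tracking, ...) run cooperatively."""

    def __init__(self, name, work_fn, in_queue, out_queue):
        self.name = name
        self.work_fn = work_fn      # processes one work item, returns a result
        self.in_queue = in_queue    # FIFO of pending work items
        self.out_queue = out_queue  # results become the next module's input
        self.avg_cost = 0.001       # moving average of execution time (seconds)

    def load(self):
        # Expected time to drain the queue: pending items x historical cost.
        return len(self.in_queue) * self.avg_cost

    def run_one(self):
        item = self.in_queue.popleft()
        start = time.perf_counter()
        result = self.work_fn(item)
        elapsed = time.perf_counter() - start
        self.avg_cost = 0.9 * self.avg_cost + 0.1 * elapsed   # exponential moving average
        if self.out_queue is not None and result is not None:
            self.out_queue.append(result)

def schedule(modules, rounds):
    """Fixed-priority variant: always service the module with the highest pending load."""
    for _ in range(rounds):
        ready = [m for m in modules if m.in_queue]
        if not ready:
            break
        busiest = max(ready, key=Module.load)
        busiest.run_one()

if __name__ == "__main__":
    frames, features, decisions = deque(range(8)), deque(), deque()
    acquire = Module("acquisition", lambda f: f * 2, frames, features)
    track = Module("tracking", lambda f: f + 1, features, decisions)
    schedule([acquire, track], rounds=32)
    print(list(decisions))
```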
Embodiments of the intelligent small weapon targeting system architecture 122a can implement two types of schedulers, and other implementations are possible as well. In many configurations, a single scheduler module is typically selected, compiled, loaded into the targeting system, and configured for deployment. In other circumstances, multiple scheduler configurations can be stored in memory and one particular scheduler can be chosen at runtime. In some circumstances, user selections or environment conditions will direct the use of one particular scheduler. One type of scheduler used in some embodiments is a round robin scheduler. A round robin scheduler executes one or more tasks from each module according to a determined allocated time. A round robin scheduler embodiment is described in more detail below.
Another type of scheduler is a fixed priority scheduler. A fixed priority scheduler executes one or more tasks from a module according to highest outstanding load.
In one embodiment, such as a fixed priority scheduler embodiment, the intelligent small weapon targeting system architecture 122a operates in a closed execution system. That is, the system will not use a general purpose real time operating system (RTOS) with threading support or synchronization calls. Instead, the system will use a dedicated task scheduler configured to avoid task starvation and increase throughput of the overall system.
The dedicated task scheduler in such an embodiment will configure a single thread of execution to spool active tasks from a list in a determined priority order. Tasks associated with hardware modules may be implemented by actual threads of execution when running in an emulation environment, but the hardware module tasks will be treated in the same cooperative fashion by the scheduler.
Additionally, the system may implement abstraction interfaces for hardware modules (e.g., accelerometers, cameras, user interface, projectile release, etc.). Hardware subsystems, such as accelerometers, may be configured to provide interrupting alerts in system hardware, and the hardware subsystems can also be abstracted and/or emulated in different operating modes of the system. Abstracted hardware interfaces can be configured as tasks or data inputs that are processed by tasks, and the tasks can be managed by the scheduler.
In one embodiment, a "scene" data structure tracks global properties of a universe identified in a stream of image data provided by a camera. For example, global properties include particular identifications of foreground and background features, the number of identified targets, the number of actively tracked targets, sights position, and the like.
In many circumstances, scene background extraction is a useful component of the scene data structure. A scene background can be calculated in a variety of ways, including but not limited to frame differencing, temporal and spatio-temporal averaging, median extraction, background tessellation reconstruction, and others. Typically, scene background extraction methods work on a variable and configurable frame depth. Alternatively, a frame differencing background extraction method works on two frames only.
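As one concrete illustration of the two-frame differencing method mentioned above, the following sketch marks as foreground any pixel whose intensity changes by more than a threshold between consecutive frames. The threshold value and the use of NumPy arrays as stand-ins for camera frames are assumptions for the sketch.

```python
import numpy as np

def foreground_mask(prev_frame: np.ndarray, curr_frame: np.ndarray,
                    threshold: int = 25) -> np.ndarray:
    """Two-frame differencing: pixels that changed by more than `threshold`
    are treated as foreground (potential target); the rest is background."""
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return diff > threshold

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    background = rng.integers(0, 40, size=(120, 160), dtype=np.uint8)
    moved = background.copy()
    moved[40:60, 70:90] += 100          # a bright object enters the scene
    mask = foreground_mask(background, moved)
    print("foreground pixels:", int(mask.sum()))   # the 20x20 object area
```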
In an embodiment, work items are instantiated as transparent containers for data extrapolated from a frame. Work items include connected components, target and sight position, and the like. Work items are linked to each other across one or more queues, and the work items can be used to implement a feedback mechanism across successive frames. A module can acquire contextual data from a previous work item by walking backwards up a work item chain. In the same way, newly discovered contextual data can be pushed to a successive work item.
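A minimal sketch of such a backward-linked work item chain is shown below; the `WorkItem` class and its `lookup` method are illustrative names, not elements of the disclosed architecture.

```python
class WorkItem:
    """Transparent container for per-frame data, linked to its predecessor."""

    def __init__(self, frame_id, previous=None):
        self.frame_id = frame_id
        self.previous = previous      # link to the work item of the prior frame
        self.context = {}             # e.g., target position, sight position

    def lookup(self, key):
        """Walk backwards up the chain until the contextual data is found."""
        item = self
        while item is not None:
            if key in item.context:
                return item.context[key]
            item = item.previous
        return None

# A target position discovered two frames ago is still reachable from the newest item.
w0 = WorkItem(0)
w0.context["target_position"] = (412, 307)
w1 = WorkItem(1, previous=w0)
w2 = WorkItem(2, previous=w1)
print(w2.lookup("target_position"))   # (412, 307)
```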
Image data having at least one potential target, or image data of global interest such as an identified target or a predicted sights position, will be tracked in the scene data structure. The scene data structure can be recognized as a distillation of information contained in successive work items. The scene data structure acts as a global property holder.
Particular tasks and work items can retrieve information generated by previous computations and modules. Relevant background scene information is typically available as global data to other modules in the intelligent small weapon targeting system. For example, if a target is identified and actively tracked, then that area of a scene can be so identified such that the particular computational area is restricted. That is, except for the designated target area of the scene, the rest of the scene is no longer of particular interest to the system and can be disregarded. Accordingly, feedback can provide benefits that reduce the computational requirements of the system and help to distinguish a desired target from other targets or noise.
In an embodiment of the intelligent small weapon targeting system architecture 122a, a system of one or more accelerometers provides motion data that is used to help track the small weapon platform's position. The accelerometer inputs can be used to establish a baseline reference position and a direction of platform motion. The input motion data from the accelerometers can be used to bootstrap a frame synchronization process. In the frame synchronization process, a background can be distinguished and, in some cases, extracted from the scene. In some embodiments, a single accelerometer device provides the motion data.

In a second level of the frame synchronization process, motion of the small weapon platform can be refined using techniques such as optical flow, CAM shift, or other processes. One advantage of using motion data provided by accelerometers is that motion recognition and compensation can be achieved with a coarse synchronization process. Subsequently, motion data can be used to restrict the search space of processing algorithms employed at the second-level fine synchronization process. Using a two-level synchronization process provides more accurate target tracking results with fewer computational resources.
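The two-level synchronization can be illustrated with a small sketch in which a coarse, accelerometer-derived shift restricts the search window of a fine, image-based alignment step. The brute-force difference search below stands in for the optical flow or CAM shift refinement named above, and all parameter values are assumptions.

```python
import numpy as np

def fine_shift(prev, curr, coarse_dx, coarse_dy, search=3):
    """Refine a coarse, accelerometer-derived shift by searching a small window
    around it for the offset that minimizes the frame difference."""
    best, best_err = (coarse_dx, coarse_dy), np.inf
    for dy in range(coarse_dy - search, coarse_dy + search + 1):
        for dx in range(coarse_dx - search, coarse_dx + search + 1):
            shifted = np.roll(np.roll(curr, -dy, axis=0), -dx, axis=1)
            err = np.mean((shifted.astype(float) - prev.astype(float)) ** 2)
            if err < best_err:
                best, best_err = (dx, dy), err
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    prev = rng.integers(0, 255, size=(60, 80), dtype=np.uint8)
    curr = np.roll(np.roll(prev, 5, axis=0), 7, axis=1)   # true platform motion: (dx=7, dy=5)
    # The coarse estimate from integrated accelerometer data is assumed close but not exact.
    print(fine_shift(prev, curr, coarse_dx=6, coarse_dy=4))   # -> (7, 5)
```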
An activity diagram of an embodiment of the intelligent small weapon targeting system architecture 122a is illustrated in Figure 8. Figure 8 shows components of the targeting system embodiment and illustrates processing of an initial capture of an image, tracking a potential target, recognizing and acquiring a target, and making a firing decision.
The abstracted scheduler 180 of Figure 8 directs processing across five modules. The modules typically comprise software code, data structures, and storage space. The software modules typically interact with electronic circuitry including decoders, timers, interrupts, arithmetic units, and the like. The modules illustrated in Figure 8 include a sensor module 172, an image acquisition module 176, a target tracking module 178, a decision engine module 164, and a user interface module 166.
One or more CPU's may be engaged to carry out the tasks of the scheduler 180 and the five modules. The scheduler 180 works according to its particular architecture, several of which embodiments have been identified herein. Depending on the nature of the scheduler's architecture, the scheduler 180 directs modules to execute tasks. In one embodiment, the scheduler presents a determined time budget to an identified module. The identified module is given execution control and can continue execution until the determined time budget is consumed. In some cases, the module will cyclically continue processing for the fully budgeted time, and in other cases, the module will process through a full queue cycle and return execution control back to the scheduler even if time remains in the time budget.
For example, to commence processing, the scheduler 180 directs a sensor module 172 to commence processing by allotting a particular time budget to the sensor module. The sensor module 172 can control processing of one or more cameras and acquire frame image data. The sensor module 172 creates a frame processing task to process the frame image data. The frame image data can be tagged inside the task with the identified sights and image position as global coordinates. Frame image data can additionally be preprocessed to eliminate noise and to enhance determined features.
Subsequently, the sensor module 172 can store or use pointer techniques to make the frame image data available to a scene background extraction queue.
After the frame image data is entered in the background extraction queue, the frame processing task can itself be queued for operations by other modules. The other modules can cooperatively engage the frame processing task to perform additional image processing operations. After the sensor module 172 queues the frame processing task, if there is any time left from the budget granted by the scheduler 180, the sensor module 172 can execute another task (e.g., process additional image data). Alternatively, the sensor module 172 can return execution control to the scheduler 180.

The scheduler 180 of Figure 8 further directs an image acquisition module 176 to commence processing by allotting a particular time budget to the acquisition module 176. The image acquisition module 176 extracts and processes tasks (e.g., frame processing tasks) queued by the sensor module 172.
In one embodiment, the image acquisition module 176 uses the frame processing tasks to extract a background. In some cases, the successful extraction of a background is possible only after a determined number of frames have been acquired. The determined number of frames can be adjustable. Typically, a small number of frames (e.g., 32 acquired frames, n frames acquired in 500 msec, n frames of substantially identical image data, etc.) will permit a faster response time of the intelligent small weapon targeting system architecture 122a, which can be a trade-off against processing until a determined level of confidence in the background recognition is achieved. Alternatively, a successful background extraction can be determined when a larger number of frames have been acquired.
The processes employed to extract a background can be supplemented with additional sensor data and motion extrapolation techniques. For example, image frame data can be coarsely synchronized using motion data provided by one or more accelerometers. The synchronization can further be refined with optical flow computation algorithms.
After the image acquisition module 176 performs acts to extract a background, the background image data can be used to calculate and filter a foreground image or target of interest. The image acquisition module 176 performs particular calculation acts to connect components of the background image with components of the foreground image. Upon extracting, analyzing, and processing the tasks, the image acquisition module 176 re-queues the tasks for further processing by the target tracking module 178.
The target tracking module 178 of Figure 8 operates to extract queued tasks and perform particular operations. For example, the target tracking module 178 will begin operations to label and segment the connected background and foreground components using determined acts (e.g., clustering, classifying, and other methods). The target tracking module 178 can then identify a target to track. After performing acts to extract features from the identified target, the target tracking module 178 can then queue the tasks to other modules for additional processing. In some cases, the image acquisition module 176 picks up the queued tasks for additional processing. In some cases, a decision engine module 164 picks up the queued tasks for processing.
The decision engine module 164 can operate on queued tasks to perform particular classifications. For example, the decision engine module 164 can present particular target features to a classifier task. The classifier task can recalculate identified sights and target position based on information acquired from the corresponding frames. The decision engine module 164 can further evaluate particular configuration settings, confidence level threshold settings, determined targeting information, and the like, and make a firing decision.
In some embodiments, target and sight position data are local properties of a work item associated with a specific frame. Target and sight position data and tracking data are typically analyzed and processed in conjunction with a determined confidence level. The sight and target position data can be propagated to particular scene data structures when the confidence level is exceeded. The global properties of a scene are available to each task and module that uses the data for processing. Thus, when a frame processing task is re-queued by the decision engine module 164, other modules have access to the data.
In the intelligent small weapon targeting system architecture 122a of Figure 8, a user interface module 166 is also provided with a time budget and execution control by the scheduler 180. Among other operations, the user interface module 166 can extract queued frame processing tasks and use calculated image data to provide aiming cues and other information to an operator of the intelligent small weapon targeting system architecture 122a. The user interface module 166 can operate according to the granted time budget or the number of queued tasks. In some cases, if the user interface module 166 determines that sufficient useful information has been processed, the operational data can be effectively cleared or deleted, and the particular tasks can be terminated. Execution control is returned to the scheduler 180.
Figure 9 illustrates a functional flow sequence diagram for a round robin scheduler 182. The round robin scheduler 182 can be found in the intelligent small weapon targeting systems 122, 122a described herein.
In the round robin scheduler 182, the intelligent small weapon targeting system processes images sequentially. That is, one or more sets of image data are processed in an uninterrupted flow, beginning in a first module and ending in a last module.
In the embodiment of Figure 9, the system begins the acquisition and analysis processing of a scene upon activation of the small weapon trigger (e.g., a trigger squeeze to a first, engaged position). For example, after the operator aims the small weapon at a target and places the trigger in the enabled position, the trigger and the sensors 170, 174 begin providing data. The data includes first image data. A sensor module 172 may initiate the process of data acquisition from the sensors 170, 174. Prior to queuing the first image data to an image acquisition module 176, the sensor module 172 may further query data from the sensors 174 (e.g., accelerometer motion data samples). The queried data may be attached (i.e., tagged) to the first image data. For example, if first image data is designated as target visual n-1, the sensor module 172 may tag the target visual n-1 image data with sensor data by linking particular data structures together.
The image acquisition module 176 processes the first image data. The processing by the image acquisition module 176 may include filtering out noise, enhancing image features, and segmenting image elements in a process that produces a set of connected components and features.
The image acquisition module 176 may engage other modules or sub-tasks to perform processing on the first image data (e.g., target visual n-1) and subsequent image data (e.g., target visual n). For example, a filtering and enhancement module 184 may process the image data by known imaging techniques to smooth edges, extrapolate motion, reduce jitter, and the like. In some cases, the tagged sensor data may be employed to further filter and enhance the image data.
Another module or sub-task directed by the image acquisition module 176 may include a connection and segmentation module 186. The connection and segmentation module 186 identifies and segments particular features in the first image data and subsequent image data. The segmented features can then be associated or connected. The levels and strength of particular connections may be cumulative through iterative processing of the first and subsequent image data as particular data structures are created and filtered.
During processing, the image acquisition module 176 may place greater weight on data that is determined to be at or near the location of a target identified in the scene. In one embodiment, the image acquisition module 176 initializes processing of first image data with an assumption that the operator is likely to be aiming the small weapon in the general direction of a desired target. Thus, in this embodiment, the system can select the object that is at the target point in the scope as the desired target. The system can identify the boundaries of this target object and distinguish one target object from another.
In some cases, when two or more targets are present in the scene, a heuristic based on "pointing time" can be used to disambiguate which target the operator is determined to be pointing to. For example, the operator can point first to a target, and the system can lock and track it. The system will consider this the target unless changes are made. Subsequently, if the system determines that the operator is pointing to a different target, the system can use a determined amount of time to disambiguate whether pointing to the different target is determined to be intentional or unintentional. During the disambiguation time, the system may track both targets or drop the first target. Later, if the system has maintained tracking of both targets, the system may continue tracking the first target or switch to the new target after a time threshold is exceeded.
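A possible form of the pointing-time heuristic is sketched below. The dwell time and the `select_target` function are illustrative placeholders; the disclosure does not specify a particular disambiguation interval.

```python
def select_target(current_target, pointed_target, pointing_seconds,
                  switch_after=1.5):
    """Dwell-time heuristic: keep the locked target unless the operator has
    pointed at a different candidate for longer than `switch_after` seconds.

    The 1.5 s dwell time is an illustrative placeholder, not a disclosed value.
    """
    if current_target is None:
        return pointed_target        # the first target pointed at gets locked
    if pointed_target == current_target:
        return current_target
    if pointing_seconds >= switch_after:
        return pointed_target        # intentional re-aim: switch targets
    return current_target            # treat brief excursions as unintentional

print(select_target("target_A", "target_B", pointing_seconds=0.4))   # target_A
print(select_target("target_A", "target_B", pointing_seconds=2.0))   # target_B
```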
The operator can also put the system into different target acquisition modes. If the operator is hunting deer or elk, the system can be configured in a "deer or elk" mode to consider any deer or elk in the image as the intended target and not some other object that happens to be in the frame. If the operator is a police officer, the system can be configured in a human mode, which can include either torso or face or both. Then, if the system identifies a human in the image, it will consider this the target and lock on the human to assist the police officer. Similar modes can be entered for flying birds or other game that might be hunted. Also, other image modes can be entered if the operator is using the weapon against other identifiably shaped or posed targets.
In one embodiment, the user may put the system into "safe use" mode in which it will refuse to fire if a human is within the image and within the target area. As previously mentioned, the system can quickly identify and recognize any people in an image. If the scope is placed in safe use mode, then if a human image is within the range, the system will refuse to let the weapon fire. Thus, a deer hunter can place the system in deer mode and into the safe use mode while hunting. If the hunter's buddy or some other person happens to be identified in the image frame when the trigger is pulled, the fire signal will not be sent and the gun will not fire. Even if the intended target, such as deer or elk is in the image, the presence of a human in the image will prevent the weapon from firing. The safe use mode will avoid many accidents that might otherwise happen.
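The safe use gate can be sketched as a simple veto applied before the Fire signal is issued. The detection labels and the `fire_permitted` function below are assumptions made for illustration; they are not the recognizer interface of the disclosed system.

```python
def fire_permitted(detections, intended_class, safe_use=True):
    """Gate the Fire signal on scene contents.

    `detections` is a list of class labels the recognizer found in the target
    area (e.g., ["deer", "human"]); the labels and API shape are assumptions.
    """
    if safe_use and "human" in detections and intended_class != "human":
        return False           # a person in the frame vetoes the shot
    return intended_class in detections

print(fire_permitted(["deer"], "deer"))            # True
print(fire_permitted(["deer", "human"], "deer"))   # False: safe use mode blocks firing
```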
The processed image data from first image data or subsequent image data is made available to a target tracking module 178 and a decision engine 164. The processed image data, designated image n, has been filtered and enhanced by the image acquisition module. Typically, image n exists as a data structure that is operated on by several modules.
The target tracking module 178 performs processing to extract identified background information and segment connected components of an identified foreground. In some embodiments, background extraction is performed with two or more frames of image data. Particular configuration settings can variably direct which extraction method is chosen and further direct a determined accuracy level.
In some cases, the target tracking module 178 will discard image data having determined insignificant feature sets or image data tagged with inconsistent sensor data.
The target tracking module 178 will make segmented, connected (e.g., clustered) component data structures available to the decision engine 164. The decision engine 164 will perform processing to try to identify a desired target.
The decision engine 164 may further provide feedback related to both an identified target and the sight position to the image acquisition module 176 and the target tracking module 178. The feedback allows for performance optimizing techniques to be conducted. The feedback further reduces the computational resources utilized by the system.
After a target is identified, the target and the sight position can be predicted using techniques such as a Kalman filter, a condensation filter, or the like. The image acquisition module 176 can use the feedback information from the decision engine 164 to adjust the processing of the first image data or subsequent image data in an acquisition phase. For example, by adjusting filtering parameters in determined areas of image data where an object presence has been predicted, the image acquisition module 176 can extract crisper features in subsequent processing iterations.
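For illustration, the sketch below uses an alpha-beta tracker as a simplified stand-in for the Kalman-style prediction named above: each step predicts the target position one frame ahead and then corrects the estimate with the newest measurement. The gain values are arbitrary placeholders.

```python
def predict_positions(measurements, alpha=0.85, beta=0.3):
    """Alpha-beta tracker: a simplified stand-in for the Kalman-style position
    prediction described above. Positions are in pixels, time in frames, and
    the gains are illustrative placeholders."""
    position, velocity = measurements[0], 0.0
    predictions = []
    for z in measurements[1:]:
        predicted = position + velocity       # predict one frame ahead
        residual = z - predicted              # innovation from the new measurement
        position = predicted + alpha * residual
        velocity = velocity + beta * residual
        predictions.append(predicted)
    return predictions

# A target drifting right at 3 pixels per frame; the prediction closes on the true track.
track = [100.0 + 3.0 * i for i in range(10)]
print([round(p, 1) for p in predict_positions(track)])
```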
The target tracking module 178 may use stored information from multiple processing iterations to calculate which elements in a set of image data are connected and which ones are independent. Such connections, or clustering, may later be used by the decision engine module 164 to identify prospective targets that can be hit by releasing a projectile at a determined moment in time. In one embodiment, the decision engine 164 operates on a set of image data information including raw shape data and macro data. Evolution data from multiple images and sensor data tagged to the image data from multiple images is further associated to raw image data. In such embodiments, the decision engine 164 operates to associate the evolution data with both the raw data and the motion data to lock on an identified target. The association of multiple sets of data and types of data from various sensors can be used by the decision engine 164 to provide aiming cues to the operator. The association further can produce a projectile (e.g., bullet) release decision with a higher level of confidence.
In some embodiments, the decision engine 164 uses artificial intelligence (AI) methods to improve its operation. Particular AI techniques include configuring a neural network to determine what target the operator is aiming at and/or the nature of the target the operator is aiming at. Accordingly, while the decision engine 164 can make a projectile release decision based on raw data from an image acquisition module 176, other available sensor and system input data may also be taken into account. In one example, operator-configured settings are taken into account. If an operator-configured confidence threshold is met, and if the system is in an automatic firing mode, the small weapon will fire. In a manual firing mode, if the operator-configured level of confidence is low, the system can present aiming cues to the operator via the user interface (e.g., move 10 degrees to the left).
In the fixed priority scheduler 180 of Figure 8, the round-robin scheduler 182 of Figure 9, and in other scheduler configurations, work items can be organized and managed in a queuing system. The queuing system is implemented by creating abstract data structure objects and pointers to the objects. The abstractions, which are possible with high-level software architectures, reduce the amount of data copying and replicating often found in embedded computing systems. Instead of creating copies of data for each task module, the same data within the queue structures can be accessed by multiple task modules. Thus, the queue structures increase the efficiency of the small weapon targeting system by conserving memory and reducing CPU processing cycles.
As work items flow from one queue to the next, the scheduler updates an execution list and services a determined highest priority work item. The scheduler processing and efficient data sharing of the work item flow provide time and resources used to keep the system operating within acceptable real-time limits. Accelerometer-assisted motion compensation and image position prediction further allow the system to compensate for computational latency. That is, each frame of image data can be processed quickly so that target acquisition and projectile release decisions can be made within an acceptable level of real time operation.
The efficient scheduler architectures described herein implement a self-balanced computing system. As the small weapon targeting system is operated, the camera module begins producing image data. The image data does not accumulate and overwhelm the system because the level of processing for each image frame is balanced with processes that implement target acquisition, target locking, decision making, and user interface updating. Some modules process image data in higher volume. Those modules are granted longer time budgets and perform streamlined, dedicated processing. Other modules are designed to process data more quickly, so even with higher volumes, image frames are processed and passed along quickly.
For example, accelerometer-assisted motion compensation processing operates on sequences of images. Motion is detected through comparisons of temporal images suitably spaced in time. Accordingly, it is expected that in some embodiments, target tracking modules and decision engine modules operate on input queues that are fuller than the input queue of an image acquisition module.
One mechanism used by the scheduler to self-balance is a configuration of relative computational balance of the subsystems. That is, each subsystem is configured to process information in a timeframe that is compatible with the other subsystems' computation times. The scheduler can be configured to provide time budgets commensurate with the level of computing resources expended to pass one or more sets of image data to a task.
Another mechanism used by the scheduler to self-balance is a configuration of the computational latency of the system. The computational latency can be calculated from the time a first frame of image data is received until the time a decision engine projectile release signal or an aiming cue is produced. That is, the time taken by each work item to flow through the whole system across all subsystem task modules.
The real time performance of the system can be configured so that desired steady-state equilibrium is reached between the subsystem task modules. The system can recognize and process momentary variations of execution time, motion in the image, relative motion of an image to the small weapon platform, and the execution time entropy of each task. The result of such processing is used by the scheduler to adjust the relative priorities of the task queues and subsystem task module scheduling.
In some cases, the adjustments will include directed degradation of computations provided by the subsystem task modules. That is, particular configurations may allow computations to be completed more quickly. For example, in circumstances where substantial user input or sensor input is present (e.g., heavy shaking may produce a high volume of disparate accelerometer data), full subsystem task module processing may not be compatible with real time operation. In such circumstances, parameters, such as the number of motion compensation iterative calculations, can be adjusted by the scheduler.
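One way such directed degradation might look in practice is sketched below: the number of motion-compensation iterations is scaled down as the input queue backs up. The queue-depth breakpoints and iteration counts are illustrative assumptions, not values from the disclosure.

```python
def motion_compensation_iterations(queue_depth, base_iterations=8, max_queue=12):
    """Directed degradation: when input queues back up (e.g., heavy shaking
    floods the system with accelerometer and image data), reduce the number of
    motion-compensation iterations so each frame still finishes on time.

    The specific counts are illustrative placeholders."""
    if queue_depth <= 2:
        return base_iterations                 # light load: full-quality processing
    if queue_depth >= max_queue:
        return 1                               # saturated: single coarse pass only
    # Scale iterations down linearly between the light-load and saturated points.
    span = max_queue - 2
    fraction = (max_queue - queue_depth) / span
    return max(1, round(base_iterations * fraction))

for depth in (1, 4, 8, 12):
    print(depth, motion_compensation_iterations(depth))
```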
Several terms used in the description are now further described. A "small weapon," as used herein, includes small firearms, light weapons, non-lethal kinetic systems, and other devices. A small weapon may also be any weapon that fires any of bullets, shot, arrows, darts, sound, light or other electromagnetic energy, water, or some other projectile. The propellant used to fire the projectile may include systems that employ some or all of combustion, chemical reaction, electronic operation, pressure, or any other system.
Small firearms are weapons generally designed for individual use. Small firearms include, among other things, revolvers; pistols, whether semi-automatic or fully automatic; rifles, whether single shot, semi-automatic, or fully automatic; sub-machine guns; assault rifles; and light machine guns.
Light weapons, broadly speaking, are designed for use by two or three persons serving as a crew, although some light weapons may be carried and used by a single person or four or more persons. Light weapons generally include, among other things, heavy machine guns, hand-held under-barrel and mounted grenade launchers, portable anti-aircraft guns, portable anti-tank guns, recoilless rifles, portable launchers of anti-tank missile and rocket systems, portable launchers of anti-aircraft missile systems, and mortars of a small or mid-size caliber, for example of less than 100 millimeters.
Small weapons may fire a single shot or multiple shots. Small weapons may be loaded and/or fired mechanically, manually, electronically, or the like in a non-automatic, semi-automatic, or fully automatic way.
Embodiments of the intelligent small weapon targeting system can be implemented, fully or in part, with embedded electronic systems. In some embodiments, the embedded electronic systems are cooperatively coupled to mechanical and/or electro-mechanical parts.
In the embodiments described, non-limiting references may be made to central processing units (CPU), microcontrollers (MCU), digital signal processors (DSP), application specific integrated circuits (ASIC), input/output (I/O) ports, network connectivity ports, memory, logic, circuits, and the like. According to methods and devices referenced herein, embodiments describe a CPU and a memory having software. The software is executable by the CPU and operable to execute the method acts.
CPUs, MCUs, and other processors/controllers as used herein interchangeably refer to any type of electronic control circuitry configured to execute programmed software instructions. The programmed instructions may be high-level software instructions, compiled software instructions, assembly-language software instructions, object code, binary code, micro-code, or the like. The programmed instructions may reside in internal or external memory or may be hard-coded as a state machine or set of control signals.
The internal and external memory may be volatile or non-volatile.
Volatile memory includes random access memory (RAM) that may be static, dynamic, or any other type. Non-volatile memory is any non-volatile computer-readable media including, for example, flash memory, phase-change memory, magnetic media such as a hard disk or a flexible diskette, optical media such as a CD-ROM, and/or the like.
Input/output (I/O) ports include serial, parallel, or combined serial and parallel I/O circuitry compliant with various standards and protocols. The protocols may be proprietary or they may follow an accepted, published standard.
Embodiments of the intelligent small weapon targeting system may include computer vision technology. Computer-vision technology typically includes high-resolution camera modules. The camera modules may be low-cost and can be developed for integration with many types of devices, including optical scopes for small weapons. Additionally, advanced memory, computing, and battery technologies have now made it efficient and affordable to store many images and to mine, match, and analyze the visual data contained in the images. In some cases, digital camera technology can even be used to recognize particular targets, gestures, and motions.
The computer-vision technology, referred to herein in the singular or plural as "cameras," includes one or more object detectors. In some embodiments, an object detector includes a charge coupled device (CCD) formed as an array of pixels, although other imaging technologies may be used. The object detector may further employ an optical mechanism for focusing an image of the target on the one or more object detectors. In some embodiments, an infrared projector or other device may be used, separately or in cooperation with the cameras, to provide "night vision" capabilities. Various lenses and mirrors may be employed. Various focusing and other camera technologies may be employed.
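As a minimal, hypothetical illustration, the following Python sketch treats a pixel array as a stand-in for a CCD readout and reports the bounding box of bright pixels; the sensor dimensions and brightness threshold are assumptions introduced for the example and not features of any particular embodiment:

import numpy as np

def detect_bright_target(pixels, threshold=180):
    """Return (row_min, row_max, col_min, col_max) of bright pixels, or None."""
    rows, cols = np.nonzero(pixels > threshold)
    if rows.size == 0:
        return None
    return int(rows.min()), int(rows.max()), int(cols.min()), int(cols.max())

sensor = np.zeros((240, 320), dtype=np.uint8)   # simulated CCD readout
sensor[100:120, 150:170] = 220                  # a bright object in the scene
print(detect_bright_target(sensor))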
In the foregoing description, certain specific details are set forth in order to provide a thorough understanding of various disclosed embodiments. However, one skilled in the relevant art will recognize that embodiments may be practiced without one or more of these specific details, or with other methods, components, materials, etc. In other instances, well-known structures associated with electronic and computing systems including client and server computing systems, as well as networks have not been shown or described in detail to avoid unnecessarily obscuring descriptions of the embodiments.
Unless the context requires otherwise, throughout the specification and claims which follow, the word "comprise" and variations thereof, such as, "comprises" and "comprising" are to be construed in an open, inclusive sense, e.g., "including, but not limited to."
Reference throughout this specification to "one embodiment" or "an embodiment" and variations thereof means that a particular feature, structure, or characteristic described in connection with the embodiment is included in at least one embodiment. Thus, the appearances of the phrases "in one embodiment" or "in an embodiment" in various places throughout this specification are not necessarily all referring to the same embodiment.
Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
As used in this specification and the appended claims, the singular forms "a," "an," and "the" include plural referents unless the content clearly dictates otherwise. It should also be noted that the term "or" is generally employed in its sense including "and/or" unless the content clearly dictates otherwise.
The headings and Abstract of the Disclosure provided herein are for convenience only and do not interpret the scope or meaning of the embodiments. These and other changes can be made to the embodiments in light of the above-detailed description. In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled. Accordingly, the claims are not limited by the disclosure.

Claims

What is claimed is:
1. An intelligent small weapon targeting system, comprising:
an imaging system coupled to a small weapon;
a memory configured to store a program to aim the small weapon; and
a processor operable to execute instructions of the program, the instructions configured to direct the processor to:
process a first set of imaging data generated by the imaging system to produce processed image data;
identify a target in the processed image data;
predict whether a projectile fireable from the small weapon will hit the target; and
track the target in a second set of imaging data.
2. The intelligent small weapon targeting system of claim 1 wherein the small weapon is a handgun.
3. The intelligent small weapon targeting system of claim 1 wherein the instructions are further configured to direct the processor to:
determine an operating mode of the intelligent small weapon targeting system;
produce an indication of a locked target when the tracked target meets a confidence threshold; and
generate a firing decision based on the operating mode and the indication of the locked target.
4. The intelligent small weapon targeting system of claim 1 wherein the instructions are further configured to direct the processor to:
generate the processed image data by processing the first set of imaging data with a corresponding set of motion data.
5. The intelligent small weapon targeting system of claim 1, further comprising:
a user interface having a visual output device wherein the instructions are further configured to direct the processor to cause the visual output device to present a target outline delineator based on a confidence level in the tracked target.
6. The intelligent small weapon targeting system of claim 1, further comprising:
an audio output device wherein the instructions are further configured to direct the processor to cause the audio output device to present at least one tone corresponding to a confidence level in the tracked target.
7. The intelligent small weapon targeting system of claim 1, further comprising:
an environment sensor configured to produce environment data samples, wherein the instructions are further configured to direct the processor to:
process the environment data samples in conjunction with motion data samples and the first set of imaging data to produce the processed image data;
determine a calculated destination of the projectile based on the processed image data; and
cause a visual output device to present a point-target indicator representative of the calculated destination of the projectile.
8. A method of targeting a small weapon, the method comprising:
acquiring a first set of image input data produced by one or more cameras;
acquiring motion data produced by one or more accelerometers;
acquiring environment data produced by one or more environment sensors;
correlating the motion data with the first set of image input data to produce processed image data;
identifying a target within the processed image data;
tracking the target in a second set of image input data;
calculating a location in the second set of image input data where a projectile fired from the small weapon would strike if the small weapon were fired, the calculating being performed based on the motion data, the environment data, and performance information related to the small weapon;
determining, based on the location in the second set of image input data, a time instant for firing the weapon to hit the target; and
presenting a firing decision signal representative of the time instant.
9. The method of targeting a small weapon of claim 8, further comprising:
acquiring altitude data produced by an environment sensor; and
calculating the location in the second set of image input data based on the altitude data.
10. The method of targeting a small weapon of claim 8, further comprising:
acquiring distance data; and
calculating the location in the second set of image input data based on the distance data.
11. The method of targeting a small weapon of claim 8, further comprising:
acquiring wind speed data produced by an environment sensor; and
calculating the location in the second set of image input data based on the wind speed data.
12. The method of targeting a small weapon of claim 8, further comprising automatically firing the small weapon based on the firing decision signal.
13. The method of targeting a small weapon of claim 8 wherein acquiring the first set of image input data includes acquiring image data from a night vision camera.
14. The method of targeting a small weapon of claim 8 wherein determining, based on the location in the second set of image input data, the time instant during which firing the weapon would hit the target includes:
retrieving a confidence threshold from a memory;
generating a confidence level from a target classification and a predicted target position; and
comparing the confidence level to the confidence threshold.
15. The method of targeting a small weapon of claim 8, further comprising:
positioning at least one of the one or more cameras in a first direction; and
positioning a display in a second direction, the second direction different than the first direction, the display configured to present the processed image data.
16. A computer-readable medium having a program to target a small weapon, the program comprising logic configured to perform the steps of:
enabling an imaging system coupled to the small weapon;
processing a first set of imaging data generated by the imaging system;
identifying a target in the first set of imaging data, the identified target located in a projectile's path, the projectile being fireable from the small weapon; and
tracking the target in a second set of imaging data.
17. The computer-readable medium having the program to target the small weapon of claim 16, the program comprising logic further configured to perform the steps of:
presenting a real time scene representing the second set of imaging data on a display device.
18. The computer-readable medium having the program to target the small weapon of claim 17, the program comprising logic further configured to perform the steps of:
presenting a target outline delineator on the display device to emphasize the target.
19. The computer-readable medium having the program to target the small weapon of claim 17, the program comprising logic further configured to perform the steps of:
correlating motion data generated by an accelerometer with the first set of imaging data to produce the second set of imaging data.
20. The computer-readable medium having the program to target the small weapon of claim 17, the program comprising logic further configured to perform the steps of:
storing the scene as a stream of real time image data in a memory.