CN114500857A - Image shooting method and device, terminal equipment and storage medium - Google Patents
- Publication number
- CN114500857A (application CN202210303394.2A)
- Authority
- CN
- China
- Prior art keywords
- image
- sub
- camera module
- target
- regions
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
- H—ELECTRICITY; H04—ELECTRIC COMMUNICATION TECHNIQUE; H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION; H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/57—Mechanical or electrical details of cameras or camera modules specially adapted for being embedded in other devices
- H04N23/60—Control of cameras or camera modules
- H04N23/667—Camera operation mode switching, e.g. between still and video, sport and normal or high- and low-resolution modes
- H04N23/67—Focus control based on electronic image sensor signals
- H04N23/675—Focus control based on electronic image sensor signals comprising setting of focusing regions
- H04N23/695—Control of camera direction for changing a field of view, e.g. pan, tilt or based on tracking of objects
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Studio Devices (AREA)
Abstract
The embodiments of the invention disclose an image shooting method and apparatus, a terminal device, and a storage medium. The method is applied to a terminal device that includes a camera module able to rotate freely so as to shoot images of different regions, and comprises: determining a shooting mode for a target region; controlling the camera module to rotate according to the shooting mode and pre-divided sub-regions, so that the camera module shoots images of the different sub-regions during rotation, the sub-regions being obtained by dividing the target region in advance; and acquiring target information from the images. By controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions, the camera module can shoot images of the individual sub-regions. While the shooting coverage of the camera module is preserved, edge distortion of the captured images is reduced, solving the technical problem of poor imaging quality of learning-aid devices in the prior art.
Description
Technical Field
The embodiment of the application relates to the field of images, in particular to an image shooting method, an image shooting device, terminal equipment and a storage medium.
Background
At present, learning-aid devices are widely used in teaching to assist students. Existing devices typically add a lift-and-rotate camera, or a reflective mirror plus a camera, to an electronic device such as a tablet computer, so that when the tablet is used upright the camera can photograph targets such as exercise books or textbooks. In practice, however, the photographing results are poor. The device is limited by the camera's field of view, so the area it can cover is small; it is limited by the camera's depth of field, so it cannot handle scenes with uneven book thickness, uneven subjects, or large-angle shooting at the far end; and lens distortion causes severe edge distortion in the captured images. In addition, when the target object is large or far from the camera, optical constraints mean that regions close to the tablet or camera map onto more CMOS pixels, so they are imaged sharply and with high fidelity, whereas regions far from the tablet or camera are noticeably deformed, and the photographed characters there are small and hard to distinguish.
In summary, improving the imaging quality of learning-aid devices has become a technical problem that urgently needs to be solved.
Disclosure of Invention
The embodiment of the invention provides an image shooting method, an image shooting device, terminal equipment and a storage medium, which can improve the imaging effect of an image and solve the technical problem of poor imaging effect of learning auxiliary equipment in the prior art.
In a first aspect, an embodiment of the present invention provides an image capturing method, which is applied to a terminal device, where the terminal device includes a camera module, and the camera module can freely rotate to capture images in different areas, where the method includes:
determining a shooting mode of a target area;
controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions so as to enable the camera module to shoot images of different sub-regions in the rotating process; the sub-area is obtained by dividing a target area in advance;
target information is obtained from the image.
Preferably, the dividing the target area includes:
determining the angle range of the camera module when rotating according to the target area;
determining the step length of each rotation of the camera module;
and controlling the camera module to rotate within the angle range according to the step length, and dividing the area shot by the camera module after each rotation into a sub-area.
Preferably, the dividing the target area includes:
acquiring a target image, wherein the target image is obtained by shooting the target area;
determining a boundary of a target object in the target image, determining a first image corresponding to the range of the boundary in the target image, and dividing the first image to obtain a plurality of sub-regions.
Preferably, the dividing the first image to obtain a plurality of sub-regions includes:
and uniformly dividing the first image into M columns in the transverse direction, and non-uniformly dividing each column into N rows in the longitudinal direction to obtain M multiplied by N sub-regions.
Preferably, the dividing the first image to obtain a plurality of sub-regions includes:
performing character recognition on the first image to obtain characters in the first image;
and determining the size of the characters, and dividing the first image according to the size of the characters to obtain a plurality of sub-regions.
Preferably, the dividing the first image according to the size of the text to obtain a plurality of sub-regions includes:
the first image is uniformly divided into M rows in the transverse direction, each row is non-uniformly divided into N rows in the longitudinal direction, M multiplied by N sub-regions are obtained, and the size of each sub-region is positively correlated with the size of the characters.
Preferably, the dividing the first image to obtain a plurality of sub-regions includes:
determining the template type of the first image, acquiring a division rule corresponding to the template type, and dividing the first image according to the division rule to obtain a plurality of sub-regions.
Preferably, the shooting mode is any one of a global scanning mode, a local scanning mode, and a target tracking mode.
Preferably, the shooting mode is a global scanning mode, and correspondingly, according to the shooting mode and the pre-divided sub-regions, the camera module is controlled to rotate, so that the camera module shoots images of different sub-regions in a rotating process, including:
determining a path of the camera module for shooting the subareas according to the pre-divided subareas, and controlling the camera module to rotate according to the path so as to enable the camera module to shoot images of different subareas in the rotating process.
Preferably, the shooting mode is a local scanning mode, and correspondingly, according to the shooting mode and the pre-divided sub-regions, the camera module is controlled to rotate, so that the camera module shoots images of different sub-regions in a rotating process, including:
acquiring a global image of the target area in real time;
determining a first area needing to be shot in the global image;
determining a first sub-area contained in a first area, determining a first path of the camera module for shooting the first sub-area, and controlling the camera module to rotate according to the first path so as to enable the camera module to shoot an image of the first sub-area in the rotating process.
Preferably, the shooting mode is a target tracking mode, and correspondingly, the camera module is controlled to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions in a rotating process, including:
determining the position of a target object in the target area, and determining a second sub-area corresponding to the position;
determining a target angle of the camera module rotating to correspond to the second sub-area;
and controlling the camera module to rotate to the target angle so as to enable the camera module to shoot the image of the second sub-area.
Preferably, the acquiring target information from the image includes:
merging the images to obtain a second image;
and performing character recognition on the second image to obtain a target character, acquiring a corresponding standard character in a database according to the target character, and comparing the target character with the standard character to obtain a comparison result.
Preferably, the acquiring target information from the image includes:
and performing character recognition on the image to obtain a target character, acquiring a corresponding standard character in a database according to the target character, and comparing the target character with the standard character to obtain a comparison result.
Preferably, the acquiring target information from the image includes:
acquiring all images shot in a preset time period, and extracting handwriting information from each image of all the images;
determining a stroke order sequence according to the handwriting information, and acquiring a corresponding standard stroke order sequence according to the handwriting information;
and comparing the stroke order with a standard stroke order to obtain a comparison result.
In a second aspect, an embodiment of the present invention provides an image capturing device, which is applied to a terminal device, where the terminal device includes a camera module, and the camera module can freely rotate to capture images in different areas, and the image capturing device includes: the device comprises a mode determining module, a control module and an information acquiring module;
the mode determining module is used for determining a shooting mode of a target area;
the control module is used for controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions in the rotating process; the sub-area is obtained by dividing a target area in advance;
the information acquisition module is used for acquiring target information from the image.
In a third aspect, an embodiment of the present invention provides a terminal device, where the terminal device includes a processor and a memory;
the memory is used for storing a computer program and transmitting the computer program to the processor;
the processor is adapted to perform an image capturing method according to the first aspect according to instructions in the computer program.
In a fourth aspect, embodiments of the present invention provide a storage medium storing computer-executable instructions for performing an image capturing method as described in the first aspect when executed by a computer processor.
In summary, embodiments of the present invention provide an image shooting method and apparatus, a terminal device, and a storage medium. The method is applied to a terminal device that includes a camera module able to rotate freely so as to shoot images of different regions, and comprises: determining a shooting mode for a target region; controlling the camera module to rotate according to the shooting mode and pre-divided sub-regions, so that the camera module shoots images of the different sub-regions during rotation, the sub-regions being obtained by dividing the target region in advance; and acquiring target information from the images. By controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions, the camera module can shoot images of the individual sub-regions. While the shooting coverage of the camera module is preserved, edge distortion of the captured images is reduced and imaging quality is improved, breaking through the field-of-view, depth-of-field, and distortion limits of a single-angle camera and solving the technical problem of poor imaging quality of learning-aid devices in the prior art.
Drawings
Fig. 1 is a flowchart of an image capturing method according to an embodiment of the present invention.
Fig. 2 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present invention.
Fig. 3 is a schematic structural diagram of another image capturing apparatus according to an embodiment of the present invention.
Fig. 4 is a schematic diagram illustrating division of a target area according to an embodiment of the present invention.
Fig. 5 is a schematic diagram of a target area shot by the camera module according to the embodiment of the present invention.
Fig. 6 is a flowchart of dividing a target area according to an embodiment of the present invention.
Fig. 7 is a flowchart of dividing a target area according to an embodiment of the present invention.
Fig. 8 is a schematic diagram illustrating a target area being divided according to a text size according to an embodiment of the present invention.
Fig. 9 is a schematic diagram illustrating a target area being divided according to a text size according to an embodiment of the present invention.
Fig. 10 is a schematic diagram of a path for shooting a sub-area according to an embodiment of the present invention.
Fig. 11 is a schematic diagram of a path for shooting a sub-area according to an embodiment of the present invention.
Fig. 12 is a schematic structural diagram of an image capturing apparatus according to an embodiment of the present invention.
Fig. 13 is a schematic structural diagram of a terminal device according to an embodiment of the present invention.
Detailed Description
The following description and the annexed drawings set forth certain illustrative embodiments of the application in enough detail to enable those skilled in the art to practice them. The examples merely typify possible variations: individual components and functions are optional unless explicitly required, and the sequence of operations may vary. Portions and features of some embodiments may be included in or substituted for those of others. The scope of the embodiments includes the full ambit of the claims and all available equivalents of the claims.

Embodiments may be referred to herein, individually or collectively, by the term "invention" merely for convenience, without intending to limit the scope of this application to any single invention or inventive concept if more than one is in fact disclosed. Relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, without requiring or implying any actual relationship or order between them. The terms "comprises," "comprising," and variations thereof are intended to cover a non-exclusive inclusion, such that a process, method, or terminal apparatus that comprises a list of elements may also include other elements not expressly listed.

The embodiments are described in a progressive manner: each embodiment focuses on its differences from the others, and identical or similar parts may be cross-referenced between embodiments. For the structures, products, and the like disclosed by the embodiments, the description is comparatively brief because they correspond to parts already disclosed; for the relevant details, reference may be made to the description of the method.
As shown in fig. 1, fig. 1 is a flowchart of an image capturing method according to an embodiment of the present invention. The method is applied to a terminal device that includes a camera module able to rotate freely to shoot images of different areas. Illustratively, in one embodiment the camera module is a rotatable camera that shoots different areas by rotating freely, as shown in fig. 2. Alternatively, the camera module consists of a reflective mirror and a camera: the mirror reflects external light into the camera and can rotate freely to adjust the angle of its surface relative to the camera, as shown in fig. 3. By controlling the mirror's rotation, scenes from different areas are reflected onto its surface, and the camera obtains images of those areas by shooting the mirror surface. It should further be noted that an auxiliary camera may also be provided in the camera module to assist shooting. Illustratively, in one embodiment the camera module additionally contains a camera with a large field of view to capture a global image, or an event camera (dynamic vision sensor) used to track the photographed object.
The image shooting method provided by the embodiment of the invention comprises the following steps:
Step 101: determining a shooting mode of a target area.

First, the shooting mode of the target area is determined; the shooting mode determines how the camera module rotates. For example, in one embodiment, if the shooting mode is a scanning mode, the camera module is controlled to rotate so that it scans every position in the target area, shooting an image at each position until images of all positions have been captured. In another embodiment, the shooting mode is a target tracking mode: the position of a pen tip is tracked in the target area by the event camera, and the camera module is then rotated according to that position so that it always shoots the location of the pen tip, obtaining images of the user's handwriting and thereby tracking it.
It can be understood that in this embodiment the shooting mode may be determined from a mode-selection instruction input by the user, or determined by the terminal device from the objects in the target area. For example, in one embodiment, before shooting, the terminal device displays the available shooting modes on its screen; the user selects one by tapping the screen, which sends a mode-selection instruction, and the terminal device executes the corresponding mode. In another embodiment, before determining the shooting mode, the terminal device first shoots a global image of the target area with a wide-field camera, recognizes the target object in the global image, and determines the shooting mode from that object. The determination manner can be set according to actual needs and is not specifically limited in this embodiment.
Step 102: controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions during rotation; the sub-regions are obtained by dividing the target area in advance.
After the shooting mode is determined, the camera module is controlled to rotate according to the shooting mode and the pre-divided sub-areas. It should be noted that the target area must be divided into a plurality of sub-areas in advance. Illustratively, as shown in fig. 4, the target area has a length of A cm and a width of B cm and is divided into sub-areas 1-1, 1-2, 1-3, and so on. By dividing the target area into sub-areas, the rotation angle of the camera module corresponding to each sub-area can be determined; when the camera module rotates to that angle, it shoots the corresponding sub-area, so the camera module can be quickly positioned to any designated sub-area, as shown in fig. 5. The sub-areas to be shot are determined by the shooting mode; rotating the camera to the angle corresponding to each required sub-area yields images of the different sub-areas.
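Although the patent gives no code, the mapping from a sub-region's grid index to a camera rotation angle described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation; the grid dimensions and angle ranges are assumed values:

```python
from typing import Tuple

def subregion_to_angles(col: int, row: int,
                        pan_range: Tuple[float, float] = (-120.0, 120.0),
                        tilt_range: Tuple[float, float] = (-60.0, 60.0),
                        cols: int = 10, rows: int = 6) -> Tuple[float, float]:
    """Return the (pan, tilt) angle that centres the camera on sub-region (col, row).

    The angle ranges are split into a cols x rows grid of angular cells, and the
    camera is driven to the centre of the cell addressed by the grid index, which
    is what lets the module "quickly position" to a designated sub-region.
    """
    pan_step = (pan_range[1] - pan_range[0]) / cols
    tilt_step = (tilt_range[1] - tilt_range[0]) / rows
    pan = pan_range[0] + (col + 0.5) * pan_step
    tilt = tilt_range[0] + (row + 0.5) * tilt_step
    return pan, tilt
```

With the assumed 10 x 6 grid, sub-region (0, 0) maps to (-108.0, -50.0) and (9, 5) to (108.0, 50.0); a real device would calibrate these angles against the physical target area.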
It should further be explained that, in this embodiment, controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions allows the camera to focus on the centre of each sub-region while shooting it, yielding a clear image of each sub-region and making the imaging over the whole target area clearer.
Step 103: acquiring target information from the images.
Finally, target information is acquired from the images. The target information can be set according to actual needs: for example, in one embodiment the target information is text, obtained by performing character recognition on the captured images; alternatively, the target information is handwriting information, obtained by performing image recognition on the images.
In the embodiment of the invention, the camera module is controlled to rotate according to the shooting mode and the pre-divided sub-regions, so that it shoots images of the different sub-regions. Compared with the prior art, in which a single-angle camera directly shoots a global image of the target area, the shooting coverage of the camera module is preserved while its focus can be aligned with the centre of each sub-region, reducing edge distortion of the captured images and improving imaging quality. This breaks through the field-of-view, depth-of-field, and distortion limits of a single-angle camera and solves the technical problem of poor imaging quality of learning-aid devices in the prior art.
It should be noted that the target area can be divided in various ways. The dividing flow of one embodiment, shown in fig. 6, comprises the following steps:
Step 201: determining the angle range of the camera module's rotation according to the target area.

First, the angle range through which the camera module rotates is determined from the target area. Illustratively, in one embodiment, the target area is rectangular, and the camera module is a rotatable camera with mutually perpendicular transverse and longitudinal rotation axes about which it can rotate. From the length and width of the target area, the angle range about each axis is determined such that when the camera's rotation angle reaches a boundary value of the range, the camera just captures the boundary of the target area.
Step 202: determining the step length of each rotation of the camera module.
Next, the step length of each rotation of the camera module is determined. Illustratively, in one embodiment, the step lengths are set directly: a first step length of 10° for rotation about the transverse axis and a second step length of 5° for rotation about the longitudinal axis. Alternatively, the angle range about each axis is divided into several equal parts to obtain the step length for that axis. For example, if the rotatable camera rotates about the transverse axis within ±120°, dividing that range equally into 10 parts gives a transverse step length of 24°.
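The second way of obtaining a step length, dividing an angle range into equal parts, is simple arithmetic; a hedged one-line sketch (the function name is my own, not the patent's):

```python
def step_from_range(angle_min: float, angle_max: float, parts: int) -> float:
    """Step length obtained by dividing the angle range [angle_min, angle_max]
    into `parts` equal sub-ranges."""
    return (angle_max - angle_min) / parts
```

For the ±120° transverse range divided into 10 parts this gives `step_from_range(-120, 120, 10)`, i.e. 24.0°, matching the example above.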
Step 203: controlling the camera module to rotate within the angle range according to the step length, and taking the area shot by the camera module after each rotation as one sub-area.
The camera module is then rotated one step at a time, never exceeding the angle range, and the area it shoots after each rotation is taken as one sub-area. Illustratively, in one embodiment, the rotatable camera rotates about the transverse axis within ±120° with a first step length of 10°, and about the longitudinal axis within ±60° with a second step length of 10°. The camera is initially rotated to (120°, 60°), and the area it shoots is taken as a sub-area; it is then rotated by one first step about the transverse axis to (110°, 60°), that area is taken as another sub-area, and so on. When the camera reaches (-120°, 60°), it is rotated by one second step about the longitudinal axis to (-120°, 50°), after which it is again stepped along the transverse axis, and so on, until the camera has shot every position in the target area, thereby dividing the target area into a plurality of sub-areas.
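The sweep just described is a serpentine (boustrophedon) scan over the angle grid: sweep the pan axis across one row, step the tilt axis down one row, then sweep back. A sketch under the ±120°/±60°, 10° assumptions of the example (the function and its parameters are illustrative, not from the patent):

```python
from typing import List, Tuple

def scan_path(pan_max: int = 120, pan_step: int = 10,
              tilt_max: int = 60, tilt_step: int = 10) -> List[Tuple[int, int]]:
    """Generate (pan, tilt) stops in serpentine order over a symmetric angle grid.

    Even-indexed tilt rows sweep pan from +pan_max to -pan_max; odd-indexed
    rows sweep back, so consecutive stops are always one step apart.
    """
    pans = list(range(pan_max, -pan_max - 1, -pan_step))    # 120, 110, ..., -120
    tilts = list(range(tilt_max, -tilt_max - 1, -tilt_step))  # 60, 50, ..., -60
    path: List[Tuple[int, int]] = []
    for i, tilt in enumerate(tilts):
        row = pans if i % 2 == 0 else pans[::-1]
        path.extend((p, tilt) for p in row)
    return path
```

With the defaults the path starts at (120, 60), reaches (-120, 60) at the end of the first row, and continues at (-120, 50), matching the order in the text; it visits 25 x 13 = 325 stops in total.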
It should be further noted that, in an embodiment, the target area may be further divided into a plurality of sub-areas according to the rotation time of the camera module, so that the time taken for the camera module to rotate to the angle corresponding to each sub-area is equal.
In another embodiment, the process of dividing the target area is shown in fig. 7, and includes the following steps:
Step 301: acquiring a target image, the target image being obtained by shooting the target area.

In this embodiment, a target image of the target area is first acquired; it can be obtained by shooting the target area with the wide-field camera in the camera module.
Step 302: determining the boundary of a target object in the target image, determining a first image corresponding to the range of the boundary, and dividing the first image to obtain a plurality of sub-regions.

After the target image is acquired, the boundary of the target object is determined within it; the target object may be set according to actual needs, for example a textbook or a test paper. Once the boundary of the target object is determined, the first image within the boundary range is cropped from the target image and divided into a plurality of sub-regions. It can be understood that when the target object moves, its boundary changes; therefore, in this embodiment, when the boundary changes, the sub-regions must be re-adjusted and re-calibrated according to the extent of the change.
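As a toy illustration of cropping the first image from the target object's boundary, the sketch below finds the bounding box of a binary object mask in pure Python and cuts the corresponding sub-image out of the frame. A real system would more likely run contour detection on the camera frame; the mask representation and all names here are assumptions:

```python
from typing import List, Sequence, Tuple

def object_bbox(mask: Sequence[Sequence[int]]) -> Tuple[int, int, int, int]:
    """Return (top, left, bottom, right), inclusive, of the truthy cells in a
    2-D 0/1 mask marking where the target object was detected."""
    rows = [i for i, r in enumerate(mask) if any(r)]
    cols = [j for j in range(len(mask[0])) if any(r[j] for r in mask)]
    return rows[0], cols[0], rows[-1], cols[-1]

def crop_first_image(image: Sequence[Sequence[int]],
                     bbox: Tuple[int, int, int, int]) -> List[List[int]]:
    """Cut the 'first image' (the region inside the boundary) out of the frame."""
    top, left, bottom, right = bbox
    return [list(row[left:right + 1]) for row in image[top:bottom + 1]]
```

When the object moves, re-running `object_bbox` on a fresh mask yields the new boundary, after which the sub-region grid would be re-derived.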
In this embodiment, there are various ways to divide the first image, and in an embodiment, the dividing the first image in step 302 to obtain a plurality of sub-regions is specifically performed by step 3021, and includes:
Step 3021, dividing the first image into M columns in the transverse direction, and dividing each column into N rows in the longitudinal direction, so as to obtain M × N sub-regions.
In one embodiment, the first image may be divided directly: the first image is divided into M columns in the transverse direction, and each column is divided into N rows in the longitudinal direction, resulting in M × N sub-regions, as shown in fig. 4. It should be further noted that each column is divided non-uniformly into N rows in the longitudinal direction because, for the sub-regions close to and below the rotatable camera, the area shot by the rotatable camera is relatively small, and this area increases after the rotatable camera rotates about the longitudinal rotation axis. Therefore, it is necessary to divide each column non-uniformly in the longitudinal direction.
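A minimal sketch of one possible non-uniform longitudinal split: rows nearest the camera are made shorter and each subsequent row grows by a fixed factor, mirroring the growing footprint of the tilted camera (the growth factor and function name are assumptions for illustration):

```python
# Hypothetical sketch: compute pixel boundaries of N non-uniform rows.
def row_boundaries(height, n_rows, growth=1.3):
    # Row k gets relative height growth**k; normalize so rows sum to `height`.
    weights = [growth ** k for k in range(n_rows)]
    total = sum(weights)
    bounds, y = [0], 0.0
    for w in weights:
        y += height * w / total
        bounds.append(round(y))
    bounds[-1] = height  # guard against rounding drift
    return bounds

b = row_boundaries(900, 5)  # row heights increase away from the camera
```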
In another embodiment, the step 302 of dividing the first image into a plurality of sub-regions is specifically executed in steps 3022 to 3023, and includes:
and step 3022, performing character recognition on the first image to obtain characters in the first image.
In another embodiment, after the first image is obtained, character recognition is performed on the first image, so as to obtain the characters in the first image. It can be understood that the character recognition may use an existing OCR (Optical Character Recognition) algorithm; the specific recognition method is not limited in this embodiment.
And step 3023, determining the size of the characters, and dividing the first image according to the size of the characters to obtain a plurality of sub-regions.
After the characters in the first image are obtained, the sizes of the characters are determined, and the first image is divided according to the sizes of the characters, so as to obtain a plurality of sub-regions. It should be further noted that the first image is divided according to the size of the text because the text size reflects, to some extent, the difficulty of content recognition: the larger the text (a larger font or thicker strokes), the easier the content is to recognize, and the smaller the text (a smaller font or thinner strokes), the harder it is to recognize. Therefore, if the characters in the first image are larger, the area of each sub-region can be appropriately increased when dividing the first image.
On the basis of the foregoing embodiment, the step 3023 is to divide the first image according to the size of the text to obtain a plurality of sub-regions, and specifically, the step 30231 includes:
Step 30231, dividing the first image into M columns in the transverse direction, and dividing each column into N rows in the longitudinal direction, to obtain M × N sub-regions, the size of each sub-region being positively correlated with the size of the text.
Similarly, in this embodiment, the first image is divided into M columns in the transverse direction and each column into N rows in the longitudinal direction, obtaining M × N sub-regions; when the first image is divided, the size of each sub-region is positively correlated with the size of the text. Illustratively, as shown in fig. 8, the larger the characters in the first image, the larger the area of each divided sub-region; as shown in fig. 9, the smaller the characters in the first image, the smaller the area of each divided sub-region.
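A hypothetical sketch of how the grid could be chosen from the recognized character size (the function name and the `chars_per_side` heuristic are assumptions, not from the original):

```python
# Sketch: each sub-region's side is a fixed multiple of the character height,
# so larger text yields fewer, larger sub-regions, as in fig. 8 vs fig. 9.
def grid_for_text(img_w, img_h, char_height, chars_per_side=12):
    cell = max(1, char_height * chars_per_side)   # target sub-region side
    m = max(1, round(img_w / cell))               # M columns (transverse)
    n = max(1, round(img_h / cell))               # N rows (longitudinal)
    return m, n

coarse = grid_for_text(1200, 900, char_height=25)  # large characters
fine = grid_for_text(1200, 900, char_height=10)    # small characters
```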
It should be further noted that, in this embodiment, another way exists for the process of dividing the first image into a plurality of sub-regions, which is specifically executed by step 3024, and the method includes:
step 3024, determining a template type of the first image, obtaining a partition rule corresponding to the template type, and partitioning the first image according to the partition rule to obtain a plurality of sub-regions.
In another embodiment, after the first image is obtained, the template type of the first image is determined, the division rule corresponding to that template type is obtained, and the first image is divided according to the division rule, so as to obtain a plurality of sub-regions. For example, in an embodiment, the first image is an image of a test paper. After the first image is acquired, the template type of the test paper is determined, for example which question types its content consists of (choice questions, fill-in-the-blank questions, etc.) and where the different question types are located. A corresponding division rule is then obtained from the server according to the question types and their positions on the test paper; the division rule specifies a division manner for the position of each question type, for example one manner for the position of the fill-in-the-blank questions and another for the position of the choice questions. Finally, the first image is divided according to the division rule to obtain a plurality of sub-regions. The division manner of the first image can thus be flexibly selected according to actual needs, and the division rule supports custom settings by the user, meeting the requirements of various scenarios.
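The template-driven division could be sketched as a rule table keyed by question type (all names, regions, and row counts below are hypothetical illustrations, not from the original):

```python
# Sketch: each question type maps to a division routine for its region.
def split_rows(region, n):
    x, y, w, h = region
    return [(x, y + i * h // n, w, h // n) for i in range(n)]

DIVISION_RULES = {  # illustrative per-question-type rules
    "choice":  lambda region: split_rows(region, 4),
    "fill_in": lambda region: split_rows(region, 2),
}

def divide_by_template(layout):
    # `layout` maps question type -> its (x, y, w, h) region on the paper.
    subregions = []
    for qtype, region in layout.items():
        subregions.extend(DIVISION_RULES[qtype](region))
    return subregions

subs = divide_by_template({"choice": (0, 0, 800, 400),
                           "fill_in": (0, 400, 800, 200)})
```

A user-defined rule is just another entry in `DIVISION_RULES`, which is how the custom-setting support described above could be realized.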
This is a specific process of dividing the first image in step 302 to obtain a plurality of sub-regions.
On the basis of the above embodiment, the shooting mode of the target area is any one of a global scanning mode, a local scanning mode and a target tracking mode, and the camera module shoots in a different manner in each mode, so as to meet the requirements of different shooting conditions.
In one embodiment, the shooting mode is a global scanning mode, and correspondingly, in step 102, according to the shooting mode and the pre-divided sub-regions, the camera module is controlled to rotate, so that the camera module shoots images of different sub-regions in a rotating process, which is specifically executed in step 1021, and specifically includes:
and step 1021, determining a path of the camera module for shooting the subareas according to the pre-divided subareas, and controlling the camera module to rotate according to the path so as to enable the camera module to shoot images of different subareas in the rotating process.
When the shooting mode is the global scanning mode, an image of the entire target area, or of the entire target object in the target area, needs to be shot, for example the characters of the entire target area or the characters on a test paper in the target area. In this case, a path along which the camera shoots the pre-divided sub-regions needs to be determined. In this embodiment, the path may be set in advance, or the terminal device may automatically calculate an optimal path according to the target object. For example, in an embodiment, for the sub-regions shown in fig. 4, the path along which the camera module shoots the sub-regions is determined as: 1-1, 1-2 … … 1-7, 2-7, 2-6 … … 2-1 … … 5-7, as shown in fig. 10. Alternatively, the path along which the camera shoots the sub-regions is determined as 1-1, 2-1, … … 5-2, 4-2 … … 1-2 … … 5-7, as shown in fig. 11.
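The row-wise serpentine path of fig. 10 can be generated as follows (a sketch; labels follow the `row-col` naming of fig. 4):

```python
# Sketch: boustrophedon path over a rows x cols grid of sub-regions.
def serpentine_path(rows, cols):
    path = []
    for r in range(1, rows + 1):
        # Odd rows left-to-right, even rows right-to-left.
        cs = range(1, cols + 1) if r % 2 == 1 else range(cols, 0, -1)
        path.extend(f"{r}-{c}" for c in cs)
    return path

path = serpentine_path(5, 7)  # the 5x7 grid of fig. 4
```

Swapping the roles of rows and columns in the same routine yields the column-wise path of fig. 11.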
After the path of the camera module for shooting the subareas is determined, the terminal equipment can control the camera module to rotate according to the path, so that the camera module can shoot each subarea according to the sequence corresponding to the path, and images of different subareas are obtained.
In another embodiment, the shooting mode is a local scanning mode, and correspondingly, in step 102, according to the shooting mode and the pre-divided sub-regions, the camera module is controlled to rotate, so that the camera module shoots images of different sub-regions in the rotating process, which is specifically executed by steps 1022 to 1024, and specifically includes:
and step 1022, acquiring a global image of the target area in real time.
When the shooting mode is the local scanning mode, the camera module needs to scan a local area in the target area. First, a global image of the target area needs to be acquired in real time. The global image can be obtained by shooting through a camera with a large field angle in the camera module, or can be obtained by shooting the target area through a rotatable camera in the camera module in a global scanning mode.
And step 1023, determining a first area to be shot in the global image.
After the global image is obtained, the first area to be shot needs to be determined in the global image. In this embodiment, the first area may be set in advance, for example a fixed area in the global image, or the target feature of the first area may be set in the terminal device in advance; after acquiring the global image, the terminal device automatically performs image recognition on it and locates the target feature, thereby determining the first area to be shot. For example, in one embodiment, when the text content filled in on a test paper in the target area is shot in the local scanning mode, the target feature is set as the handwriting filled in by the user. In this case, the global image of the target area is shot first, the test paper area corresponding to the test paper is then identified from the global image, and finally the area of the handwriting filled in by the user is identified from the test paper area using an image processing algorithm; this area is taken as the first area.
Step 1024, determining a first sub-area included in the first area, determining a first path of the camera module for shooting the first sub-area, and controlling the camera module to rotate according to the first path so that the camera module shoots an image of the first sub-area in the rotating process.
After the first area is determined in the global image, the first sub-area contained in the first area is further determined, a first path when the camera module shoots the first sub-area is determined according to the first sub-area, and then the camera module can be controlled to rotate according to the first path, so that the camera module shoots the image of the first sub-area in the rotating process. For example, in one embodiment, a first path of the camera module is determined according to an area of handwriting filled by a user, and then the camera module is controlled to rotate according to the first path, so that the camera module can shoot an image corresponding to the area of the handwriting filled by the user in the rotating process.
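One way to derive the first sub-regions and a first path from the detected first area is to intersect its bounding box with the grid of sub-regions (a sketch assuming uniform grid cells; the function name is hypothetical):

```python
# Sketch: list the (row, col) sub-region indices a bounding box overlaps;
# visiting them in order gives a simple first path for the camera module.
def covered_subregions(box, img_w, img_h, m_cols, n_rows):
    x0, y0, x1, y1 = box                     # first area in global-image pixels
    cw, rh = img_w / m_cols, img_h / n_rows  # cell width and height
    cols = range(int(x0 // cw), min(m_cols - 1, int((x1 - 1) // cw)) + 1)
    rows = range(int(y0 // rh), min(n_rows - 1, int((y1 - 1) // rh)) + 1)
    return [(r, c) for r in rows for c in cols]

# A handwriting box spanning the middle of a 700x500 image on a 5x7 grid:
first_path = covered_subregions((150, 120, 420, 260), 700, 500, 7, 5)
```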
In one embodiment, the shooting mode is a target tracking mode, and correspondingly, in step 102, the camera module is controlled to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions in the rotating process, which is specifically executed in steps 1025-1028, and specifically includes:
step 1025 determines the position of the target object in the target area and determines a second sub-area corresponding to the position.
When the photographing mode is the target tracking mode, it is necessary to determine the position of the target object in the target area. For example, the target object may be tracked by an event camera (dynamic vision sensor) on the camera module, so as to determine the position of the target object in the target area. And then, according to the position of the target object in the target area, determining a second sub-area corresponding to the position, namely the second sub-area where the target object is located. Illustratively, in one embodiment, the target object is a pen or a hand used by a user, the pen or the hand used by the user is tracked by using an event camera, the position of the pen or the hand used by the user is identified, and the second sub-area where the pen or the hand used by the user is located is determined according to the position.
Step 1026, determining a target angle, corresponding to the second sub-region, to which the camera module is to rotate.
After the second sub-region where the target object is located is determined, the target angle of the camera module corresponding to that sub-region is determined. In this embodiment, the angle of the camera module corresponding to each sub-region can be predetermined. For example, for the sub-region in row i, column j in fig. 4, the corresponding angle (αi, αj) of the camera module is determined, where αi and αj respectively represent the angle to which the camera module needs to rotate about the transverse rotation axis and the angle to which it needs to rotate about the longitudinal rotation axis, so that when the camera module rotates to (αi, αj), the center of the image shot by the camera module is aligned with the center point of the sub-region.
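The per-sub-region angle lookup just described can be sketched with a simple linear model (an assumption for illustration; the real mapping depends on the camera geometry and calibration):

```python
# Sketch: map the center of the sub-region in row i, column j of an
# n_rows x m_cols grid onto the camera's pan/tilt ranges, giving (alpha_i, alpha_j).
def target_angle(i, j, n_rows, m_cols,
                 pan_range=(120.0, -120.0), tilt_range=(60.0, -60.0)):
    fx = (j + 0.5) / m_cols   # fractional position of the cell center, 0..1
    fy = (i + 0.5) / n_rows
    pan = pan_range[0] + fx * (pan_range[1] - pan_range[0])
    tilt = tilt_range[0] + fy * (tilt_range[1] - tilt_range[0])
    return pan, tilt

center = target_angle(2, 3, 5, 7)  # middle sub-region of the 5x7 grid
```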
And step 1027, controlling the camera module to rotate to a target angle so that the camera module shoots an image of the second subregion.
And then, controlling the camera module to rotate to a corresponding target angle, so that the camera module can shoot the image of the second sub-area. Illustratively, when the second sub-area is an area where a pen being used by the user or a hand of the user is located, the handwriting filled by the user can be acquired by shooting the image of the second sub-area, so that the corresponding auxiliary function can be realized according to the handwriting filled by the user later.
The above is a specific process of controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions in step 102, so that the camera module shoots images of different sub-regions in the rotating process.
It should be further noted that, in this embodiment, there are also a plurality of methods for acquiring the target information from the image in step 103, so as to meet different requirements of the user.
In one embodiment, the step 103 of obtaining the target information from the image, specifically executed by steps 1031 to 1032, includes:
and step 1031, merging the images to obtain a second image.
In one embodiment, after the images of the different sub-regions are obtained by shooting, they are combined to obtain the second image. For example, in the global scanning mode, the images of each sub-region are combined to obtain a second image, i.e. a complete image of the target area. The images may be combined according to the shooting time or the shooting path; the specific combining manner is not limited in this embodiment.
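A minimal sketch of the merging step, representing each sub-region image as a 2-D list of pixel values and stitching by grid position (shooting-path ordering is assumed to have already been resolved into grid order):

```python
# Sketch: tiles[r][c] is the image shot for the sub-region in row r, column c;
# bands of tiles are stitched row by row into the second image.
def merge_tiles(tiles):
    merged = []
    for band in tiles:                # one grid row of sub-images
        for y in range(len(band[0])): # pixel rows within the band
            row = []
            for tile in band:
                row.extend(tile[y])
            merged.append(row)
    return merged

a = [[1, 2], [3, 4]]
b = [[5, 6], [7, 8]]
second_image = merge_tiles([[a, b]])  # two tiles side by side
```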
Step 1032, performing character recognition on the second image to obtain target characters, obtaining corresponding standard characters from the database according to the target characters, and comparing the target characters with the standard characters to obtain a comparison result.
In one embodiment, the terminal device is further connected to a database. When an object such as a test paper or a textbook is placed in the target area, the combined second image contains a picture of that object, so character recognition is performed on the second image to obtain the target characters on the test paper or textbook, and the database is then searched according to the target characters to obtain the corresponding standard characters. In one embodiment, the corresponding standard characters can be obtained from the database according to the coincidence rate between the target characters as a whole and a passage of text in the database. The target characters are then compared with the standard characters to obtain a comparison result, and a corresponding learning auxiliary function is executed according to that result. For example, if the question type on the test paper is a fill-in-the-blank question, whether the content filled in by the user is correct can be determined according to the comparison result; if it is correct, a corresponding prompt can be sent, and if it is incorrect, the correct standard characters can be displayed for the user's reference, the knowledge points corresponding to the wrongly filled places can be looked up according to the comparison result, and corresponding practice questions can be found according to those knowledge points and pushed to the user.
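The coincidence-rate lookup against the database could be sketched with Python's standard `difflib` (the database here is a mock list of passages and the threshold is an assumption, not from the original):

```python
# Sketch: return the database passage with the highest overlap ratio to the
# recognized target text, or None if nothing exceeds the threshold.
import difflib

def find_standard_text(target, database, threshold=0.6):
    best, best_ratio = None, threshold
    for passage in database:
        ratio = difflib.SequenceMatcher(None, target, passage).ratio()
        if ratio > best_ratio:
            best, best_ratio = passage, ratio
    return best

db = ["the quick brown fox jumps over the lazy dog",
      "pack my box with five dozen liquor jugs"]
# OCR output with one error still matches the right passage:
std = find_standard_text("the quick brown fox jumps over the lazy dg", db)
```

Comparing `std` against the recognized target text then yields the comparison result described above.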
In another embodiment, the step 103 of obtaining the target information from the image, specifically performed by the step 1033, includes:
and 1033, performing character recognition on the image to obtain a target character, acquiring a corresponding standard character from the database according to the target character, and comparing the target character with the standard character to obtain a comparison result.
In another embodiment, the images of the sub-regions may not be combined; instead, the image of each sub-region is processed directly. Illustratively, when the shooting mode is the local scanning mode, character recognition is performed on each shot image to obtain target characters, corresponding standard characters are obtained from the database according to the target characters, the target characters are compared with the standard characters to obtain a comparison result, and finally a corresponding learning auxiliary function is executed according to the comparison result. For the specific process, refer to step 1032; it is not described again in this embodiment.
In another embodiment, the step 103 of obtaining the target information from the image is specifically performed by steps 1034 to 1036, and specifically includes:
step 1034, acquiring all images shot in a preset time period, and extracting handwriting information from each image of all the images.
In one embodiment, after the images are obtained, all the images shot within a preset time period are acquired, image processing is performed on each image, and the handwriting information filled in by the user is extracted from it. For example, when the shooting mode is the target tracking mode and the tracked target object is the pen in use by the user, after the image of the sub-region where the pen is located is shot, all the images shot over a recent period are acquired. It can be understood that the preset time period can be set according to actual needs; for example, it may be the 1 minute before the current time point, and its specific length is not limited in this embodiment. Then, the handwriting information filled in by the user is extracted from each of the acquired images.
Step 1035, determining a stroke order sequence according to the handwriting information, and obtaining a corresponding standard stroke order sequence.
Then, the user's handwriting is determined according to the handwriting information extracted from all the images, character recognition is performed on the handwriting information to determine the character it corresponds to, thereby determining the character written by the user, and the standard stroke order sequence of that character is acquired. Illustratively, in one embodiment, the terminal device is connected to a server on the Internet, and the terminal device may obtain the standard stroke order of the character from that server. It can be understood that, in this embodiment, the manner of acquiring the standard stroke order of the character can be set according to actual needs.
Step 1036, comparing the stroke order sequence with the standard stroke order sequence to obtain a comparison result.
Finally, the stroke order sequence filled in by the user is compared with the standard stroke order sequence to obtain a comparison result, so as to confirm whether the stroke order used while writing the character was correct. If the stroke order is incorrect, a corresponding prompt can be sent to point out the wrong strokes in the sequence and provide the correctly annotated stroke order sequence for the user to consult and correct, thereby assisting the user in learning.
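A sketch of the stroke-order comparison in step 1036 (the stroke names are hypothetical placeholders for whatever representation the recognizer produces):

```python
# Sketch: compare the user's stroke order with the standard order and
# report the index of the first wrong stroke, if any.
def compare_stroke_order(user_strokes, standard_strokes):
    for idx, (got, want) in enumerate(zip(user_strokes, standard_strokes)):
        if got != want:
            return {"correct": False, "first_error": idx,
                    "expected": want, "got": got}
    if len(user_strokes) != len(standard_strokes):
        # Missing or extra strokes at the end of the sequence.
        return {"correct": False,
                "first_error": min(len(user_strokes), len(standard_strokes)),
                "expected": None, "got": None}
    return {"correct": True}

result = compare_stroke_order(["horizontal", "dot", "vertical"],
                              ["horizontal", "vertical", "dot"])
```

The returned dictionary carries enough detail to drive the prompt described above (which stroke was wrong and what was expected).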
As shown in fig. 12, fig. 12 is a schematic structural diagram of an image capturing device according to an embodiment of the present invention, where the image capturing device is applied to a terminal device, the terminal device includes a camera module, the camera module can freely rotate to capture images in different areas, and the image capturing device includes: a mode determination module 401, a control module 402, and an information acquisition module 403;
the mode determination module 401 is used to determine a shooting mode of the target area.
The control module 402 is configured to control the camera module to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions in the rotating process; the sub-regions are obtained by dividing the target region in advance.
The information acquiring module 403 is used for acquiring target information from the image.
On the basis of the above embodiment, the control module 402 is configured to divide the target area, and includes:
the camera module is used for determining the angle range of the camera module when the camera module rotates according to the target area; determining the step length of each rotation of the camera module; the camera module is controlled to rotate within an angle range according to the step length, and the area shot by the camera module after each rotation is divided into a sub-area.
On the basis of the above embodiment, the control module 402 is configured to divide the target area, and includes:
the system comprises a camera, a display unit and a control unit, wherein the camera is used for acquiring a target image, and the target image is obtained by shooting a target area; determining the boundary of a target object in a target image, determining a first image corresponding to the range of the boundary in the target image, and dividing the first image to obtain a plurality of sub-regions.
On the basis of the foregoing embodiment, the control module 402 is configured to divide the first image into a plurality of sub-regions, including:
the method is used for uniformly dividing the first image into M columns in the transverse direction and each column is non-uniformly divided into N rows in the longitudinal direction, and M multiplied by N sub-regions are obtained.
On the basis of the foregoing embodiment, the control module 402 is configured to divide the first image into a plurality of sub-regions, including:
the character recognition module is used for carrying out character recognition on the first image to obtain characters in the first image; determining the size of characters, and dividing the first image according to the size of the characters to obtain a plurality of sub-regions.
On the basis of the foregoing embodiment, the control module 402 is configured to divide the first image according to the size of the text to obtain a plurality of sub-regions, including:
the method is used for uniformly dividing the first image into M rows in the horizontal direction and each row is non-uniformly divided into N rows in the longitudinal direction to obtain M multiplied by N sub-regions, and the size of each sub-region is positively correlated with the size of the characters.
On the basis of the foregoing embodiment, the control module 402 is configured to divide the first image into a plurality of sub-regions, including:
the method is used for determining the template type of the first image, obtaining a dividing rule corresponding to the template type, and dividing the first image according to the dividing rule to obtain a plurality of sub-regions.
On the basis of the above-described embodiment, the shooting mode is any one of the global scanning mode, the local scanning mode, and the target tracking mode.
On the basis of the foregoing embodiment, the shooting mode is a global scanning mode, and correspondingly, the control module 402 is configured to control the camera module to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions in a rotating process, including:
the camera module is used for determining a path of the camera module for shooting the subareas according to the pre-divided subareas and controlling the camera module to rotate according to the path so as to enable the camera module to shoot images of different subareas in the rotating process.
On the basis of the foregoing embodiment, the shooting mode is a local scanning mode, and correspondingly, the control module 402 is configured to control the camera module to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions in a rotating process, including:
the system comprises a global image acquisition module, a global image acquisition module and a display module, wherein the global image acquisition module is used for acquiring a global image of a target area in real time; determining a first area needing to be shot in the global image; determining a first sub-area contained in the first area, determining a first path of the camera module for shooting the first sub-area, and controlling the camera module to rotate according to the first path so as to enable the camera module to shoot an image of the first sub-area in the rotating process.
On the basis of the foregoing embodiment, the shooting mode is a target tracking mode, and correspondingly, the control module 402 is configured to control the camera module to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions in a rotating process, and the method includes:
the position of the target object in the target area is determined, and a second sub-area corresponding to the position is determined; determining a target angle of the camera module which rotates to correspond to the second sub-area; and controlling the camera module to rotate to a target angle so that the camera module shoots an image of the second sub-area.
On the basis of the above embodiment, the information obtaining module 403 is configured to obtain the target information from the image, and includes:
the image merging device is used for merging the images to obtain a second image; and performing character recognition on the second image to obtain a target character, acquiring a corresponding standard character from a database according to the target character, and comparing the target character with the standard character to obtain a comparison result.
On the basis of the above embodiment, the information obtaining module 403 is configured to obtain the target information from the image, and includes:
the system is used for carrying out character recognition on the image to obtain target characters, acquiring corresponding standard characters from a database according to the target characters, and comparing the target characters with the standard characters to obtain a comparison result.
On the basis of the above embodiment, the information acquiring module 403 is configured to acquire target information from an image, and includes:
the system comprises a camera, a display unit and a processing unit, wherein the camera is used for acquiring all images shot in a preset time period and extracting handwriting information from each image of all the images; determining a stroke order sequence according to the handwriting information, and acquiring a corresponding standard stroke order sequence according to the handwriting information; and comparing the stroke order with the standard stroke order to obtain a comparison result.
The present embodiment further provides a terminal device 50, as shown in fig. 13; the terminal device includes a processor 500 and a memory 501;
the memory 501 is used for storing a computer program 502 and transmitting the computer program 502 to the processor;
the processor 500 is configured to execute the steps in one of the above-described embodiments of the image capturing method according to the instructions in the computer program 502.
Illustratively, the computer program 502 may be partitioned into one or more modules/units that are stored in the memory 501 and executed by the processor 500 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 502 in the terminal device 50.
The terminal device 50 may be a computing device such as a desktop computer, a notebook, a palm computer, and a cloud server. The terminal device 50 may include, but is not limited to, a processor 500 and a memory 501. Those skilled in the art will appreciate that fig. 13 is merely an example of a terminal device 50 and does not constitute a limitation of the terminal device 50 and may include more or less components than those shown, or combine certain components, or different components, for example, the terminal device 50 may further include an input-output device, a network access device, a bus, etc.
The processor 500 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory 501 may be an internal storage unit of the terminal device 50, such as a hard disk or memory of the terminal device 50. The memory 501 may also be an external storage device of the terminal device 50, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card or a Flash Card equipped on the terminal device 50. Further, the memory 501 may include both an internal storage unit and an external storage device of the terminal device 50. The memory 501 is used for storing the computer program and other programs and data required by the terminal device 50, and may also be used to temporarily store data that has been output or is to be output.
It can be clearly understood by those skilled in the art that, for convenience and simplicity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts shown as units may or may not be physical units; they may be located in one place or distributed across multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, each unit may exist alone physically, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods of the embodiments of the present invention. The aforementioned storage medium includes a USB flash drive, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disc, and other media capable of storing computer programs.
An embodiment of the present invention also provides a storage medium containing computer-executable instructions which, when executed by a computer processor, perform an image shooting method. The method is applied to a terminal device comprising a camera module that can rotate freely to shoot images of different areas, and comprises the following steps:
determining a shooting mode of a target area;
controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions during the rotation, wherein the sub-regions are obtained by dividing the target area in advance;
acquiring target information from the images.
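The three claimed steps can be sketched, purely as an illustration, in the following Python; the `Camera` and `SubRegion` classes and the angle values are hypothetical stand-ins, not the patented implementation:

```python
from dataclasses import dataclass

@dataclass
class SubRegion:
    angle: float  # hypothetical camera angle (degrees) framing this sub-region

class Camera:
    """Toy stand-in for the freely rotating camera module."""
    def __init__(self):
        self.angle = 0.0

    def rotate_to(self, angle):
        self.angle = angle

    def shoot(self):
        return f"image@{self.angle:.0f}"  # placeholder for a captured frame

def capture_target_area(camera, sub_regions):
    """Rotate through the pre-divided sub-regions, shooting one image per
    sub-region during the rotation (step 2 of the method)."""
    images = []
    for region in sub_regions:
        camera.rotate_to(region.angle)
        images.append(camera.shoot())
    return images  # step 3 would then extract target information from these

regions = [SubRegion(a) for a in (0, 15, 30)]
print(capture_target_area(Camera(), regions))  # → ['image@0', 'image@15', 'image@30']
```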
It should be noted that the foregoing is only a preferred embodiment of the present invention and the technical principles applied. Those skilled in the art will appreciate that the present invention is not limited to the specific embodiments described herein, and that various obvious changes, adaptations, and substitutions are possible without departing from the scope of the present invention. Therefore, although the embodiments of the present invention have been described in some detail above, the invention is not limited to those embodiments; many other equivalent embodiments may be included without departing from the inventive concept, and the scope of the invention is determined by the appended claims.
Claims (17)
1. An image shooting method applied to a terminal device, wherein the terminal device comprises a camera module that can rotate freely to shoot images of different areas, the method comprising:
determining a shooting mode of a target area;
controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions during the rotation, wherein the sub-regions are obtained by dividing the target area in advance;
acquiring target information from the images.
2. An image capturing method as claimed in claim 1, wherein the dividing of the target area comprises:
determining the angle range of the camera module when rotating according to the target area;
determining the step length of each rotation of the camera module;
controlling the camera module to rotate within the angle range according to the step length, and dividing the area shot by the camera module after each rotation into one sub-region.
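The rotation-based division above (an angle range swept in fixed steps, one sub-region per step) might be modelled as in this sketch; degrees and half-open spans are illustrative assumptions, not part of the claim:

```python
def divide_by_rotation(angle_min, angle_max, step):
    """Sweep the camera's angle range in fixed steps; the area framed at each
    step becomes one sub-region, modelled here as a half-open angle span."""
    spans = []
    start = angle_min
    while start < angle_max:
        end = min(start + step, angle_max)  # clamp the last span to the range
        spans.append((start, end))
        start = end
    return spans

print(divide_by_rotation(0, 90, 30))  # → [(0, 30), (30, 60), (60, 90)]
```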
3. An image capturing method as claimed in claim 1, wherein the dividing of the target area comprises:
acquiring a target image, wherein the target image is obtained by shooting the target area;
determining a boundary of a target object in the target image, determining a first image corresponding to the range of the boundary in the target image, and dividing the first image to obtain a plurality of sub-regions.
4. An image capturing method as claimed in claim 3, wherein dividing the first image into a plurality of sub-regions comprises:
uniformly dividing the first image into M columns in the transverse direction, and non-uniformly dividing each column into N rows in the longitudinal direction, to obtain M × N sub-regions.
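As an illustrative sketch of such an M-column, non-uniform-N-row division (the box representation and fractional row heights are assumptions; the claim does not prescribe them):

```python
def divide_image(width, height, m, row_fractions):
    """Split an image uniformly into m columns (transverse) and non-uniformly
    into rows (longitudinal) per row_fractions (which should sum to 1),
    returning m * len(row_fractions) boxes as (left, top, right, bottom)."""
    col_w = width / m
    boxes = []
    for c in range(m):
        top = 0.0
        for frac in row_fractions:  # non-uniform row heights
            bottom = top + frac * height
            boxes.append((c * col_w, top, (c + 1) * col_w, bottom))
            top = bottom
    return boxes

boxes = divide_image(300, 200, 3, [0.5, 0.3, 0.2])
print(len(boxes))  # → 9, i.e. M = 3 columns × N = 3 rows
```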
5. An image capturing method as claimed in claim 3, wherein dividing the first image into a plurality of sub-regions comprises:
performing character recognition on the first image to obtain characters in the first image;
determining the size of the characters, and dividing the first image according to the character size to obtain a plurality of sub-regions.
6. The image capturing method according to claim 5, wherein dividing the first image according to the character size to obtain a plurality of sub-regions comprises:
the first image is uniformly divided into M rows in the transverse direction, each row is non-uniformly divided into N rows in the longitudinal direction, M multiplied by N sub-regions are obtained, and the size of each sub-region is positively correlated with the size of the characters.
7. An image capturing method as claimed in claim 3, wherein dividing the first image into a plurality of sub-regions comprises:
determining the template type of the first image, acquiring a division rule corresponding to the template type, and dividing the first image according to the division rule to obtain a plurality of sub-regions.
8. An image capturing method according to claim 1, wherein the capturing mode is any one of a global scanning mode, a local scanning mode, and a target tracking mode.
9. The image capturing method according to claim 8, wherein the capturing mode is the global scan mode, and correspondingly, controlling the camera module to rotate according to the capturing mode and the pre-divided sub-regions so that the camera module captures images of different sub-regions during the rotation comprises:
determining a path along which the camera module shoots the sub-regions according to the pre-divided sub-regions, and controlling the camera module to rotate along the path, so that the camera module shoots images of different sub-regions during the rotation.
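The claim leaves the shooting path unspecified; one plausible choice, shown here purely as an assumption, is a serpentine sweep over the sub-region grid so that consecutive shots need only a small rotation:

```python
def serpentine_path(m, n):
    """Visit an m-column × n-row grid of sub-regions row by row, alternating
    direction on each row to minimize back-and-forth camera rotation."""
    path = []
    for r in range(n):
        cols = range(m) if r % 2 == 0 else range(m - 1, -1, -1)
        path.extend((r, c) for c in cols)
    return path

print(serpentine_path(3, 2))  # → [(0, 0), (0, 1), (0, 2), (1, 2), (1, 1), (1, 0)]
```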
10. The image capturing method according to claim 8, wherein the capturing mode is the local scanning mode, and correspondingly, controlling the camera module to rotate according to the capturing mode and the pre-divided sub-regions so that the camera module captures images of different sub-regions during the rotation comprises:
acquiring a global image of the target area in real time;
determining a first area needing to be shot in the global image;
determining a first sub-region contained in the first area, determining a first path along which the camera module shoots the first sub-region, and controlling the camera module to rotate along the first path, so that the camera module shoots an image of the first sub-region during the rotation.
11. An image capturing method according to claim 8, wherein the capturing mode is the target tracking mode, and correspondingly, controlling the camera module to rotate according to the capturing mode and the pre-divided sub-regions so that the camera module captures images of different sub-regions during the rotation comprises:
determining the position of a target object in the target area, and determining a second sub-region corresponding to the position;
determining a target angle to which the camera module rotates so as to correspond to the second sub-region;
controlling the camera module to rotate to the target angle, so that the camera module shoots an image of the second sub-region.
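The target-tracking steps above can be sketched as follows; representing sub-regions as one-dimensional angle spans and taking the span centre as the target angle are simplifying assumptions for illustration:

```python
def track_target(position, spans):
    """Find the sub-region (angle span) containing the target's position and
    return the camera angle, taken here as the span centre, to rotate to."""
    for lo, hi in spans:
        if lo <= position < hi:
            return (lo + hi) / 2  # target angle for the matching sub-region
    raise ValueError("target lies outside the target area")

print(track_target(42, [(0, 30), (30, 60), (60, 90)]))  # → 45.0
```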
12. An image capturing method according to claim 1, wherein acquiring the target information from the image comprises:
merging the images to obtain a second image;
performing character recognition on the second image to obtain target characters, acquiring corresponding standard characters from a database according to the target characters, and comparing the target characters with the standard characters to obtain a comparison result.
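The comparison step above (character recognition itself aside) can be sketched with Python's standard `difflib`; fetching the standard characters from a database is stubbed out here as a plain string argument:

```python
import difflib

def compare_text(target_text, standard_text):
    """Compare recognized (OCR) text against the standard text: return a
    similarity ratio and the list of non-matching segments."""
    sm = difflib.SequenceMatcher(None, target_text, standard_text)
    mismatches = [(op, target_text[i1:i2], standard_text[j1:j2])
                  for op, i1, i2, j1, j2 in sm.get_opcodes() if op != "equal"]
    return sm.ratio(), mismatches

ratio, diffs = compare_text("helo world", "hello world")
print(diffs)  # a single insertion: the recognized text is missing one "l"
```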
13. An image capturing method according to claim 1, wherein acquiring the target information from the image comprises:
performing character recognition on the image to obtain target characters, acquiring corresponding standard characters from a database according to the target characters, and comparing the target characters with the standard characters to obtain a comparison result.
14. An image capturing method according to claim 1, wherein acquiring the target information from the image comprises:
acquiring all images shot within a preset time period, and extracting handwriting information from each of the images;
determining a stroke order sequence according to the handwriting information, and acquiring a corresponding standard stroke order sequence according to the handwriting information;
comparing the stroke order sequence with the standard stroke order sequence to obtain a comparison result.
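The stroke-order comparison can be sketched as a simple sequence check; the stroke names and the first-mismatch return convention are illustrative assumptions, not the claimed algorithm:

```python
def compare_stroke_order(strokes, standard):
    """Return the index of the first stroke written out of order (or the
    index where the sequences diverge in length), or -1 for a full match."""
    for i, (got, want) in enumerate(zip(strokes, standard)):
        if got != want:
            return i
    return -1 if len(strokes) == len(standard) else min(len(strokes), len(standard))

# e.g. for 十, the standard order writes the horizontal stroke before the vertical
print(compare_stroke_order(["vertical", "horizontal"],
                           ["horizontal", "vertical"]))  # → 0
```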
15. An image shooting device applied to a terminal device, wherein the terminal device comprises a camera module that can rotate freely to shoot images of different areas, the image shooting device comprising: a mode determining module, a control module, and an information acquiring module;
the mode determining module is used for determining a shooting mode of a target area;
the control module is used for controlling the camera module to rotate according to the shooting mode and the pre-divided sub-regions, so that the camera module shoots images of different sub-regions during the rotation, wherein the sub-regions are obtained by dividing the target area in advance;
the information acquisition module is used for acquiring target information from the image.
16. A terminal device, characterized in that the terminal device comprises a processor and a memory;
the memory is used for storing a computer program and transmitting the computer program to the processor;
the processor is configured to perform the image shooting method according to any one of claims 1-14 according to instructions in the computer program.
17. A storage medium storing computer-executable instructions which, when executed by a computer processor, perform the image shooting method according to any one of claims 1-14.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210303394.2A CN114500857B (en) | 2022-03-24 | 2022-03-24 | Image shooting method, device, terminal equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210303394.2A CN114500857B (en) | 2022-03-24 | 2022-03-24 | Image shooting method, device, terminal equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114500857A true CN114500857A (en) | 2022-05-13 |
CN114500857B CN114500857B (en) | 2024-10-29 |
Family
ID=81487669
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210303394.2A Active CN114500857B (en) | 2022-03-24 | 2022-03-24 | Image shooting method, device, terminal equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114500857B (en) |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012195797A (en) * | 2011-03-17 | 2012-10-11 | Mitsubishi Electric Corp | Pan-focus image generating device |
CN106506965A (en) * | 2016-11-29 | 2017-03-15 | 努比亚技术有限公司 | A kind of image pickup method and terminal |
CN107479729A (en) * | 2017-06-20 | 2017-12-15 | 广州视源电子科技股份有限公司 | Touch point positioning method, device and system, display terminal and writing pen |
CN107749944A (en) * | 2017-09-22 | 2018-03-02 | 华勤通讯技术有限公司 | A kind of image pickup method and device |
CN109639952A (en) * | 2019-01-23 | 2019-04-16 | 广东小天才科技有限公司 | Photographing question searching method and electronic equipment |
CN110807803A (en) * | 2019-10-11 | 2020-02-18 | 北京文香信息技术有限公司 | Camera positioning method, device, equipment and storage medium |
CN114202783A (en) * | 2021-11-10 | 2022-03-18 | 深圳中电港技术股份有限公司 | Target tracking method based on millimeter wave radar |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN116668645A (en) * | 2023-08-01 | 2023-08-29 | 成都汉度科技有限公司 | Substation moving ring monitoring method and equipment |
CN116668645B (en) * | 2023-08-01 | 2023-09-29 | 成都汉度科技有限公司 | Substation moving ring monitoring method and equipment |
CN117636698A (en) * | 2023-12-28 | 2024-03-01 | 北京奥康达体育科技有限公司 | Digital pull-up auxiliary training system for student examination |
CN117636698B (en) * | 2023-12-28 | 2024-06-07 | 北京奥康达体育科技有限公司 | Digital pull-up auxiliary training system for student examination |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US12100160B2 (en) | Edge detection method and device, electronic apparatus and storage medium | |
US10452953B2 (en) | Image processing device, image processing method, program, and information recording medium | |
CN114500857B (en) | Image shooting method, device, terminal equipment and storage medium | |
CN107920257B (en) | Video key point real-time processing method and device and computing equipment | |
CN109120854B (en) | Image processing method, image processing device, electronic equipment and storage medium | |
CN111091590A (en) | Image processing method, image processing device, storage medium and electronic equipment | |
US11238276B2 (en) | Automatic image capture system based on a determination and verification of a physical object size in a captured image | |
US11594045B2 (en) | Method for determining correct scanning distance using augmented reality and machine learning models | |
CN111163265A (en) | Image processing method, image processing device, mobile terminal and computer storage medium | |
US20200218343A1 (en) | Gaze point compensation method and apparatus in display device, and display device | |
CN110493514B (en) | Image processing method, storage medium, and electronic device | |
CN112462941A (en) | Teaching interaction method, device, system and medium based on gesture recognition | |
WO2021168804A1 (en) | Image processing method, image processing apparatus and image processing system | |
CN114120163A (en) | Video frame processing method and device, and related equipment and storage medium thereof | |
CN108289176B (en) | Photographing question searching method, question searching device and terminal equipment | |
CN111179166B (en) | Image processing method, device, equipment and computer readable storage medium | |
CN114390197B (en) | Shooting method and device, electronic equipment and readable storage medium | |
CN115514897A (en) | Method and device for processing image | |
CN109547678B (en) | Processing method, device, equipment and readable storage medium | |
CN115019515B (en) | Imaging control method and system | |
JP2017199288A (en) | Image processing device, image processing method and program | |
KR102196794B1 (en) | System and method for supporting reading by linking additional content to book | |
CN114882107A (en) | Data processing method and device | |
CN111768338A (en) | Method and device for splicing test question images, electronic equipment and storage medium | |
CN112672057A (en) | Shooting method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant |