CN108270971B - Mobile terminal focusing method and device and computer readable storage medium - Google Patents
- Publication number
- CN108270971B (application CN201810100689.3A)
- Authority
- CN
- China
- Prior art keywords
- focusing
- mobile terminal
- user
- animation
- shooting
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/62—Control of parameters via user interfaces
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Human Computer Interaction (AREA)
- Telephone Function (AREA)
Abstract
The invention discloses a mobile terminal focusing method, device, and computer-readable storage medium. The method comprises: acquiring a shooting preview picture after detecting that a shooting application has started; monitoring for a focusing trigger event and obtaining the focus position and focal length; passing the focus position and focal length to the application layer; receiving the vertex coordinates calculated by the application layer from the focus position and focal length; and drawing a focusing animation in the on-screen preview picture according to the vertex coordinates.
Description
[ technical field ]
The invention relates to the field of information technology, and in particular to a mobile terminal focusing method, device, and computer-readable storage medium.
[ background of the invention ]
For a mobile phone camera in continuous autofocus (Continuous AF) mode, focusing is triggered when the preview picture is tapped, or when factors such as FV (sharpness), Gyro (gyroscope), or SAD (brightness change) readings change.
[ summary of the invention ]
In view of the above-mentioned drawbacks, the present invention provides a method, an apparatus and a computer readable storage medium for focusing a mobile terminal.
A method for focusing a mobile terminal comprises the following steps: acquiring a shooting preview picture after detecting that a shooting application has started; monitoring for a focusing trigger event and obtaining the focus position and focal length; passing the focus position and focal length to the application layer; receiving the vertex coordinates calculated by the application layer from the focus position and focal length; and drawing a focusing animation in the on-screen preview picture according to the vertex coordinates.
Optionally, a focusing event is triggered when a user tap on the preview picture is received, or after the mobile terminal shakes or the shooting environment changes, causing a change in sharpness, brightness, or gyroscope readings.
Optionally, the mobile terminal calculates a focusing duration from the focal length and plays a focusing animation of the corresponding length.
Optionally, the focusing duration is passed to the application layer, which calculates the scaling factor of the focusing animation from the focusing duration.
Optionally, the focusing animation can be selected by the user, and the animation chosen by the user is played during focusing.
Optionally, the method further comprises monitoring for a focusing completion event, and, after focusing is finished, sending a notification to prompt the user that focusing is complete.
Optionally, the application layer is an OpenGL drawing thread, and the OpenGL drawing thread runs on a graphics processor.
Optionally, the OpenGL drawing thread runs on the Android platform, and the shooting focus function is implemented by calling APIs provided by Android.
In addition, the invention also provides a device for focusing a mobile terminal, comprising a shooting unit, a display unit, a user input unit, a processor, a graphics processor, a memory, and a communication bus. The shooting unit obtains image data of still pictures or video; the display unit displays information input by the user or information provided to the user; the user input unit receives input numeric or character information and generates key signal inputs related to user settings and function control of the mobile terminal; the graphics processor processes the image data of still pictures or video obtained by the shooting unit; the communication bus implements connection and communication between the processor and the memory; the memory stores application data; and the processor executes the mobile terminal focusing program stored in the memory to implement the following steps:
The method comprises the following steps: acquiring a shooting preview picture after detecting that a shooting application has started; monitoring for a focusing trigger event and obtaining the focus position and focal length; passing the focus position and focal length to the application layer; receiving the vertex coordinates calculated by the application layer from the focus position and focal length; and drawing a focusing animation in the on-screen preview picture according to the vertex coordinates.
Optionally, a focusing event is triggered when a user tap on the preview picture is received, or after the mobile terminal shakes or the shooting environment changes, causing a change in sharpness, brightness, or gyroscope readings.
Optionally, the mobile terminal calculates a focusing duration from the focal length and plays a focusing animation of the corresponding length.
Optionally, the focusing duration is passed to the application layer, which calculates the scaling factor of the focusing animation from the focusing duration.
Optionally, the focusing animation can be selected by the user, and the animation chosen by the user is played during focusing.
Optionally, the method further comprises monitoring for a focusing completion event, and, after focusing is finished, sending a notification to prompt the user that focusing is complete.
Optionally, the application layer is an OpenGL drawing thread, and the OpenGL drawing thread runs on a graphics processor.
Optionally, the OpenGL drawing thread runs on the Android platform, and the shooting focus function is implemented by calling APIs provided by Android.
The present invention also provides a computer-readable storage medium storing one or more programs, which are executable by one or more processors, for implementing the method for focusing a mobile terminal, including:
The method comprises the following steps: acquiring a shooting preview picture after detecting that a shooting application has started; monitoring for a focusing trigger event and obtaining the focus position and focal length; passing the focus position and focal length to the application layer; receiving the vertex coordinates calculated by the application layer from the focus position and focal length; and drawing a focusing animation in the on-screen preview picture according to the vertex coordinates.
Optionally, a focusing event is triggered when a user tap on the preview picture is received, or after the mobile terminal shakes or the shooting environment changes, causing a change in sharpness, brightness, or gyroscope readings.
Optionally, the mobile terminal calculates a focusing duration from the focal length and plays a focusing animation of the corresponding length.
Optionally, the focusing duration is passed to the application layer, which calculates the scaling factor of the focusing animation from the focusing duration.
Optionally, the focusing animation can be selected by the user, and the animation chosen by the user is played during focusing.
Optionally, the method further comprises monitoring for a focusing completion event, and, after focusing is finished, sending a notification to prompt the user that focusing is complete.
Optionally, the application layer is an OpenGL drawing thread, and the OpenGL drawing thread runs on a graphics processor.
Optionally, the OpenGL drawing thread runs on the Android platform, and the shooting focus function is implemented by calling APIs provided by Android.
The method has the advantage that, by implementing the focusing animation at the application layer and introducing GPU processing, a smoother and more varied focusing process can be achieved, the workload of the HAL layer is relieved to some extent, the response speed of the low-level camera driver is increased, and the user experience is improved.
[ description of the drawings ]
Fig. 1 is a schematic diagram of a hardware structure of a mobile terminal implementing various embodiments of the present invention.
Fig. 2 is a diagram of a wireless communication system of the mobile terminal shown in fig. 1.
Fig. 3 is a flowchart of a mobile terminal focusing method according to a first embodiment of the present invention.
Fig. 4 is a flowchart of a mobile terminal focusing method according to a second embodiment of the present invention.
Fig. 5 is a flowchart of a mobile terminal focusing method according to a third embodiment of the present invention.
Fig. 6 is a block diagram of a mobile terminal focusing device according to a fourth embodiment of the present invention.
[ detailed description of embodiments ]
It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
In the following description, suffixes such as "module", "component", or "unit" are used only to facilitate the description of the present invention and have no specific meaning in themselves. Thus, "module", "component", and "unit" may be used interchangeably.
The terminal may be implemented in various forms. For example, the terminal described in the present invention may include a mobile terminal such as a mobile phone, a tablet computer, a notebook computer, a palmtop computer, a Personal Digital Assistant (PDA), a Portable Media Player (PMP), a navigation device, a wearable device, a smart band, a pedometer, and the like, and a fixed terminal such as a Digital TV, a desktop computer, and the like.
The following description takes a mobile terminal as an example. Those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to fixed terminals.
Referring to fig. 1, which is a schematic diagram of a hardware structure of a mobile terminal for implementing various embodiments of the present invention, the mobile terminal 100 may include: RF (Radio Frequency) unit 101, WiFi module 102, audio output unit 103, a/V (audio/video) input unit 104, sensor 105, display unit 106, user input unit 107, interface unit 108, memory 109, processor 110, and power supply 111. Those skilled in the art will appreciate that the mobile terminal architecture shown in fig. 1 is not intended to be limiting of mobile terminals, which may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components.
The following describes each component of the mobile terminal in detail with reference to fig. 1:
The radio frequency unit 101 may be used to receive and transmit signals during messaging or a call; specifically, it receives downlink information from a base station and passes it to the processor 110 for processing, and it transmits uplink data to the base station. In general, the radio frequency unit 101 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 101 may also communicate with networks and other devices through wireless communication. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA2000 (Code Division Multiple Access 2000), WCDMA (Wideband Code Division Multiple Access), TD-SCDMA (Time Division-Synchronous Code Division Multiple Access), FDD-LTE (Frequency Division Duplexing-Long Term Evolution), TDD-LTE (Time Division Duplexing-Long Term Evolution), and the like.
WiFi belongs to short-distance wireless transmission technology, and the mobile terminal can help a user to receive and send e-mails, browse webpages, access streaming media and the like through the WiFi module 102, and provides wireless broadband internet access for the user. Although fig. 1 shows the WiFi module 102, it is understood that it does not belong to the essential constitution of the mobile terminal, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The audio output unit 103 may convert audio data received by the radio frequency unit 101 or the WiFi module 102 or stored in the memory 109 into an audio signal and output as sound when the mobile terminal 100 is in a call signal reception mode, a call mode, a recording mode, a voice recognition mode, a broadcast reception mode, or the like. Also, the audio output unit 103 may also provide audio output related to a specific function performed by the mobile terminal 100 (e.g., a call signal reception sound, a message reception sound, etc.). The audio output unit 103 may include a speaker, a buzzer, and the like.
The a/V input unit 104 is used to receive audio or video signals. The a/V input Unit 104 may include a Graphics Processing Unit (GPU) 1041 and a microphone 1042, the Graphics processor 1041 Processing image data of still pictures or video obtained by an image capturing device (e.g., a camera) in a video capturing mode or an image capturing mode. The processed image frames may be displayed on the display unit 106. The image frames processed by the graphic processor 1041 may be stored in the memory 109 (or other storage medium) or transmitted via the radio frequency unit 101 or the WiFi module 102. The microphone 1042 may receive sounds (audio data) via the microphone 1042 in a phone call mode, a recording mode, a voice recognition mode, or the like, and may be capable of processing such sounds into audio data. The processed audio (voice) data may be converted into a format output transmittable to a mobile communication base station via the radio frequency unit 101 in case of a phone call mode. The microphone 1042 may implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated in the course of receiving and transmitting audio signals.
The mobile terminal 100 also includes at least one sensor 105, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor includes an ambient light sensor that can adjust the brightness of the display panel 1061 according to the brightness of ambient light, and a proximity sensor that can turn off the display panel 1061 and/or a backlight when the mobile terminal 100 is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The display unit 106 may include a display panel 1061, and the display panel 1061 may be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like.
The user input unit 107 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile terminal. Specifically, the user input unit 107 may include a touch panel 1071 and other input devices 1072. The touch panel 1071, also referred to as a touch screen, may collect a touch operation performed by a user on or near the touch panel 1071 (e.g., an operation performed by the user on or near the touch panel 1071 using a finger, a stylus, or any other suitable object or accessory), and drive a corresponding connection device according to a predetermined program. The touch panel 1071 may include two parts of a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 110, and can receive and execute commands sent by the processor 110. In addition, the touch panel 1071 may be implemented in various types, such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. In addition to the touch panel 1071, the user input unit 107 may include other input devices 1072. In particular, other input devices 1072 may include, but are not limited to, one or more of a physical keyboard, function keys (e.g., volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like, and are not limited to these specific examples.
Further, the touch panel 1071 may cover the display panel 1061, and when the touch panel 1071 detects a touch operation thereon or nearby, the touch panel 1071 transmits the touch operation to the processor 110 to determine the type of the touch event, and then the processor 110 provides a corresponding visual output on the display panel 1061 according to the type of the touch event. Although the touch panel 1071 and the display panel 1061 are shown in fig. 1 as two separate components to implement the input and output functions of the mobile terminal, in some embodiments, the touch panel 1071 and the display panel 1061 may be integrated to implement the input and output functions of the mobile terminal, and is not limited herein.
The interface unit 108 serves as an interface through which at least one external device is connected to the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 108 may be used to receive input (e.g., data information, power, etc.) from external devices and transmit the received input to one or more elements within the mobile terminal 100 or may be used to transmit data between the mobile terminal 100 and external devices.
The memory 109 may be used to store software programs as well as various data. The memory 109 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the cellular phone, and the like. Further, the memory 109 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device.
The processor 110 is a control center of the mobile terminal, connects various parts of the entire mobile terminal using various interfaces and lines, and performs various functions of the mobile terminal and processes data by operating or executing software programs and/or modules stored in the memory 109 and calling data stored in the memory 109, thereby performing overall monitoring of the mobile terminal. Processor 110 may include one or more processing units; preferably, the processor 110 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 110.
The shooting unit 112 is used for shooting a picture or a video, and the shot picture or video is stored in the memory 109. The photographed picture or video may be displayed on the display unit 106.
The mobile terminal 100 may further include a power supply 111 (e.g., a battery) for supplying power to various components, and preferably, the power supply 111 may be logically connected to the processor 110 via a power management system, so as to manage charging, discharging, and power consumption management functions via the power management system.
Although not shown in fig. 1, the mobile terminal 100 may further include a bluetooth module or the like, which is not described in detail herein.
In order to facilitate understanding of the embodiments of the present invention, a communication network system on which the mobile terminal of the present invention is based is described below.
Referring to fig. 2, fig. 2 is an architecture diagram of a communication network system according to an embodiment of the present invention. The communication network system is an LTE system of the universal mobile telecommunications technology, and the LTE system includes a UE (User Equipment) 201, an E-UTRAN (Evolved UMTS Terrestrial Radio Access Network) 202, an EPC (Evolved Packet Core) 203, and an operator's IP services 204, which are communicatively connected in sequence.
Specifically, the UE201 may be the terminal 100 described above, and is not described herein again.
The E-UTRAN202 includes eNodeB2021 and other eNodeBs 2022, among others. Among them, the eNodeB2021 may be connected with other eNodeB2022 through backhaul (e.g., X2 interface), the eNodeB2021 is connected to the EPC203, and the eNodeB2021 may provide the UE201 access to the EPC 203.
The EPC203 may include an MME (Mobility Management Entity) 2031, an HSS (Home Subscriber Server) 2032, other MMEs 2033, an SGW (Serving GateWay) 2034, a PGW (PDN GateWay) 2035, and a PCRF (Policy and charging rules Function) 2036, and the like. The MME2031 is a control node that handles signaling between the UE201 and the EPC203, and provides bearer and connection management. HSS2032 is used to provide registers to manage functions such as home location register (not shown) and holds subscriber specific information about service characteristics, data rates, etc. All user data may be sent through SGW2034, PGW2035 may provide IP address assignment for UE201 and other functions, and PCRF2036 is a policy and charging control policy decision point for traffic data flow and IP bearer resources, which selects and provides available policy and charging control decisions for a policy and charging enforcement function (not shown).
The IP services 204 may include the internet, intranets, IMS (IP Multimedia Subsystem), or other IP services, among others.
Although the LTE system is described above as an example, those skilled in the art should understand that the present invention is not limited to the LTE system but is also applicable to other wireless communication systems, such as GSM, CDMA2000, WCDMA, TD-SCDMA, and future new network systems.
Based on the above mobile terminal hardware structure and communication network system, the present invention provides various embodiments of the method.
Embodiment One
Referring to fig. 3, a method for focusing a mobile terminal includes:
S101: after detecting that the shooting application has started, obtain the shooting preview picture.
The user starts a shooting program on the mobile terminal (for example, opens the camera app on a smartphone); the shooting program calls the camera of the mobile terminal to acquire the picture to be shot, and the picture is then displayed on the mobile terminal's screen.
S102: monitor for a focusing trigger event, and acquire the focus position and focal length.
When the mobile terminal starts the shooting program, it begins monitoring for focusing trigger events. The user selects a focus point during shooting by tapping the preview picture on the screen of the mobile terminal (for example, if the user wants a certain person to be the focus of the shot, they tap that person's position on the screen), and the shooting program detects a focusing trigger event. Likewise, when the user adjusts the position of the mobile terminal or the shooting environment changes (for example, the sun is covered by clouds) and factors such as sharpness, gyroscope readings, or brightness change, the shooting program detects a focusing trigger event. After detecting the focusing trigger event, the shooting program acquires the coordinates of the focus position (i.e. the screen pixel positions mPivotX and mPivotY) and the focal length of the shot.
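As an illustrative sketch only, the tap-to-focus trigger described above could be detected with an ordinary Android touch listener; the class name and the handler onFocusTriggered() are hypothetical, while mPivotX and mPivotY follow the naming used in this description.

```java
import android.view.MotionEvent;
import android.view.View;

public class FocusTouchListener implements View.OnTouchListener {
    @Override
    public boolean onTouch(View previewView, MotionEvent event) {
        if (event.getAction() == MotionEvent.ACTION_DOWN) {
            float mPivotX = event.getX();   // tap position in screen pixels
            float mPivotY = event.getY();
            onFocusTriggered(mPivotX, mPivotY);
        }
        return true;
    }

    private void onFocusTriggered(float pivotX, float pivotY) {
        // Hypothetical: hand the pivot over to the focusing logic (see S103 below).
    }
}
```

It would be attached to the preview view with, for example, previewView.setOnTouchListener(new FocusTouchListener()).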
Focal length, also known as focal distance, is a measure of how strongly an optical system converges or diverges light, and refers to the distance from the center of the lens to the point where the light converges. In a camera, it is the distance from the optical center of the lens to the imaging plane of the film, CCD, or CMOS sensor.
A camera lens is a group of lens elements; when light rays parallel to the main optical axis pass through the lens, they converge to a point called the focal point, and the distance from the focal point to the center of the lens (i.e. the optical center) is the focal length. A lens with a fixed focal length is a prime (fixed-focus) lens; a lens with an adjustable focal length is a zoom lens.
The Camera API of the Android platform provides the ability to obtain the focus point and focal length during shooting; the Camera class is located under the android.hardware package. The shooting program starts a timed task that periodically calls the corresponding Android platform API to acquire the focus point and focal length of the current picture.
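A minimal sketch of the timed query described above, using the legacy android.hardware.Camera API (Camera.Parameters#getFocalLength() returns the focal length in millimetres); the class name, polling interval, and field names are assumptions rather than part of the patent.

```java
import android.hardware.Camera;
import java.util.Timer;
import java.util.TimerTask;

public class FocusInfoPoller {
    private final Camera camera;           // opened elsewhere with Camera.open()
    private final Timer timer = new Timer();
    private volatile float focalLengthMm;  // last focal length read, in millimetres

    public FocusInfoPoller(Camera camera) {
        this.camera = camera;
    }

    public void start() {
        timer.scheduleAtFixedRate(new TimerTask() {
            @Override
            public void run() {
                // Camera.Parameters#getFocalLength() reports the lens focal length in mm.
                focalLengthMm = camera.getParameters().getFocalLength();
            }
        }, 0, 200); // poll every 200 ms (interval is an assumption)
    }

    public void stop() {
        timer.cancel();
    }
}
```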
S103: pass the focus position and focal length to the application layer.
After the shooting program obtains the coordinates of the focus position (i.e. the screen pixel positions mPivotX and mPivotY) and the focal length, it passes them to the OpenGL drawing thread of the application-layer program. The OpenGL drawing thread runs on the graphics processor; it runs on the Android platform, and the shooting focus function is implemented by calling APIs provided by Android.
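The hand-off to the OpenGL drawing thread could look roughly like the sketch below; GLSurfaceView.queueEvent() is real Android API that runs a Runnable on the GL rendering thread, while FocusRenderer and setFocusTarget() are hypothetical names standing in for the application-layer drawing code.

```java
import android.opengl.GLSurfaceView;

public class FocusDispatcher {
    /** Hypothetical interface for the application-layer OpenGL drawing code. */
    public interface FocusRenderer {
        void setFocusTarget(float pivotX, float pivotY, float focalLength);
    }

    private final GLSurfaceView glView;
    private final FocusRenderer renderer;

    public FocusDispatcher(GLSurfaceView glView, FocusRenderer renderer) {
        this.glView = glView;
        this.renderer = renderer;
    }

    public void onFocusTriggered(final float pivotX, final float pivotY, final float focalLength) {
        // queueEvent() runs the Runnable on the GL rendering thread, so the
        // renderer's state is only touched from that thread.
        glView.queueEvent(new Runnable() {
            @Override
            public void run() {
                renderer.setFocusTarget(pivotX, pivotY, focalLength);
            }
        });
        glView.requestRender(); // only needed with RENDERMODE_WHEN_DIRTY
    }
}
```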
OpenGL has seven major groups of functions:
1. Modeling: in addition to basic point, line, and polygon drawing functions, the OpenGL graphics library provides drawing of complex three-dimensional objects (spheres, cones, polyhedra, teapots, etc.) and of complex curves and surfaces.
2. Transformation: OpenGL's transformations help reduce algorithm running time and improve the display speed of three-dimensional graphics.
3. Color mode setting: OpenGL has two color modes, RGBA mode and color index (ColorIndex) mode.
4. Lighting and material setting: OpenGL light includes emitted light, ambient light, diffuse light, and specular light; a material is represented by its light reflectance. The color of an object in a scene that finally reaches the human eye is obtained by multiplying the red, green, and blue components of the light by the material's reflectance for the red, green, and blue components.
5. Texture mapping: object surface details can be rendered very realistically using the OpenGL texture mapping functions.
6. Bitmap display and image enhancement: in addition to basic copying and pixel writing, the image functions provide special effects such as blending, antialiasing, and fog. These three techniques make simulated objects more realistic and enhance the graphics display.
7. Double buffering for animation: in short, the back buffer computes the scene and generates the picture, and the front buffer displays the picture drawn by the back buffer.
S104: receive the vertex coordinates calculated by the application layer from the focus position and focal length.
The camera program on the Android mobile terminal calls APIs provided by the Android platform and uses the OpenGL drawing thread to calculate the vertex coordinates of the preview from the focus position (screen pixel positions mPivotX and mPivotY) and the focal length (mScale), and then draws the image preview to the screen using those vertex coordinates.
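One plausible way to derive the quad vertex coordinates for the focusing animation from the pivot and a scale factor is sketched below; the mapping from screen pixels to normalized device coordinates and the quad size are assumptions, not taken from the patent.

```java
public final class FocusQuad {
    // Returns the 2D vertices of a quad centred on the focus pivot, in OpenGL
    // normalized device coordinates (NDC), scaled by mScale.
    public static float[] focusQuadVertices(float mPivotX, float mPivotY,
                                            float mScale,
                                            int screenW, int screenH) {
        // Convert the pivot from pixel coordinates to NDC (-1..1, y flipped).
        float cx = 2f * mPivotX / screenW - 1f;
        float cy = 1f - 2f * mPivotY / screenH;
        float half = 0.15f * mScale;      // half-size of the focus quad in NDC (assumed)
        return new float[] {
                cx - half, cy - half,     // bottom-left
                cx + half, cy - half,     // bottom-right
                cx + half, cy + half,     // top-right
                cx - half, cy + half,     // top-left
        };
    }
}
```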
S105: draw the focusing animation in the on-screen preview picture according to the vertex coordinates.
The OpenGL rendering thread renders the image preview to the screen as follows:
OpenGL adopts a client/server (C/S) model in which the client is the CPU and the server is the GPU. The inputs from client to server are vertex information and texture information, and the output from the server is the image shown on the display.
VBO/VAO (vertex buffer object / vertex array object):
VBO/VAO hold the vertex information the CPU provides to the GPU, including the vertex position, color (the vertex's own color only, unrelated to the texture color), texture coordinates (used for texture mapping), and so on.
Vertex shader:
The vertex shader is a program that processes the vertex information provided by the VBO/VAO. The vertex shader is executed once for each vertex provided by the VBO/VAO. A uniform (a variable type) keeps the same value for every vertex, while an attribute differs per vertex. Each execution of the vertex shader outputs varyings and a gl_Position.
Primitive assembly:
The stage after the vertex shader is primitive assembly; a primitive is a geometric object such as a triangle, a line, or a point sprite. At this stage, the vertices output by the vertex shader are grouped into primitives.
Rasterization:
Rasterization is the process of converting a primitive into a set of two-dimensional fragments, which are then processed by the fragment shader (they form the fragment shader's input). These two-dimensional fragments represent pixels that can be drawn on the screen. The mechanism that generates each fragment's values from the vertex shader outputs assigned to each primitive vertex is called interpolation.
Fragment shader:
The fragment shader provides a general programmable way to operate on fragments (pixels). It is executed once for each fragment produced by rasterization and generates one or more color values (for multiple render targets) as output (an illustrative shader pair is sketched after this pipeline walkthrough).
Per-fragment operations:
(1) Pixel ownership test:
Determines whether the pixel at position (x, y) in the framebuffer is owned by the current context. For example, if a window displaying the framebuffer is occluded by another window, the window system may decide that the occluded pixels do not belong to the OpenGL context, and those pixels are therefore not displayed.
(2) Scissor test:
If the fragment lies outside the scissor region, it is discarded.
(3) Stencil test and depth test:
The depth test is easier to understand: the fragment's depth is compared against the depth stored in the buffer, and the fragment is discarded if it fails the comparison.
(4) Blending:
The newly generated fragment color is combined with the color value stored in the framebuffer to produce a new RGBA value.
(5) Dithering:
Finally, the generated fragments are written into a framebuffer (the front buffer, the back buffer, or an FBO). If the target is not an FBO, the screen draws the fragments in the buffer, producing the pixels shown on screen.
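To tie the pipeline stages above back to the focusing animation, a minimal vertex/fragment shader pair for drawing a focus quad scaled about the pivot might look like the following OpenGL ES 2.0 sources embedded as Java strings; the uniform names uPivot, uScale, and uColor are assumptions, not taken from the patent.

```java
public final class FocusShaders {
    public static final String VERTEX =
            "attribute vec2 aPosition;\n" +
            "uniform vec2 uPivot;\n" +          // focus centre in NDC
            "uniform float uScale;\n" +         // animation scale factor
            "void main() {\n" +
            // scale the quad about the focus pivot, then emit gl_Position
            "  vec2 p = uPivot + (aPosition - uPivot) * uScale;\n" +
            "  gl_Position = vec4(p, 0.0, 1.0);\n" +
            "}\n";

    public static final String FRAGMENT =
            "precision mediump float;\n" +
            "uniform vec4 uColor;\n" +
            "void main() {\n" +
            "  gl_FragColor = uColor;\n" +      // one colour value per fragment
            "}\n";
}
```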
In this embodiment, the focusing animation is implemented at the application layer and GPU processing is introduced, so a smoother and more varied focusing process can be achieved, the workload of the HAL layer is relieved, the response speed of the low-level camera driver is increased, and the user experience is improved.
Embodiment Two
Referring to fig. 4, the present embodiment adds the following steps on the basis of the first embodiment:
S106: the mobile terminal calculates the focusing duration from the focal length, and plays a focusing animation of the corresponding length.
The shooting program of the Android mobile terminal calculates how long the focusing will take based on the current focal length and the processing capacity of the OpenGL drawing thread.
The camera program calculates the focusing duration from the focal length and the GPU processing capacity (for example, with a focal length of 70 mm and a 4-core 1.0 GHz GPU, about 2 seconds are needed to complete focusing). The shooting program then plays an animation of the corresponding length on the shooting preview screen. Here the animation means that the changing picture produced while the view is being zoomed during focusing is decomposed into a sequence of instantaneous frames, which are then played back in succession so that the eye perceives a continuously changing picture. For example, if the picture is captured from far away, playing the animation produces the effect of the distant picture gradually enlarging.
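The duration estimate could be sketched as below; the patent gives only the single data point above (70 mm focal length, 4-core 1.0 GHz GPU, about 2 seconds), so the linear scaling and baseline constants here are assumptions.

```java
public final class FocusTiming {
    // Rough estimate of the focusing duration in milliseconds (assumed model):
    // scale the 2 s reference point proportionally to focal length and
    // inversely to available GPU throughput.
    public static long estimateFocusDurationMs(float focalLengthMm, float gpuGhz, int gpuCores) {
        float reference = 2000f * (focalLengthMm / 70f);
        float throughput = (gpuGhz * gpuCores) / 4.0f; // 4 cores at 1.0 GHz as baseline
        return (long) (reference / Math.max(throughput, 0.1f));
    }
}
```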
S107: pass the focusing duration to the application-layer program, which calculates the scaling coefficient of the focusing animation from the focusing duration.
The shooting program determines how long to play the animation from the focusing duration, and at the same time calculates the scaling of the zoom animation from the focusing duration. If the animation zooms the picture, its scaling ratio is calculated from the focusing duration: the longer the focusing duration, the larger the scaling of the animation.
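A sketch of the rule that a longer focusing duration yields a larger scaling; the linear form and the constants are assumptions.

```java
public final class FocusScaling {
    // Map the focusing duration to an animation scaling coefficient (assumed rule).
    public static float animationScaleFor(long focusDurationMs) {
        float scale = 1.0f + focusDurationMs / 1000f * 0.25f; // +0.25 per second of focusing
        return Math.min(scale, 2.0f);                          // clamp to a sane maximum
    }
}
```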
The user can choose, within the camera program, which animation is played during shooting; the camera program offers several animations to choose from. After the user makes a selection, the animation's parameters (such as its identifier) are stored in the shooting program, and the chosen animation is played during shooting.
In this embodiment, playing an animation during focusing improves the user experience and enhances the shooting effect.
Embodiment Three
Referring to fig. 5, the present embodiment adds the following steps on the basis of the first embodiment:
S108: monitor for the focusing completion event, and after focusing is finished, send a notification to prompt the user that focusing is complete.
The camera program starts monitoring for the focusing completion event. After OpenGL finishes the focusing animation, it sends a message informing the camera program that focusing is complete; on receiving this message, the camera program stops playing the animation and then prompts the user that focusing is finished, for example by playing a sound (such as two short beeps) or showing a green indicator on the picture, so the user knows shooting can proceed.
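A hedged sketch of the completion prompt: Camera.autoFocus() and its AutoFocusCallback are real legacy Android API, while the three helper methods are hypothetical stand-ins for stopping the animation, playing the tone, and showing the green indicator described above.

```java
import android.hardware.Camera;

public class FocusCompletionHandler {
    public void focusAndNotify(Camera camera) {
        camera.autoFocus(new Camera.AutoFocusCallback() {
            @Override
            public void onAutoFocus(boolean success, Camera cam) {
                stopFocusAnimation();            // stop the OpenGL focusing animation
                if (success) {
                    playFocusCompleteTone();     // e.g. two short beeps
                    showGreenFocusIndicator();   // green indicator on the preview
                }
            }
        });
    }

    private void stopFocusAnimation() { /* hypothetical */ }
    private void playFocusCompleteTone() { /* hypothetical */ }
    private void showGreenFocusIndicator() { /* hypothetical */ }
}
```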
In this embodiment, monitoring the focusing completion event lets the terminal prompt the user when focusing is done, making shooting easier and improving the shooting experience.
Embodiment Four
Referring to fig. 6, a device for focusing a mobile terminal (the device is itself a mobile terminal) includes: a display unit P106, a user input unit P107, a shooting unit P112, a processor P110, a graphics processor P1041, a memory P109, and a communication bus P108.
1) The display unit P106 is used for displaying information input by the user or information provided to the user.
2) The user input unit P107 is used for receiving input numeric or character information and generating key signal inputs related to user settings and function control of the mobile terminal.
3) The shooting unit P112 is used for shooting pictures or video; the captured pictures or video are stored in the memory P109.
4) The graphics processor P1041 processes image data of still pictures or video obtained by an image capture device (e.g. a camera) in video capture mode or image capture mode.
5) The communication bus P108 is used for implementing connection and communication between the processor and the memory.
6) The memory P109 is used for storing program data.
7) The processor P110 is used for executing the mobile terminal focusing program stored in the memory to implement the following steps:
S101: after detecting that the shooting application has started, obtain the shooting preview picture.
The user starts a shooting program on the mobile terminal (for example, opens the camera app on a smartphone); the shooting program calls the camera of the mobile terminal to acquire the picture to be shot, and the picture is then displayed on the mobile terminal's screen.
S102: monitor for a focusing trigger event, and acquire the focus position and focal length.
When the mobile terminal starts the shooting program, it begins monitoring for focusing trigger events. The user selects a focus point during shooting by tapping the preview picture on the screen of the mobile terminal (for example, if the user wants a certain person to be the focus of the shot, they tap that person's position on the screen), and the shooting program detects a focusing trigger event. Likewise, when the user adjusts the position of the mobile terminal or the shooting environment changes (for example, the sun is covered by clouds) and factors such as sharpness, gyroscope readings, or brightness change, the shooting program detects a focusing trigger event. After detecting the focusing trigger event, the shooting program acquires the coordinates of the focus position (i.e. the screen pixel positions mPivotX and mPivotY) and the focal length of the shot.
Focal length, also known as focal distance, is a measure of how strongly an optical system converges or diverges light, and refers to the distance from the center of the lens to the point where the light converges. In a camera, it is the distance from the optical center of the lens to the imaging plane of the film, CCD, or CMOS sensor.
A camera lens is a group of lens elements; when light rays parallel to the main optical axis pass through the lens, they converge to a point called the focal point, and the distance from the focal point to the center of the lens (i.e. the optical center) is the focal length. A lens with a fixed focal length is a prime (fixed-focus) lens; a lens with an adjustable focal length is a zoom lens.
The Camera API of the Android platform provides the ability to obtain the focus point and focal length during shooting; the Camera class is located under the android.hardware package. The shooting program starts a timed task that periodically calls the corresponding Android platform API to acquire the focus point and focal length of the current picture.
S103: pass the focus position and focal length to the application layer.
After the shooting program obtains the coordinates of the focus position (i.e. the screen pixel positions mPivotX and mPivotY) and the focal length, it passes them to the OpenGL drawing thread of the application-layer program. The OpenGL drawing thread runs on the graphics processor; it runs on the Android platform, and the shooting focus function is implemented by calling APIs provided by Android.
OpenGL has seven major groups of functions:
1. Modeling: in addition to basic point, line, and polygon drawing functions, the OpenGL graphics library provides drawing of complex three-dimensional objects (spheres, cones, polyhedra, teapots, etc.) and of complex curves and surfaces.
2. Transformation: OpenGL's transformations help reduce algorithm running time and improve the display speed of three-dimensional graphics.
3. Color mode setting: OpenGL has two color modes, RGBA mode and color index (ColorIndex) mode.
4. Lighting and material setting: OpenGL light includes emitted light, ambient light, diffuse light, and specular light; a material is represented by its light reflectance. The color of an object in a scene that finally reaches the human eye is obtained by multiplying the red, green, and blue components of the light by the material's reflectance for the red, green, and blue components.
5. Texture mapping: object surface details can be rendered very realistically using the OpenGL texture mapping functions.
6. Bitmap display and image enhancement: in addition to basic copying and pixel writing, the image functions provide special effects such as blending, antialiasing, and fog. These three techniques make simulated objects more realistic and enhance the graphics display.
7. Double buffering for animation: in short, the back buffer computes the scene and generates the picture, and the front buffer displays the picture drawn by the back buffer.
S104: receive the vertex coordinates calculated by the application layer from the focus position and focal length.
The camera program on the Android mobile terminal calls APIs provided by the Android platform and uses the OpenGL drawing thread to calculate the vertex coordinates of the preview from the focus position (screen pixel positions mPivotX and mPivotY) and the focal length (mScale), and then draws the image preview to the screen using those vertex coordinates.
S105: draw the focusing animation in the on-screen preview picture according to the vertex coordinates.
The OpenGL rendering thread renders the image preview to the screen as follows:
OpenGL adopts a client/server (C/S) model in which the client is the CPU and the server is the GPU. The inputs from client to server are vertex information and texture information, and the output from the server is the image shown on the display.
VBO/VAO (vertex buffer object / vertex array object):
VBO/VAO hold the vertex information the CPU provides to the GPU, including the vertex position, color (the vertex's own color only, unrelated to the texture color), texture coordinates (used for texture mapping), and so on.
Vertex shader:
The vertex shader is a program that processes the vertex information provided by the VBO/VAO. The vertex shader is executed once for each vertex provided by the VBO/VAO. A uniform (a variable type) keeps the same value for every vertex, while an attribute differs per vertex. Each execution of the vertex shader outputs varyings and a gl_Position.
Primitive assembly:
The stage after the vertex shader is primitive assembly; a primitive is a geometric object such as a triangle, a line, or a point sprite. At this stage, the vertices output by the vertex shader are grouped into primitives.
Rasterization:
Rasterization is the process of converting a primitive into a set of two-dimensional fragments, which are then processed by the fragment shader (they form the fragment shader's input). These two-dimensional fragments represent pixels that can be drawn on the screen. The mechanism that generates each fragment's values from the vertex shader outputs assigned to each primitive vertex is called interpolation.
Fragment shader:
The fragment shader provides a general programmable way to operate on fragments (pixels). It is executed once for each fragment produced by rasterization and generates one or more color values (for multiple render targets) as output.
Per-fragment operations:
(1) Pixel ownership test:
Determines whether the pixel at position (x, y) in the framebuffer is owned by the current context. For example, if a window displaying the framebuffer is occluded by another window, the window system may decide that the occluded pixels do not belong to the OpenGL context, and those pixels are therefore not displayed.
(2) Scissor test:
If the fragment lies outside the scissor region, it is discarded.
(3) Stencil test and depth test:
The depth test is easier to understand: the fragment's depth is compared against the depth stored in the buffer, and the fragment is discarded if it fails the comparison.
(4) Blending:
The newly generated fragment color is combined with the color value stored in the framebuffer to produce a new RGBA value.
(5) Dithering:
Finally, the generated fragments are written into a framebuffer (the front buffer, the back buffer, or an FBO). If the target is not an FBO, the screen draws the fragments in the buffer, producing the pixels shown on screen.
In this embodiment, the focusing animation is implemented at the application layer and GPU processing is introduced, so a smoother and more varied focusing process can be achieved, the workload of the HAL layer is relieved, the response speed of the low-level camera driver is increased, and the user experience is improved.
Embodiment Five
In this embodiment, on the basis of the fourth embodiment, the P110 processor is further configured to execute a mobile terminal focusing program to implement the following steps:
S106: the mobile terminal calculates the focusing duration from the focal length, and plays an animation of the corresponding length.
The shooting program of the Android mobile terminal calculates how long the focusing will take based on the current focal length and the processing capacity of the OpenGL drawing thread.
The camera program calculates the focusing duration from the focal length and the GPU processing capacity (for example, with a focal length of 70 mm and a 4-core 1.0 GHz GPU, about 2 seconds are needed to complete focusing). The shooting program then plays an animation of the corresponding length on the shooting preview screen. Here the animation means that the changing picture produced while the view is being zoomed during focusing is decomposed into a sequence of instantaneous frames, which are then played back in succession so that the eye perceives a continuously changing picture. For example, if the picture is captured from far away, playing the animation produces the effect of the distant picture gradually enlarging.
S107: pass the focusing duration to the application-layer program, which calculates the scaling coefficient of the focusing animation from the focusing duration.
The shooting program determines how long to play the animation from the focusing duration, and at the same time calculates the scaling of the zoom animation from the focusing duration. If the animation zooms the picture, its scaling ratio is calculated from the focusing duration: the longer the focusing duration, the larger the scaling of the animation.
The user can choose, within the camera program, which animation is played during shooting; the camera program offers several animations to choose from. After the user makes a selection, the animation's parameters (such as its identifier) are stored in the shooting program, and the chosen animation is played during shooting.
In this embodiment, playing an animation during focusing improves the user experience and enhances the shooting effect.
Embodiment Six
In this embodiment, on the basis of the fourth embodiment, the P110 processor is further configured to execute a mobile terminal focusing program to implement the following steps:
S108: monitor for the focusing completion event, and after focusing is finished, send a notification to prompt the user that focusing is complete.
The camera program starts monitoring for the focusing completion event. After OpenGL finishes the focusing animation, it sends a message informing the camera program that focusing is complete; on receiving this message, the camera program stops playing the animation and then prompts the user that focusing is finished, for example by playing a sound (such as two short beeps) or showing a green indicator on the picture, so the user knows shooting can proceed.
In this embodiment, monitoring the focusing completion event lets the terminal prompt the user when focusing is done, making shooting easier and improving the shooting experience.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising an … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The above-mentioned serial numbers of the embodiments of the present invention are merely for description and do not represent the merits of the embodiments.
Through the above description of the embodiments, those skilled in the art will clearly understand that the method of the above embodiments can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware, but in many cases, the former is a better implementation manner. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which is stored in a storage medium (such as ROM/RAM, magnetic disk, optical disk) and includes instructions for enabling a terminal (such as a mobile phone, a computer, a server, an air conditioner, or a network device) to execute the method according to the embodiments of the present invention.
While the present invention has been described with reference to the embodiments shown in the drawings, the present invention is not limited to the embodiments, which are illustrative and not restrictive, and it will be apparent to those skilled in the art that various changes and modifications can be made therein without departing from the spirit and scope of the invention as defined in the appended claims.
Claims (10)
1. A method for focusing a mobile terminal is characterized by comprising the following steps:
acquiring a shooting preview picture after monitoring that a shooting application program is started;
monitoring a focusing trigger event to obtain a focus position and a focal length;
passing the focus position and the focal length to an OpenGL drawing thread of an application layer;
receiving vertex coordinates calculated by the OpenGL drawing thread of the application layer according to the focus position and the focal length;
and drawing a focusing animation in a preview picture in the screen according to the vertex coordinates.
2. The method for focusing the mobile terminal according to claim 1, wherein a user click operation on the preview screen is received to trigger a focusing event;
and after the mobile terminal shakes or the shooting site environment changes to cause the change of definition, brightness or a gyroscope, triggering a focusing event.
3. The method according to claim 1, wherein the mobile terminal calculates a focusing duration according to a focal length; and playing the focusing animation with the corresponding time length according to the focusing time length.
4. The method for focusing the mobile terminal according to claim 3, wherein the focusing duration is transmitted to an application layer, and the application layer calculates a scaling factor of the focusing animation according to the focusing duration.
5. The method of claim 3, wherein the focusing animation is selectable by a user, and the focusing animation is played during focusing according to the user's selection of the focusing animation.
6. The method for focusing a mobile terminal according to claim 1, further comprising monitoring a focusing completion event;
and after focusing is finished, sending information to prompt the user that focusing is finished.
7. The method for focusing a mobile terminal as claimed in claim 1, wherein the application layer is an OpenGL drawing thread, and the OpenGL drawing thread runs on a graphics processor.
8. The method for focusing the mobile terminal according to claim 7, wherein the OpenGL drawing thread runs on an Android platform, and the shooting focusing function is realized by calling an API provided by Android.
9. A device for realizing focusing of a mobile terminal is characterized by comprising a shooting unit, a display unit, a user input unit, a processor, a graphic processor, a memory and a communication bus;
the shooting unit is used for obtaining image data of a static picture or a video;
the display unit is used for displaying information input by a user or information provided for the user;
the user input unit is used for receiving input numeric or character information and generating key signal input related to user setting and function control of the mobile terminal;
the image processor is used for processing the image data of the static picture or the video obtained by the shooting unit;
the communication bus is used for realizing connection communication between the processor and the memory;
the memory is used for storing the data of the customized application program;
the processor is configured to execute a mobile terminal focusing program stored in the memory, and is configured to implement the method for focusing a mobile terminal according to one of claims 1 to 8.
10. A computer-readable storage medium, characterized in that the computer-readable storage medium stores one or more programs, which are executable by one or more processors, for implementing the method for focusing of a mobile terminal of one of claims 1 to 8.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810100689.3A CN108270971B (en) | 2018-01-31 | 2018-01-31 | Mobile terminal focusing method and device and computer readable storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810100689.3A CN108270971B (en) | 2018-01-31 | 2018-01-31 | Mobile terminal focusing method and device and computer readable storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108270971A CN108270971A (en) | 2018-07-10 |
CN108270971B true CN108270971B (en) | 2020-07-24 |
Family
ID=62777200
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810100689.3A (CN108270971B, Active) | Mobile terminal focusing method and device and computer readable storage medium | 2018-01-31 | 2018-01-31
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108270971B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111583365B (en) * | 2020-04-24 | 2023-09-19 | 完美世界(北京)软件科技发展有限公司 | Processing method and device for animation element display, storage medium and terminal |
CN111654637B (en) * | 2020-07-14 | 2021-10-22 | RealMe重庆移动通信有限公司 | Focusing method, focusing device and terminal equipment |
CN112333387A (en) * | 2020-10-30 | 2021-02-05 | 维沃移动通信有限公司 | Shooting method, shooting device, electronic equipment and storage medium |
CN115623320B (en) * | 2022-09-26 | 2024-10-22 | 中国人民财产保险股份有限公司 | Method, system, device and medium for controlling claim settlement camera for mobile claim settlement |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2017070884A1 (en) * | 2015-10-29 | 2017-05-04 | 深圳市莫孚康技术有限公司 | Image focusing system and method based on wireless distance measurement, and photographing system |
CN106775902A (en) * | 2017-01-25 | 2017-05-31 | 北京奇虎科技有限公司 | A kind of method and apparatus of image procossing, mobile terminal |
CN107329649A (en) * | 2017-06-14 | 2017-11-07 | 努比亚技术有限公司 | Cartoon display method, terminal and computer-readable recording medium |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
TWI518437B (en) * | 2014-05-12 | 2016-01-21 | 晶睿通訊股份有限公司 | Dynamical focus adjustment system and related method of dynamical focus adjustment |
Also Published As
Publication number | Publication date |
---|---|
CN108270971A (en) | 2018-07-10 |
Legal Events

Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| GR01 | Patent grant | |