Detailed Description
The technical solutions in the embodiments of the present application will be described clearly below with reference to the accompanying drawings. It is obvious that the described embodiments are only some, and not all, of the embodiments of the present application. All other embodiments that a person skilled in the art can derive from the embodiments given herein without creative effort shall fall within the protection scope of the present application.
As shown in fig. 3, an embodiment of the present application relates to a mobile device, which may specifically include:
an input unit 301, a display unit 302, a processor 303, a memory 304, a sensor 305, an audio circuit 306, a Wireless Fidelity (WiFi) module 307, a Radio Frequency (RF) circuit 308, a power supply 309, and a bus 310; the components communicate with each other through the bus 310. Those skilled in the art will appreciate that the device structure shown in fig. 3 does not constitute a limitation on mobile devices, which may include more or fewer components than shown, combine some components, or arrange the components differently.
Taking the mobile device 300 as an example, the following describes each component of the mobile device in detail with reference to fig. 3:
the input unit 301 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 301 may include a touch panel 3011 and other input devices 3012. The touch panel 3011, also called a touch screen, can collect touch operations performed by the user on or near it (e.g., operations performed with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection devices according to a preset program. Optionally, the touch panel 3011 may include two parts: a touch detection device and a touch controller. The touch detection device detects the position touched by the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into touch point coordinates, sends the coordinates to the processor 303, and can receive and execute commands sent by the processor 303. The touch panel 3011 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave types. Besides the touch panel 3011, the input unit 301 may include other input devices 3012, which may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and switch keys), a trackball, a mouse, and a joystick.
The display unit 302 may be used to display information input by the user or information provided to the user, and the various menus of the mobile phone 300. The display unit 302 may include a display panel 3021, which may optionally be configured in the form of a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. Further, the touch panel 3011 may cover the display panel 3021; when the touch panel 3011 detects a touch operation on or near it, the operation is transmitted to the processor 303 to determine the type of the touch event, and the processor 303 then provides a corresponding visual output on the display panel 3021 according to that type. Although in fig. 3 the touch panel 3011 and the display panel 3021 are shown as two separate components implementing the input and output functions of the mobile phone, in some embodiments they may be integrated to implement both functions.
The handset 300 may also include at least one sensor 305, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 3021 according to the brightness of ambient light, and a proximity sensor that turns off the display panel 3021 and/or a backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 306, speaker 311, and microphone 312 may provide an audio interface between the user and the handset 300. On one hand, the audio circuit 306 may transmit the electrical signal converted from received audio data to the speaker 311, which converts it into a sound signal for output; on the other hand, the microphone 312 converts a collected sound signal into an electrical signal, which the audio circuit 306 receives and converts into audio data. The audio data is then output to the processor 303 for processing, after which it may be transmitted to, for example, another mobile phone via the RF circuit 308, or output to the memory 304 for further processing.
WiFi is a short-range wireless transmission technology. Through the WiFi module 307, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing wireless broadband Internet access. Although fig. 3 shows the WiFi module 307, it is not an essential component of the handset and may be omitted as needed without changing the essence of the invention.
The RF circuit 308 may be used for receiving and transmitting signals during information transmission and reception or during a call. In particular, it receives downlink information from a base station and passes it to the processor 303 for processing, and transmits uplink data to the base station. In general, the RF circuit 308 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. The RF circuit 308 may also communicate with networks and other devices via wireless communication, which may use any communication standard or protocol, including but not limited to Global System for Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Message Service (SMS), and the like.
The memory 304 may be used to store software programs and modules, and the processor 303 executes the various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 304. The memory 304 may mainly include a program storage area and a data storage area: the program storage area may store an operating system, application programs required by at least one function (such as a sound playing function or an image playing function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data and a phonebook). Further, the memory 304 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device.
The processor 303 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 304 and calling data stored in the memory 304, thereby performing overall monitoring of the mobile phone. Alternatively, processor 303 may include one or more processing units; preferably, the processor 303 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, etc., and a modem processor, which mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 303.
The handset also includes a power supply 309 (e.g., a battery) for powering the various components, which may preferably be logically connected to the processor 303 via a power management system, so that the power management system may be used to manage charging, discharging, and power consumption. Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In an embodiment of the present application, the processor 303 may call the operation instructions stored in the memory 304 to execute the following method: acquiring an image to be processed; establishing a three-dimensional coordinate system consisting of an X axis, a Y axis and a Z axis, wherein the XY plane formed by the X axis and the Y axis is parallel to the two-dimensional plane where the display screen is located; calling gluLookAt(Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) to perform view transformation on the image to be processed to obtain an image to be displayed, wherein Ax, Ay, Az are respectively the x-axis, y-axis and z-axis coordinates of the observation point, Bx, By, Bz are respectively the x-axis, y-axis and z-axis coordinates of the observed object, and Cx, Cy, Cz are respectively the x-axis, y-axis and z-axis coordinates of the end point of a target vector indicating the direction directly above the observation point, with Ay > 0, Az > 0, Cx = 0, Cy > 0, Cz < 0; and displaying the image to be displayed on the display screen.
Optionally, in another embodiment of the present application, the processor 303 may further perform the following method: shooting a target image and extracting image features from the target image, wherein the image features are head features; if the similarity between the image features and preset head features is greater than a preset threshold, determining (Cx, Cy, Cz) according to the angle between the two-dimensional plane where the camera lens of the mobile device is located and the two-dimensional plane where the display screen is located; measuring the distance from the user to the mobile device and determining (Ax, Ay, Az) from the distance and the angle; and determining (Bx, By, Bz).
In practical applications, the displayed image is not necessarily directly facing the user's line of sight. To further adjust the angle between the displayed image and the display screen, optionally, in another embodiment of the present application, the processor 303 may further perform the following method: receiving key information, and modifying (Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) into (Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) according to the key information; calling gluLookAt(Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) to perform view transformation on the image to be displayed to obtain a target display image; and displaying the target display image on the display screen.
Optionally, in another embodiment of the present application, the processor 303 may further perform the following method: detecting a touch operation applied to the display screen, and modifying (Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) into (Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) according to the touch operation; calling gluLookAt(Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) to perform view transformation on the image to be displayed to obtain a target display image; and displaying the target display image on the display screen.
In another embodiment of the present application, the processor 303 may further perform the following method:
acquiring an image to be processed; establishing a three-dimensional coordinate system consisting of an X axis, a Y axis and a Z axis, wherein the XY plane formed by the X axis and the Y axis is parallel to the two-dimensional plane where the display screen is located; calling glRotate(α, Px, Py, Pz) to perform model transformation on the image to be processed to obtain an image to be displayed, wherein 0° < α < 90°, Px > 0, Py = 0, Pz = 0; and displaying the image to be displayed on the display screen.
Optionally, in another embodiment of the present application, before calling glRotate(α, Px, Py, Pz) to perform model transformation on the image to be processed to obtain the image to be displayed, the processor 303 may further perform the following method: shooting a target image and extracting image features from the target image, wherein the image features are head features; if the similarity between the image features and preset head features is greater than a preset threshold, taking the angle between the two-dimensional plane where the camera lens of the mobile device is located and the two-dimensional plane where the display screen is located as α; and determining (Px, Py, Pz).
Optionally, in another embodiment of the present application, after displaying the image to be displayed on the display screen, the processor 303 may further perform the following method:
receiving key information, and modifying α into α′ according to the key information; calling glRotate(α′, Px, Py, Pz) to perform model transformation on the image to be displayed to obtain a target display image; and displaying the target display image on the display screen.
Optionally, in another embodiment of the present application, after displaying the image to be displayed on the display screen, the processor 303 may further perform the following method:
detecting a touch operation applied to the display screen, and modifying α into α′ according to the touch operation; calling glRotate(α′, Px, Py, Pz) to perform model transformation on the image to be displayed to obtain a target display image; and displaying the target display image on the display screen.
Based on the mobile device in the embodiment or the alternative embodiments shown in fig. 3, the present application provides several image processing methods that can improve the angle between the image displayed on the screen and the user's line of sight; see the following embodiments for details:
referring to fig. 4, an embodiment of an image processing method in the present application based on a mobile device (e.g., a mobile phone 300) with the above-mentioned hardware structure includes:
401. The mobile device acquires an image to be processed;
In this embodiment, when the user chooses to view a picture, the mobile device may acquire the picture and use it as the image to be processed; when the user watches a video, the mobile device may decode the video file to obtain a plurality of video frames, each of which is used as an image to be processed. The picture format may be BMP, JPG, JPEG, PNG, GIF, PSD, FPX, EXIF, DXF, EPS, TGA, or the like.
402. The mobile device establishes a three-dimensional coordinate system consisting of an X axis, a Y axis and a Z axis, wherein the XY plane formed by the X axis and the Y axis is parallel to the two-dimensional plane where the display screen is located;
The mobile device may establish the three-dimensional coordinate system using the Open Graphics Library (OpenGL). OpenGL is a cross-language, cross-platform programming interface specification for graphics. OpenGL for Embedded Systems (OpenGL ES) is a subset of OpenGL aimed mainly at mobile devices, and can achieve very rich animation effects.
403. The mobile device calls gluLookAt(Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) to perform view transformation on the image to be processed to obtain an image to be displayed;
After the mobile device acquires the image to be processed, it may store the image in the display buffer and then call gluLookAt(Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) to perform view transformation on the image to be processed.
Here, gluLookAt(Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) is a view transformation function provided by the OpenGL Utility Library (GLU). Ax, Ay, Az are respectively the x-axis, y-axis and z-axis coordinates of the observation point in the three-dimensional coordinate system, and Bx, By, Bz are respectively the x-axis, y-axis and z-axis coordinates of the observed object. The vector from the origin (0, 0, 0) to the point C (Cx, Cy, Cz) is the target vector, which indicates the direction directly above the observation point. The observation point may also be referred to as the viewpoint, and the observed object may also be referred to as the reference point. The origin is the starting point of the target vector, and point C is its end point.
In the established three-dimensional coordinate system, the observation point can be regarded as a camera, the observed object as the object the camera lens is aimed at, and the target vector as the direction directly above the camera, as shown in fig. 5. The parameters of the gluLookAt function are also commonly written as gluLookAt(eyex, eyey, eyez, centerx, centery, centerz, upx, upy, upz).
When Ay > 0 and Az > 0, the observation point A is located in the space defined by the XY plane and the positive direction of the z axis. When Cx = 0, Cy > 0 and Cz < 0, the direction directly above the viewpoint is inclined relative to the XY plane.
For example, when the view transformation parameters are (0, 5, 5, 0, 0, 0, 0, 1, -1), performing the view transformation on the image to be processed according to these parameters corresponds to placing the observer at the observation point (0, 5, 5) of the three-dimensional coordinate system, with the vector direction from (0, 0, 0) to (0, 1, -1) as the direction directly above the observation point; that is, the user sees the image to be processed at (0, 0, 0) along a line of sight inclined downward by 45 degrees. For ease of observation, only a portion of the image to be displayed is shown as an example in fig. 6.
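For illustration only, a minimal sketch of issuing this view transformation through the fixed-function pipeline is given below. It assumes a desktop-style GLU header and an already-created GL context, and is not the claimed implementation itself:

```c
#include <GL/glu.h>   /* gluLookAt comes from the OpenGL Utility Library */

/* Sketch: apply the example view transformation parameters
 * (0, 5, 5, 0, 0, 0, 0, 1, -1). The observer sits at (0, 5, 5), looks at
 * the origin, and the vector from (0, 0, 0) to (0, 1, -1) is treated as
 * "directly above", which tilts the line of sight 45 degrees downward
 * onto the image placed on the XY plane. */
static void apply_view_transform(void)
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(0.0, 5.0, 5.0,    /* observation point:       Ax, Ay, Az */
              0.0, 0.0, 0.0,    /* observed object:         Bx, By, Bz */
              0.0, 1.0, -1.0);  /* target vector end point: Cx, Cy, Cz */
}
```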
404. The mobile device displays the image to be displayed on the display screen.
After the mobile device obtains the image to be displayed, it can display the image on the display screen, with the displayed image inclined to the plane of the display screen. When the mobile device is placed horizontally, the displayed image faces obliquely upward. As shown in fig. 7, the displayed image forms an included angle α with the plane of the display screen, and the display screen forms an included angle β with the user's line of sight, so the angle between the displayed image and the user's line of sight is α + β; when α + β = 90 degrees, the displayed image directly faces the user's line of sight, which improves the user's viewing experience.
The specific process by which the mobile device displays the image on the display screen may include: display image → projection transformation → viewport transformation. That is, the display image is projected from the three-dimensional coordinate system onto a two-dimensional graphic in the plane of the display screen, a viewport (display window) is then selected on that plane, and the display image is shown in the viewport.
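A minimal sketch of this projection-plus-viewport stage follows; the frustum bounds and window size are assumed example values, not taken from the embodiment:

```c
#include <GL/gl.h>

/* Sketch: after the view transformation, project the 3D scene and map it
 * to a viewport (display window) on the plane of the display screen. */
static void project_and_select_viewport(int window_w, int window_h)
{
    /* Projection transformation: map the scene into clip space. */
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustum(-1.0, 1.0,    /* left, right  (assumed bounds) */
              -1.0, 1.0,    /* bottom, top                   */
               1.0, 100.0); /* near, far                     */

    /* Viewport transformation: choose the on-screen window that
     * receives the two-dimensional result. */
    glViewport(0, 0, window_w, window_h);
}
```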
In this embodiment, the mobile device can rotate the displayed image relative to the screen so that it faces the user's line of sight, allowing the user to see the whole image clearly and improving the viewing experience. Compared with the prior art, the technique of the present application improves human-computer interaction efficiency and saves the expense of a support stand.
In some other embodiments of the present application, the mobile device may call the gluLookAt function according to the position of the observer relative to the mobile device, so as to dynamically adjust the image displayed on the display screen in real time.
Specifically, when the mobile terminal is placed horizontally, it can detect the distance from the human body to itself through a distance sensor, for example 40 cm; the mobile terminal can also obtain a preset height, for example 40 cm. From the body-to-terminal distance and the preset height, the mobile terminal can determine the position (0, 4, 4) of the observation point and the end point (0, 1, -1) of the target vector, perform view transformation on the image to be processed using (0, 4, 4, 0, 0, 0, 0, 1, -1) as the view transformation parameters to obtain the image to be displayed, and then display it on the display screen. As seen by the user, the angle between the displayed image and the two-dimensional plane of the display screen is 45 degrees.
When the body moves away from the mobile terminal, the terminal detects that the distance has increased, increases the z-axis coordinate of the observation point according to the detected distance, decreases the y-axis coordinate of the end point of the target vector, and keeps its z-axis coordinate unchanged; when the image to be displayed is then shown on the display screen, the angle between the displayed image and the two-dimensional plane of the display screen, as seen by the user, increases. When the body approaches the mobile terminal, the terminal decreases the z-axis coordinate of the observation point, increases the y-axis coordinate of the end point of the target vector, and keeps its z-axis coordinate unchanged; the angle between the displayed image and the plane of the display screen, as seen by the user, then decreases. Thus, as the body moves, the angle between the user's line of sight and the plane of the display screen keeps changing, and the mobile terminal can adjust the angle between the displayed image and the plane of the display screen in real time according to the body-to-terminal distance, so that the displayed image always faces the user's line of sight.
Alternatively, when the user rotates the mobile terminal, the terminal can detect the angle between the plane of the display screen and the horizontal plane through the gyroscope. As this angle increases, the mobile terminal decreases the y-axis coordinate of the end point of the target vector and increases its z-axis coordinate, and the user sees the angle between the displayed image and the plane of the display screen decrease. When the angle between the plane of the display screen and the horizontal plane reaches 45 degrees, the displayed image coincides with or is parallel to the plane of the display screen. Thus, as the mobile terminal rotates, the angle between the user's line of sight and the plane of the display screen keeps changing, and the mobile terminal can adjust the angle between the displayed image and the plane of the display screen in real time according to the angle between the screen plane and the horizontal plane, so that the displayed image always faces the user's line of sight.
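The sketch below illustrates one way these distance-driven update rules could be coded. The ViewParams structure, the callback name, and the step sizes (0.1 coordinate units per centimetre for the observation point, 0.05 per update for the target vector) are assumptions; only the directions of change come from the description above:

```c
#include <GL/glu.h>

typedef struct {
    double eye[3];    /* observation point       (Ax, Ay, Az) */
    double center[3]; /* observed object         (Bx, By, Bz) */
    double up[3];     /* target vector end point (Cx, Cy, Cz) */
} ViewParams;

/* Sketch: called whenever the distance sensor reports a change, so the
 * displayed image keeps facing the viewer. delta_cm > 0 means the body
 * moved away; delta_cm < 0 means it came closer. */
static void on_distance_changed(ViewParams *v, double delta_cm)
{
    if (delta_cm == 0.0)
        return;
    v->eye[2] += delta_cm * 0.1;                  /* away: Az grows; closer: Az shrinks */
    v->up[1]  -= (delta_cm > 0.0) ? 0.05 : -0.05; /* away: Cy shrinks; closer: Cy grows */
    /* v->up[2] (Cz) stays unchanged, as described above. */

    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(v->eye[0],    v->eye[1],    v->eye[2],
              v->center[0], v->center[1], v->center[2],
              v->up[0],     v->up[1],     v->up[2]);
}
```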
In this embodiment of the present application, the view transformation parameters (Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) may be set based on experience, or may be set by the mobile device based on other parameters it obtains. The following describes the process by which the mobile device sets the view transformation parameters according to other parameters. In another embodiment of the present application, after step 402 and before step 403, the image processing method further includes:
the mobile device shoots a target image and extracts image features from the target image, wherein the image features are head features;
if the similarity between the image features and the preset head features is greater than a preset threshold, the mobile device determines (Cx, Cy, Cz) according to the angle between the two-dimensional plane where the camera lens of the mobile device is located and the two-dimensional plane where the display screen is located;
The mobile device measures the distance from the user to the mobile device;
The mobile device determines (Ax, Ay, Az) from the distance and the angle;
The mobile device determines (Bx, By, Bz).
In this embodiment, the mobile device is provided with a camera inclined relative to the display screen, and the angle between the two-dimensional plane of the camera lens and the two-dimensional plane of the display screen may be denoted θ. The mobile device can shoot an image through the camera and extract image features, such as head features including eyes, eyebrows, hair and nose, from the shot image. If the similarity between the extracted image features and the preset image features is greater than the preset threshold, it is determined that a tilt angle exists between the eyes and the display screen; the value of this tilt angle is set to θ, and Cy and Cz are selected according to θ. The specific formula can be: tan θ = |Cy| / |Cz|. Cx may generally be set to 0.
A distance sensor may be arranged adjacent to the camera, and the mobile device can measure the distance from the head features to the mobile device through the distance sensor; this distance may be denoted L. Ay and Az are then selected according to θ and L, specifically Ay = L × sin θ × a and Az = L × cos θ × a, where a is a conversion coefficient for converting the actual distance into the three-dimensional coordinate system. Ax may generally be set to 0.
(Bx, By, Bz) may be (0, 0, 0).
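A sketch that directly transcribes this parameter derivation might look as follows; the conversion coefficient a = 0.1 is an assumed example value:

```c
#include <math.h>

/* Sketch: derive the view transformation parameters from the lens angle
 * theta (in radians) and the measured head distance L_cm, following
 * tan(theta) = |Cy| / |Cz|, Ay = L * sin(theta) * a, Az = L * cos(theta) * a. */
static void derive_view_params(double theta, double L_cm,
                               double eye[3], double center[3], double up[3])
{
    const double a = 0.1;            /* assumed cm -> coordinate-unit factor */

    eye[0] = 0.0;                    /* Ax is generally set to 0             */
    eye[1] = L_cm * sin(theta) * a;  /* Ay = L * sin(theta) * a              */
    eye[2] = L_cm * cos(theta) * a;  /* Az = L * cos(theta) * a              */

    center[0] = center[1] = center[2] = 0.0;  /* (Bx, By, Bz) = (0, 0, 0)    */

    up[0] = 0.0;                     /* Cx = 0                               */
    up[1] = sin(theta);              /* any Cy > 0, Cz < 0 with              */
    up[2] = -cos(theta);             /* tan(theta) = |Cy| / |Cz| works       */
}
```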
In this way, the mobile device can call the gluLookAt function according to the different positions of the observer relative to the mobile device, and thus dynamically adjust the displayed image on the display screen in real time.
In the image processing method shown in fig. 4, after the mobile device displays the image on the display screen, the image may still not directly face the user's line of sight, and the angle between the displayed image and the plane of the display screen needs to be adjusted. The present application provides methods for adjusting this angle so that the displayed image is perpendicular or substantially perpendicular to the user's line of sight, as follows:
In another embodiment of the present application, after step 404, the image processing method further includes: receiving key information, and modifying (Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) into (Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) according to the key information; calling gluLookAt(Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) to perform view transformation on the displayed image to obtain a target display image; and displaying the target display image on the display screen.
In this example, Ax′, Ay′, Az′ are respectively the modified x-axis, y-axis and z-axis coordinates of the observation point, Bx′, By′, Bz′ are respectively the modified x-axis, y-axis and z-axis coordinates of the observed object, and Cx′, Cy′, Cz′ are respectively the x-axis, y-axis and z-axis coordinates of the end point of the modified target vector.
Through a physical key (e.g., a volume key) or a virtual key, the mobile device may modify the view transformation parameters (Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) into the target view transformation parameters (Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) and call gluLookAt(Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) to perform view transformation on the displayed image to obtain the target display image, thereby changing the included angle between the displayed image and the display screen. For example, pressing volume+ once increases the angle between the upward direction in the view transformation parameters and the XY plane by 5 degrees and rotates the displayed image 5 degrees into the screen, while pressing volume− once decreases that angle by 5 degrees and rotates the displayed image 5 degrees out of the screen. It can be understood that the correspondence between the key information and the upward direction in the view transformation parameters may be set according to the actual situation, and is not limited here.
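As an illustrative sketch, a key handler implementing the 5-degree step described above could look like this; the handler name and event encoding are assumptions:

```c
#include <math.h>

static double g_up_angle_deg = 45.0; /* angle between the up direction and the XY plane */

/* Sketch: volume+ (key_plus != 0) tilts the image 5 degrees into the
 * screen by raising the up-vector angle; volume- lowers it by 5 degrees. */
static void on_volume_key(int key_plus, double up[3])
{
    const double rad = 3.14159265358979 / 180.0;

    g_up_angle_deg += key_plus ? 5.0 : -5.0;
    up[0] = 0.0;                         /* Cx' */
    up[1] =  cos(g_up_angle_deg * rad);  /* Cy' */
    up[2] = -sin(g_up_angle_deg * rad);  /* Cz' */
    /* ...then call gluLookAt again with the modified (Cx', Cy', Cz'). */
}
```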
It should be noted that the process by which the mobile device performs view transformation on the displayed image according to the target view transformation parameters to obtain the target display image, and then displays it on the display screen, is similar to steps 403 to 404 in the embodiment shown in fig. 4, and is not repeated here.
In another embodiment of the present application, after step 404, the image processing method further includes: detecting a touch operation applied to the display screen, and modifying the view transformation parameters (Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) into the target view transformation parameters (Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) according to the touch operation; calling gluLookAt(Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) to perform view transformation on the displayed image to obtain a target display image; and displaying the target display image on the display screen.
In this embodiment, when a finger slides on the display screen, the mobile device can detect the sliding operation and modify the view transformation parameters according to it, thereby changing the included angle between the displayed image and the display screen. For example, when the finger slides up, the displayed image rotates out of the screen, and when the finger slides down, it rotates into the screen; or, when the finger slides left, the displayed image rotates out of the screen, and when the finger slides right, it rotates into the screen. It can be understood that the correspondence between the sliding distance of the finger on the display screen and the rotation angle of the image may be set according to the actual situation, and is not limited here.
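A corresponding sketch for the touch case follows; since the text leaves the exact distance-to-angle correspondence open, the 1-degree-per-millimetre mapping and the handler signature below are assumptions:

```c
#include <math.h>

static double g_slide_angle_deg = 45.0; /* angle between the up direction and the XY plane */

/* Sketch: a downward slide rotates the image into the screen, an upward
 * slide (negative distance) rotates it out, clamped to [0, 90] degrees. */
static void on_touch_slide(double slide_down_mm, double up[3])
{
    const double rad = 3.14159265358979 / 180.0;

    g_slide_angle_deg += slide_down_mm;     /* assumed: 1 degree per mm */
    if (g_slide_angle_deg < 0.0)  g_slide_angle_deg = 0.0;
    if (g_slide_angle_deg > 90.0) g_slide_angle_deg = 90.0;

    up[0] = 0.0;                            /* Cx' */
    up[1] =  cos(g_slide_angle_deg * rad);  /* Cy' */
    up[2] = -sin(g_slide_angle_deg * rad);  /* Cz' */
    /* ...then re-issue gluLookAt with the modified target vector. */
}
```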
It should be noted that the process by which the mobile device performs view transformation on the displayed image according to the target view transformation parameters to obtain the target display image, and then displays it on the display screen, is similar to steps 403 to 404 in the embodiment shown in fig. 4, and is not repeated here.
In another embodiment of the present application, before step 401, the image processing method further includes: the mobile device detects the device posture; when the device posture is a horizontal placement posture, the mobile device is triggered to execute step 401.
In this embodiment, the mobile device may detect the device posture using the gyroscope, and when a horizontal placement posture is detected, steps 401 to 404 are triggered. In this way, the mobile device can automatically make the displayed image face the user's line of sight without the user having to launch the relevant program of the image processing method, saving the user's time.
Referring to fig. 8, another embodiment of an image processing method according to the present application includes:
801. The mobile device acquires an image to be processed;
802. The mobile device establishes a three-dimensional coordinate system consisting of an X axis, a Y axis and a Z axis, wherein the XY plane formed by the X axis and the Y axis is parallel to the two-dimensional plane where the display screen is located;
steps 801 to 802 are similar to steps 401 to 402 in the embodiment shown in fig. 4, and are not described again here.
803. The mobile device calls glRotate(α, Px, Py, Pz) to perform model transformation on the image to be processed to obtain an image to be displayed;
After the mobile device acquires the image to be processed, it may store the image in the display buffer and then perform model transformation on it using glRotate(α, Px, Py, Pz), a model transformation function provided by OpenGL ES. Here, α is the rotation angle, and Px, Py, Pz are respectively the x-axis, y-axis and z-axis coordinates of the end point of the rotation axis, with (0, 0, 0) as the starting point of that vector. When Px > 0, Py = 0 and Pz = 0, the displayed image rotates around the x axis, i.e., the x axis is the rotation axis; when 0° < α < 90°, the displayed image is rotated into the screen by no more than 90 degrees.
For example, when α is 45 degrees and (Px, Py, Pz) is (1, 0, 0), the model transformation parameters are (45, 1, 0, 0). With these parameters as the input of the glRotate function, the image to be displayed is obtained after the glRotate processing, rotated 45 degrees into the screen relative to the XY plane, as shown in fig. 9. It should be noted that the displayed image may also be processed by the glTranslate function to change its distance from the XY plane. For example, after obtaining the displayed image, the mobile device may execute glTranslate(0, 1, 0) to move the displayed image by 1 unit in the y-axis direction.
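A minimal sketch of this model transformation with the example parameters (45, 1, 0, 0) is shown below; glRotatef and glTranslatef are the concrete C entry points for the glRotate and glTranslate calls named in the text:

```c
#include <GL/gl.h>

/* Sketch: rotate the image 45 degrees into the screen around the x axis,
 * optionally moving it 1 unit along the y axis first. */
static void apply_model_transform(void)
{
    glMatrixMode(GL_MODELVIEW);
    glTranslatef(0.0f, 1.0f, 0.0f);  /* optional glTranslate(0, 1, 0)        */
    glRotatef(45.0f,                 /* alpha: the rotation angle            */
              1.0f, 0.0f, 0.0f);     /* rotation axis end point (Px, Py, Pz) */
}
```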
804. The mobile device displays the image to be displayed on the display screen.
Step 804 is similar to step 404 in the embodiment shown in fig. 4, and is not repeated here.
Optionally, in another embodiment of the present application, before step 803, the image processing method further includes:
the mobile device shoots a target image and extracts image features from the target image, wherein the image features are head features;
if the similarity between the image features and the preset head features is greater than a preset threshold, the mobile device takes the angle between the two-dimensional plane where its camera lens is located and the two-dimensional plane where the display screen is located as α;
The mobile device determines (Px, Py, Pz).
In this embodiment, the mobile device is provided with a camera inclined relative to the display screen, and the angle between the two-dimensional plane of the camera lens and the two-dimensional plane of the display screen may be denoted θ. The mobile device can shoot an image through the camera and extract image features, such as head features including eyes, eyebrows, hair and nose, from the shot image. If the similarity between the extracted image features and the preset image features is greater than the preset threshold, it is determined that a tilt angle exists between the eyes and the display screen, and the mobile device takes θ as the rotation angle α. For (Px, Py, Pz), for example, Py = 0, Pz = 0, and Px may take any value on the positive half of the x axis.
Optionally, in another embodiment of the present application, after the image is displayed on the display screen, the image processing method further includes: receiving key information, and modifying α into α′ according to the key information, where α′ is the modified rotation angle; calling glRotate(α′, Px, Py, Pz) to perform model transformation on the displayed image to obtain a target display image; and displaying the target display image on the display screen.
Specifically, through a physical key (e.g., a volume key) or a virtual key, the mobile device may modify the model transformation parameters (α, Px, Py, Pz) into the target model transformation parameters (α′, Px, Py, Pz) and call glRotate(α′, Px, Py, Pz) to perform model transformation on the displayed image to obtain the target display image, thereby changing the included angle between the displayed image and the display screen. For example, pressing volume+ once increases the rotation angle in the model transformation parameters by 5 degrees, and pressing volume− once decreases it by 5 degrees. It can be understood that the correspondence between the key information and the rotation angle may be set according to the actual situation, and is not limited here.
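For illustration, a sketch of adjusting α from key input might look as follows; the handler name and the global state are assumptions, while the 5-degree step comes from the example above:

```c
#include <GL/gl.h>

static float g_alpha_deg = 45.0f; /* current rotation angle alpha */

/* Sketch: volume+ (key_plus != 0) adds 5 degrees to alpha, volume-
 * subtracts 5 degrees, and the model transformation is re-issued. */
static void on_volume_key_model(int key_plus)
{
    g_alpha_deg += key_plus ? 5.0f : -5.0f;
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glRotatef(g_alpha_deg, 1.0f, 0.0f, 0.0f); /* glRotate(alpha', Px, Py, Pz) */
}
```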
Optionally, in another embodiment of the present application, after the image is displayed on the display screen, the image processing method further includes: detecting a touch operation applied to the display screen, and modifying the model transformation parameters (α, Px, Py, Pz) into the target model transformation parameters (α′, Px, Py, Pz) according to the touch operation; calling glRotate(α′, Px, Py, Pz) to perform model transformation on the displayed image to obtain a target display image; and displaying the target display image on the display screen.
Specifically, when a finger slides on the display screen, the mobile device can detect the sliding operation and modify α in the model transformation parameters according to it, thereby changing the included angle between the displayed image and the display screen. For example, when the finger slides up, the rotation angle in the model transformation parameters decreases and the displayed image rotates out of the screen; when the finger slides down, the rotation angle increases and the displayed image rotates into the screen. Or, when the finger slides left, the rotation angle decreases and the displayed image rotates out of the screen; when the finger slides right, the rotation angle increases and the displayed image rotates into the screen. It can be understood that the correspondence between the sliding distance of the finger on the display screen and the rotation angle may be set according to the actual situation, and is not limited here.
For the sake of understanding, the image processing method in the present application is described in detail below with a specific application scenario:
Taking a mobile phone as the mobile device: the user lays the phone flat on a desktop and taps a video; the phone acquires the video file and decodes it to obtain video frames. The phone also establishes a three-dimensional coordinate system using OpenGL, with the XY plane parallel to the two-dimensional plane where the display screen is located.
Taking (0, 5, 5, 0, 0, 0, 0, 1, -1) as the preset view transformation parameters, the phone calls the gluLookAt function to perform view transformation on each video frame to obtain a new video frame, which is displayed on the display screen. The video frame is rotated 45 degrees into the screen relative to the display screen, so when the angle between the user's line of sight and the display screen is 45 degrees, the video frame is perpendicular to the line of sight. When the phone continuously performs this view transformation on successive video frames, the user's line of sight remains perpendicular to the video.
In another case, the phone may capture an image, and if eyes are detected in the captured image, it obtains the angle between the camera lens and the display screen, for example 45 degrees. The mobile device can then determine that Cy and Cz are equal in magnitude and opposite in sign and set Cx to 0, giving, specifically, (0, 1, -1). The phone can also measure the distance from the eyes to the phone, for example 50 cm; with a conversion coefficient of 0.1, the mobile device determines Ay = 50 × sin 45° × 0.1 ≈ 3.5, Az ≈ 3.5, Ax = 0, and (Bx, By, Bz) = (0, 0, 0), so the view transformation parameters are determined to be (0, 3.5, 3.5, 0, 0, 0, 0, 1, -1).
In another case, with the preset model transformation parameters (45, 1, 0, 0), the phone may use the glRotate function to rotate each decoded video frame 45 degrees into the screen relative to the display screen; when the angle between the user's line of sight and the display screen is 45 degrees, the video frame is perpendicular to the line of sight.
For a video frame already rotated 45 degrees into the screen relative to the display screen, suppose the angle between the user's line of sight and the display screen is 30 degrees and the phone rotates the video frame 5 degrees into the screen for each volume+ key press. The user can press volume+ three times, and the phone rotates the video frame a further 15 degrees into the screen, so that it is rotated 60 degrees into the screen relative to the display screen and the user's line of sight is perpendicular to the video frame. Alternatively, suppose the phone rotates the video frame 5 degrees into the screen for every 5 mm the finger slides down on the display screen; when the user slides down 15 mm, the phone rotates the video frame a further 15 degrees into the screen, again rotating it 60 degrees relative to the display screen so that the user's line of sight is perpendicular to the video frame.
Thus, the image processing methods of the present application can display the image tilted obliquely upward so that the angle between the user's line of sight and the displayed image is substantially perpendicular, without requiring a support stand; this improves human-computer interaction efficiency and saves the cost of a support stand.
The image processing method in the present application is described above from a method perspective, and the mobile device in the present application is described below from an apparatus perspective:
referring to fig. 10, a mobile device 1000 may implement the image processing method in the embodiment or the optional embodiment shown in fig. 4, where the mobile device 1000 may include:
an obtaining module 1001 configured to obtain an image to be processed;
the establishing module 1002 is configured to establish a three-dimensional coordinate system formed by an X axis, a Y axis, and a Z axis, where an XY plane formed by the X axis and the Y axis is parallel to a two-dimensional plane where the display screen is located;
a view transformation module 1003, configured to call gluLookAt(Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) to perform view transformation on the image to be processed to obtain an image to be displayed, wherein Ax, Ay, Az are respectively the x-axis, y-axis and z-axis coordinates of the observation point, Bx, By, Bz are respectively the x-axis, y-axis and z-axis coordinates of the observed object, and Cx, Cy, Cz are respectively the x-axis, y-axis and z-axis coordinates of the end point of a target vector indicating the direction directly above the observation point, with Ay > 0, Az > 0, Cx = 0, Cy > 0, Cz < 0;
The display module 1004 is configured to display an image to be displayed on a display screen.
Based on the mobile device shown in fig. 10, in an alternative embodiment of the present application, the mobile device 1000 further includes:
a shooting module 1101 for shooting a target image;
an extracting module 1102, configured to extract an image feature from the target image, where the image feature is a head feature;
a determining module 1103, configured to determine (Cx, Cy, Cz) according to the angle between the two-dimensional plane where the camera lens of the mobile device is located and the two-dimensional plane where the display screen is located, if the similarity between the image features and the preset head features is greater than a preset threshold;
a measurement module 1104, configured to measure the distance from the head features to the mobile device;
the determining module 1103 is further configured to determine (Ax, Ay, Az) from the distance and the angle;
the determining module 1103 is further configured to determine (Bx, By, Bz).
Based on the mobile device shown in fig. 10, the present application also provides a mobile device capable of adjusting an included angle between a display image and a display screen, as shown in fig. 12. Optionally, in another embodiment of the present application, the mobile device 1000 further includes:
a receiving module 1201, configured to receive key information;
a first modification module 1202, configured to modify (Ax, Ay, Az, Bx, By, Bz, Cx, Cy, Cz) into (Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) according to the key information;
the view transformation module 1003 is further configured to call gluLookAt(Ax′, Ay′, Az′, Bx′, By′, Bz′, Cx′, Cy′, Cz′) to perform view transformation on the image to be displayed to obtain a target display image;
the display module 1004 is further configured to display the target display image on a display screen.
Further, referring to fig. 12, in another alternative embodiment of the present application, the mobile device 1000 further includes:
a touch module 1203, configured to detect a touch operation applied to the display screen;
a second modification module 1204, configured to modify the view transformation parameter into a target view transformation parameter according to the touch operation;
the view transformation module 1003 is further configured to perform view transformation on the image to be displayed according to the target view transformation parameters to obtain a target display image;
the display module 1004 is further configured to display the target display image on a display screen.
For understanding, the relationship between the modules in the mobile device provided in the present application is described in detail in a specific application scenario as follows:
The user lays a mobile phone flat on a desktop and taps a video; the obtaining module 1001 acquires the video file and decodes it to obtain video frames. The establishing module 1002 may establish a three-dimensional coordinate system using OpenGL, with the XY plane parallel to the two-dimensional plane where the display screen is located.
Taking (0, 5, 5, 0, 0, 0, 0, 1, -1) as the preset view transformation parameters, the view transformation module 1003 may perform view transformation on each video frame using the gluLookAt function to obtain a new video frame, and the display module 1004 may display the new video frame on the display screen. The video frame is rotated 45 degrees into the screen relative to the display screen, so when the angle between the user's line of sight and the display screen is 45 degrees, the video frame is perpendicular to the line of sight. When the phone continuously performs this view transformation on successive video frames, the user's line of sight remains perpendicular to the video image.
In another case, the capturing module 1101 may capture an image, and the extracting module 1102 may extract image features from the captured image. If the shot image has eyes, the determining module 1103 may obtain an included angle between the camera lens and the display screen, where the included angle is 45 degrees, and may determine CyAnd CzAre equal in value and opposite in sign, CxSpecifically, (0, 1, -1) may be set to 0. The measuring module 1104 can also measure the distance between the eye and the mobile phone, for example, 50cm, the conversion coefficient is 0.1, and the determining module 1103 can determine ay=50×sin45°×(0.1)≈3.5,Az≈3.5,Ax0, and (B)x,By,Bz) Is (0, 0, 0), thereby determining the view transformation parameters to be (0, 3.5, 3.5, 0, 0, 0, 0, 1, -1).
For a video frame rotated 45 degrees into the screen relative to the display screen, when the angle between the user's line of sight and the display screen is 30 degrees, suppose the phone rotates the video frame 15 degrees into the screen for one volume+ key press. When the user presses volume+, the receiving module 1201 receives the "volume+" key information, the first modification module 1202 modifies (0, 5, 5, 0, 0, 0, 0, 1, -1) into (0, 5, 5, 0, 0, 0, 0, 1, -2) according to it, the view transformation module 1003 rotates the video frame 15 degrees into the screen according to (0, 5, 5, 0, 0, 0, 0, 1, -2), and the display module 1004 displays the video frame on the display screen. Since the video frame is now rotated 60 degrees into the screen relative to the display screen, the user's line of sight is perpendicular to the video frame.
Alternatively, suppose the phone rotates the video frame 15 degrees into the screen when the finger slides down 15 mm on the display screen. When the user slides down 15 mm, the touch module 1203 detects the touch operation, the second modification module 1204 modifies (0, 5, 5, 0, 0, 0, 0, 1, -1) into (0, 5, 5, 0, 0, 0, 0, 1, -2) according to it, the view transformation module 1003 rotates the video frame 15 degrees into the screen according to (0, 5, 5, 0, 0, 0, 0, 1, -2), and the display module 1004 displays the video frame on the display screen. Since the video frame is rotated 60 degrees into the screen relative to the display screen, the user's line of sight is perpendicular to the video frame.
Referring to fig. 13, the present application provides a mobile device 1300, which may implement the method embodiment or alternative embodiment shown in fig. 8, where the mobile device 1300 includes:
an obtaining module 1301, configured to obtain an image to be processed;
the establishing module 1302 is configured to establish a three-dimensional coordinate system formed by an X axis, a Y axis and a Z axis, where an XY plane formed by the X axis and the Y axis is parallel to a two-dimensional plane where the display screen is located;
a model transformation module 1303, configured to call glRotate(α, Px, Py, Pz) to perform model transformation on the image to be processed to obtain an image to be displayed, wherein α is the rotation angle, Px, Py, Pz are respectively the x-axis, y-axis and z-axis coordinates of the end point of the rotation axis, 0° < α < 90°, Px > 0, Py = 0, Pz = 0;
And a display module 1304, configured to display the image to be displayed on the display screen.
Referring to fig. 14 based on the mobile device shown in fig. 13, in another alternative embodiment of the present application, the mobile device 1300 may further include:
an image pickup module 1401 for picking up a target image;
an extracting module 1402, configured to extract an image feature from the target image, where the image feature is a head feature;
a setting module 1403, configured to, if the similarity between the image feature and the preset head feature is greater than a preset threshold, take an angle between a two-dimensional plane where a camera lens of the mobile device is located and a two-dimensional plane where a display screen is located as α.
Based on the mobile device shown in fig. 13, the present application further provides a mobile device capable of adjusting an included angle between a display image and a display screen, as shown in fig. 15. Optionally, in another embodiment of the present application, the mobile device 1300 further includes:
a receiving module 1501, configured to receive key information;
a first modification module 1502, configured to modify α to α' according to the key information;
the model transformation module 1303 is further configured to call glRotate(α′, Px, Py, Pz) to perform model transformation on the image to be displayed to obtain a target display image;
the display module 1304 is further configured to display the target display image on a display screen.
Optionally, in another embodiment of the present application, the mobile device 1300 further includes:
a touch module 1503, configured to detect a touch operation applied to the display screen;
a second modification module 1504, configured to modify α into α′ according to the touch operation, where α′ is the modified rotation angle;
the model transformation module 1303 is further configured to call glRotate(α′, Px, Py, Pz) to perform model transformation on the image to be displayed to obtain a target display image;
the display module 1304 is further configured to display the target display image on a display screen.
For understanding, the relationship between the modules in the mobile device provided in the present application is described in detail in a specific application scenario as follows:
The user lays the mobile phone flat on a desktop and taps a video; the obtaining module 1301 acquires the video file and decodes it to obtain video frames. The establishing module 1302 may establish a three-dimensional coordinate system using OpenGL, with the XY plane parallel to the two-dimensional plane where the display screen is located.
Taking (45, 1, 0, 0) as the preset model transformation parameters, for each decoded video frame the model transformation module 1303 may use the glRotate function to rotate the video frame 45 degrees into the screen relative to the display screen, and the display module 1304 may display the rotated video frame on the display screen. When the angle between the user's line of sight and the display screen is 45 degrees, the video frame is perpendicular to the line of sight.
For a video frame rotated 45 degrees into the screen relative to the display screen, when the angle between the user's line of sight and the display screen is 30 degrees, suppose the phone rotates the video frame 15 degrees into the screen for one volume+ key press. When the user presses volume+, the receiving module 1501 receives the "volume+" key information, the first modification module 1502 modifies (45, 1, 0, 0) into (60, 1, 0, 0) according to it, the model transformation module 1303 rotates the video frame 15 degrees into the screen according to (60, 1, 0, 0), and the display module 1304 displays the video frame on the display screen. Since the video frame is now rotated 60 degrees into the screen relative to the display screen, the user's line of sight is perpendicular to the video frame.
Alternatively, suppose the phone rotates the video frame 5 degrees into the screen for every 5 mm the finger slides down on the display screen. When the user slides down 15 mm, the touch module 1503 detects the touch operation, the second modification module 1504 modifies (45, 1, 0, 0) into (60, 1, 0, 0) according to it, the model transformation module 1303 rotates the video frame 15 degrees into the screen according to (60, 1, 0, 0), and the display module 1304 displays the video frame on the display screen. Since the video frame is rotated 60 degrees into the screen relative to the display screen, the user's line of sight is perpendicular to the video frame.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the present application, it should be understood that the disclosed system, apparatus and method may be implemented in other manners. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present invention. And the aforementioned storage medium includes: a U-disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk or an optical disk, and other various media capable of storing program codes.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.