CN107632985B - Webpage preloading method and device - Google Patents
- Publication number: CN107632985B (application number CN201610566743.4A)
- Authority
- CN
- China
- Prior art keywords
- sub
- gray
- display area
- characteristic value
- target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The invention discloses a webpage preloading method and device, belonging to the technical field of computers. The method comprises the following steps: acquiring a face picture of a user in real time; extracting a gray characteristic value of the face picture; acquiring a target sub-display area corresponding to the gray characteristic value according to the gray characteristic value and the current display state; determining a target title item according to the target sub-display area; and preloading the webpage corresponding to the target title item when the duration for which the user reads the target title item exceeds a preset duration. Because the method preloads title items in this targeted manner, data traffic is saved, a large number of unnecessary webpages are prevented from occupying the memory of the terminal, and the performance of the terminal is improved.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a webpage preloading method and device.
Background
With the development of computer technology, terminals with reading functions are widely used in the life of users, such as smart phones, e-book readers, tablet computers, desktop computers, and the like. In order to meet the reading requirements of users, a plurality of reading applications are installed in the terminal. After the reading application is started, a main list is displayed on a display area of the reading application, wherein the main list comprises a plurality of title items, and each title item corresponds to a webpage.
In order to increase the reading speed, the prior art preloads the web pages corresponding to the title items on the main list. The specific preloading process is as follows: in the process of scrolling the main list, the title items in the main list displayed on the current display area are obtained, and then the webpage corresponding to the displayed title items is preloaded.
In the process of implementing the invention, the inventor finds that the prior art has at least the following problems:
in the prior art, the webpages corresponding to all the title items displayed in the current display area are preloaded, yet most of these preloaded webpages are never viewed by the user. A large amount of data traffic is therefore wasted, unnecessary webpages occupy the terminal memory, and the performance of the terminal is reduced.
Disclosure of Invention
In order to solve the problems in the prior art, embodiments of the present invention provide a method and an apparatus for preloading a web page. The technical scheme is as follows:
in one aspect, a method for preloading a web page is provided, and the method includes:
during the display of a main list of a designated reading application, acquiring a face picture of a user in real time through a front camera, wherein the main list comprises a plurality of title items, and each title item corresponds to a webpage;
extracting a gray characteristic value of the face picture;
acquiring a target sub-display area corresponding to the gray characteristic value from a target face picture database according to the gray characteristic value and the current display state, wherein the target face picture database stores the corresponding relation between each sub-display area and the gray characteristic value range in the current display state;
according to the target sub-display area, determining a target title item from the currently displayed main list;
and preloading the webpage corresponding to the target title item when it is detected that the duration for which the user's sight line stays on the target title item exceeds a preset duration.
In another aspect, an apparatus for preloading web pages is provided, the apparatus comprising:
the first acquisition module is used for acquiring a face picture of a user in real time through a front camera during the display of a main list of a designated reading application, wherein the main list comprises a plurality of title items, and each title item corresponds to a webpage;
the first extraction module is used for extracting the gray characteristic value of the face picture;
the second acquisition module is used for acquiring a target sub-display area corresponding to the gray characteristic value from a target face picture database according to the gray characteristic value and the current display state, wherein the target face picture database stores the corresponding relation between each sub-display area and the gray characteristic value range in the current display state;
a determining module, configured to determine a target title entry from the currently displayed main list according to the target sub-display area;
and the preloading module is used for preloading the webpage corresponding to the target title item when it is detected that the duration for which the user's sight line stays on the target title item exceeds the preset duration.
The technical scheme provided by the embodiment of the invention has the following beneficial effects:
extracting a gray characteristic value from the face picture, determining a target title item according to the gray characteristic value and the current display state, and preloading a webpage corresponding to the target title item when the time for reading the target title item by a user exceeds the preset time. The title items are pre-loaded in a targeted manner in the process, so that data traffic is saved, a large number of unnecessary webpages are prevented from occupying the memory of the terminal, and the performance of the terminal is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings needed to be used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
FIG. 1 is a flowchart of a method for preloading web pages according to an embodiment of the present invention;
FIG. 2 is a flowchart of a method for preloading web pages according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of a terminal display interface according to another embodiment of the invention;
fig. 4 is a schematic diagram of a gray-scale feature value extracted from a face picture according to another embodiment of the present invention;
FIG. 5 is a schematic diagram of a terminal display interface according to another embodiment of the invention;
FIG. 6 is a diagram illustrating a web page preloading procedure according to another embodiment of the present invention;
FIG. 7 is a schematic structural diagram of an apparatus for preloading web pages according to another embodiment of the present invention;
fig. 8 is a schematic structural diagram of a web preloading terminal according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, embodiments of the present invention will be described in detail with reference to the accompanying drawings.
In modern life, many users like to read news and electronic books on smart phones and other devices during leisure time to relieve the stress of work. To improve the reading experience, the terminal generally preloads the webpages corresponding to the title items displayed in the main list of a reading application. On a mobile network, however, this preloading not only wastes the terminal's data traffic, but most of the downloaded webpages are unnecessary and are never viewed by the user. In order to save the data traffic of the terminal and prevent unnecessary webpages from occupying its memory, thereby improving terminal performance, an embodiment of the present invention provides a webpage preloading method. Referring to fig. 1, the method flow provided by the embodiment of the present invention includes:
101. During the display of the main list of the designated reading application, a front camera collects face pictures of the user in real time, wherein the main list comprises a plurality of title items, and each title item corresponds to a webpage.
102. And extracting the gray characteristic value of the face picture.
103. And acquiring a target sub-display area corresponding to the gray characteristic value from a target face picture database according to the gray characteristic value and the current display state, wherein the target face picture database stores the corresponding relation between each sub-display area and the gray characteristic value range in the current display state.
104. And determining a target title item from the currently displayed main list according to the target sub-display area.
105. And preloading the webpage corresponding to the target title item when it is detected that the duration for which the user's sight line stays on the target title item exceeds the preset duration.
According to the method provided by the embodiment of the invention, the gray characteristic value is extracted from the face picture, the target title item is determined according to the gray characteristic value and the current display state, and then the webpage corresponding to the target title item is preloaded when the time length for reading the target title item by a user exceeds the preset time length. The title items are pre-loaded in a targeted manner in the process, so that data traffic is saved, a large number of unnecessary webpages are prevented from occupying the memory of the terminal, and the performance of the terminal is improved.
In another embodiment of the present invention, extracting gray-scale feature values of a face picture includes:
reducing the face picture to a specified size;
converting the face picture of the specified size into gray values of a specified number of levels;
calculating an average gray value of gray values of a specified number of levels;
comparing the gray value of each level with the average gray value to obtain a specified number of comparison results;
a specified number of comparison results are combined into a gray scale feature value.
In another embodiment of the present invention, before obtaining the target sub-display area corresponding to the gray feature value from the target face image database according to the gray feature value and the current display state, the method further includes:
in any display state, dividing a display area of the terminal into a plurality of sub-display areas;
when it is detected that a user logs in to a designated reading application and the user's sight line stays on any sub-display area, collecting a plurality of reference face pictures of the user through a front camera until a touch operation by the user on the sub-display area is detected;
extracting a gray characteristic value of each reference face picture;
forming a similar picture group by the reference face pictures of which the Hamming distance between the gray characteristic values is smaller than a preset value;
acquiring a gray characteristic value range corresponding to the similar picture group;
taking the gray characteristic value range as a gray characteristic value range corresponding to the sub-display area;
and storing the corresponding relation between each sub-display area and the gray characteristic value range in the display state to obtain a human face image database.
In another embodiment of the present invention, the method further comprises:
recording the acquisition time of each reference face picture;
acquiring minimum acquisition time and maximum acquisition time from acquisition time corresponding to the similar picture group;
taking the time interval between the minimum acquisition time and the maximum acquisition time as the preloading time of the sub-display area;
acquiring the average value of the preloading time corresponding to the sub-display areas;
and taking the average value as a preset time length in a display state.
In another embodiment of the present invention, determining a target title entry from a currently displayed main list according to a target sub-display area includes:
acquiring a coordinate range of a target sub-display area in the vertical direction;
calling an API (Application Programming Interface), and acquiring a title item in a coordinate range from a currently displayed main list;
and taking the title entries positioned in the coordinate range in the main list as target title entries.
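As a sketch of this selection step, a title entry can be treated as a target when its vertical extent overlaps the coordinate range of the target sub-display area. The TitleItem type and the overlap rule below are assumptions for illustration, not part of the patent text:

```java
import java.util.ArrayList;
import java.util.List;

public class TitleItemSelector {
    public static class TitleItem {
        final String title;
        final int top;    // vertical start coordinate of the item on screen
        final int bottom; // vertical end coordinate of the item on screen
        public TitleItem(String title, int top, int bottom) {
            this.title = title; this.top = top; this.bottom = bottom;
        }
    }

    // Return the titles whose vertical extent overlaps [rangeTop, rangeBottom],
    // i.e. the candidate target title items for the target sub-display area.
    public static List<String> targetTitles(List<TitleItem> items, int rangeTop, int rangeBottom) {
        List<String> result = new ArrayList<>();
        for (TitleItem item : items) {
            if (item.bottom > rangeTop && item.top < rangeBottom) {
                result.add(item.title);
            }
        }
        return result;
    }
}
```

In practice the item bounds would come from the list view through the API mentioned above; here they are passed in directly so the selection logic stands alone.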
In another embodiment of the present invention, the display states include a landscape display state and a portrait display state.
All the above-mentioned optional technical solutions can be combined arbitrarily to form the optional embodiments of the present invention, and are not described herein again.
The embodiment of the invention provides a webpage preloading method, and referring to fig. 2, the method provided by the embodiment of the invention comprises the following steps:
201. the terminal establishes a human face picture database in different display states in advance.
The terminal may be a smart phone, an electronic book reader, a personal computer, a tablet computer, a notebook computer, or the like; this embodiment does not specifically limit the product type of the terminal. In order to meet the reading requirements of users, a plurality of reading applications are installed in the terminal, including a news reading application, an electronic book reading application, and the like. After a reading application is started, the terminal displays the main list of the reading application on the display interface. The main list comprises a plurality of title items, and each title item is in fact a webpage link; when the link is clicked, the terminal displays the webpage corresponding to that title item on the display interface. To conveniently maintain the reading schedules of different users, the reading application provides a registration function through which a user can register an account. When the user logs in to the reading application through the registered account for the first time, the terminal records information such as the user's reading schedule and reading habits and provides corresponding services. When the user subsequently uses the reading application again, no further login is needed: the terminal assumes by default that the user is the currently logged-in user. If another user uses the reading application on this terminal, that user can switch to a self-registered account through the account switching function of the reading application.
In this embodiment, the terminal is provided with a front-facing camera, when the reading application is started and a user logs in the reading application through a registered account, the terminal acquires a face picture of the user through the front-facing camera, and constructs a face picture database corresponding to the user according to the acquired face picture, where the face picture database is used for determining a title item currently read by the user from a main list of the reading application.
Generally, the display states of the terminal include a horizontal screen display state, a vertical screen display state and the like, and in different display states, the face pictures acquired by the front camera are different, and further, the face picture database constructed according to the acquired face pictures is also different. Therefore, for different display states of the terminal, the method provided by the embodiment includes, but is not limited to, the following two cases when the face image database is constructed:
in the first case, for the horizontal screen display state, the following steps 20111-20117 can be seen in the specific construction process.
20111. In the landscape display state, the terminal divides the display area into a plurality of sub-display areas.
Generally, the display area on the terminal display interface is rectangular. In the horizontal screen display state, straight lines parallel to the length of the rectangular display area are taken as dividing lines, so that the display area can be equally divided into a plurality of identical areas, each of which is a sub-display area. In the specific division, the display area may be divided into 6 sub-display areas or into 10 sub-display areas; this embodiment does not specifically limit the number of sub-display areas. Fig. 3 shows a schematic diagram of the display interface of the terminal in the landscape display state. Referring to fig. 3, a rectangular coordinate system may be established with the length of the rectangular display area as the X axis and its width as the Y axis, and the display area may then be divided into 10 sub-display areas by dividing lines parallel to the X axis.
In order to distinguish different sub-display areas, the terminal may set different numbers for different sub-display areas, for example, the number 1 may be set for the uppermost sub-display area according to the display position, and so on, and the number n may be set for the lowermost sub-display area.
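Under this equal-division and numbering scheme, the sub-display area containing a given vertical coordinate can be computed directly. A minimal sketch, assuming 1-based numbering from the top as in the example above (the class and method names are illustrative):

```java
public class SubAreaIndex {
    // Map a vertical coordinate to the 1-based number of the sub-display
    // area that contains it, given the display height and the area count.
    public static int areaNumber(int y, int displayHeight, int areaCount) {
        int areaHeight = displayHeight / areaCount; // equal division
        int number = y / areaHeight + 1;
        return Math.min(number, areaCount); // clamp coordinates on the bottom edge
    }
}
```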
20112. When it is detected that the user logs in to the designated reading application and the user's sight line stays on any sub-display area, a plurality of reference face pictures of the user are collected through the front camera until a touch operation by the user on the sub-display area is detected.
When it is detected that the user logs in to the designated reading application, the terminal collects, through the front camera, a plurality of reference face pictures of the user while the user's sight line stays on each sub-display area. Taking any sub-display area as an example: when the user's sight line stays on the sub-display area, the terminal collects multiple reference face pictures of the user through the front camera until the user finishes reading the title items displayed on the sub-display area and the terminal detects a touch operation on the sub-display area.
When the terminal collects the face picture of the user, this can be implemented through the FaceDetector class and its findFaces method. The FaceDetector acquisition code is as follows:

import android.graphics.Bitmap;
import android.graphics.BitmapFactory;
import android.media.FaceDetector;

// FaceDetector requires the bitmap to use the RGB_565 format
BitmapFactory.Options bitmapOption = new BitmapFactory.Options();
bitmapOption.inPreferredConfig = Bitmap.Config.RGB_565;
Bitmap myBitmap = BitmapFactory.decodeResource(getResources(), R.drawable.mm, bitmapOption);
// Detect at most one face in the collected picture
FaceDetector mFaceDetector = new FaceDetector(myBitmap.getWidth(), myBitmap.getHeight(), 1);
FaceDetector.Face[] mFace = new FaceDetector.Face[1];
int facesFound = mFaceDetector.findFaces(myBitmap, mFace);
For subsequent use, the terminal can store the face picture acquired by the front camera as a Bitmap object, i.e. an in-memory image representation holding the user's face picture for later processing.
20113. And the terminal extracts the gray characteristic value of each reference face picture.
When the terminal extracts the gray characteristic value of each reference face picture, the following steps 201131-201135 can be adopted:
201131, the terminal reduces each reference face picture to a specified size.
In this embodiment, it is preferable to reduce each reference face picture to an 8 × 8 size, i.e. 64 pixels. Reducing each reference face picture to 64 pixels removes the details of the picture while keeping basic information such as its structure and brightness, and avoids picture differences caused by different sizes and proportions.
201132, the terminal converts the reference face picture of the specified size into gray values of a specified number of levels.
In the present embodiment, the specified number of levels is 64. The reference face picture reduced to 64 pixels is converted to 64-level gray values; that is, every pixel of each reference face picture takes one of 64 gray levels. Converting the reference face picture of the specified size to 64-level gray values reduces the amount of computation on the reference face picture.
201133, the terminal calculates the mean gray value of the gray values for the specified number of levels.
The terminal may obtain an average gray value by adding the 64-level gray values of each reference face picture and averaging.
201134, the terminal compares the gray value of each level with the average gray value to obtain a specified number of comparison results.
For each reference face picture, the terminal compares the gray value of each level in the 64-level gray values with the average gray value respectively, and when the gray value of any level is greater than or equal to the average gray value, the comparison result is counted as 1; when the gray value of any level is less than the average gray value, the comparison result is 0. When 64 levels of gray values are compared with the average gray value, 64 comparison results are obtained.
201135, the terminal combines a specified number of comparison results into a gray level feature value.
The terminal combines the 64 comparison results together to obtain a 64-bit integer, and the 64-bit integer is the gray characteristic value of the reference face picture, which is also called the fingerprint of the face reference picture. Referring to fig. 4, the left image in fig. 4 is a reference face picture, and the right image in fig. 4 is a gray characteristic value corresponding to the reference face picture, that is, a fingerprint of the reference face picture.
It should be noted that, when all comparison results of each reference face picture are combined together, the combination order may be set arbitrarily, as long as it is ensured that all reference face pictures are combined in the same order. In addition, the above description is given by taking the example of reducing each reference face picture to 64 pixels, and in practical application, each reference face picture can also be reduced to 36 pixels, 81 pixels, and so on, and this embodiment is not described one by one.
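Steps 201131-201135 amount to an average-hash-style fingerprint. The following Java sketch illustrates the combining step, assuming the picture has already been reduced and converted to 64 gray values; the class and method names are illustrative, not from the patent:

```java
public class GrayFeature {
    // Combine an 8x8 grid of gray values into a 64-bit feature value:
    // each bit is 1 if the pixel is at or above the average gray value, else 0.
    public static long grayFeatureValue(int[] grayPixels) {
        if (grayPixels.length != 64) {
            throw new IllegalArgumentException("expected 64 gray values");
        }
        long sum = 0;
        for (int g : grayPixels) {
            sum += g;
        }
        long avg = sum / 64; // average gray value of the picture
        long feature = 0L;
        for (int i = 0; i < 64; i++) {
            feature <<= 1;
            if (grayPixels[i] >= avg) {
                feature |= 1L; // comparison result 1: at or above the average
            }
        }
        return feature; // the 64-bit "fingerprint" of the picture
    }
}
```

As noted in the text, the bit order is arbitrary as long as every picture is combined in the same order; this sketch fixes row-major order.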
20114. And the terminal forms a similar picture group by the reference face pictures of which the Hamming distance between the gray characteristic values is smaller than a preset value.
After extracting the gray characteristic value of each reference face picture, the terminal compares the data at the same position of the gray characteristic values of any two reference face pictures to obtain the Hamming distance between the gray characteristic values of the two reference face pictures. And if the Hamming distance between the gray characteristic values of the two reference face pictures is smaller than a preset value, taking the two reference face pictures as similar pictures, and forming a similar picture group by all the reference face pictures which are similar pictures with the same reference face picture in the collected multiple reference face pictures.
The preset value may be 5, 7, 10, or the like; this embodiment does not specifically limit the preset value. Taking a preset value of 5 as an example: when the Hamming distance between the gray characteristic values of any two reference face pictures is less than 5, that is, the data at corresponding positions of the two feature values differ in fewer than 5 positions, the two reference face pictures are considered similar; when the Hamming distance between the gray characteristic values of any two reference face pictures is 5 or more, that is, the data at corresponding positions differ in 5 or more positions, the two reference face pictures are considered dissimilar.
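For 64-bit feature values, the Hamming distance described above can be computed by XOR-ing the two values and counting the set bits. A sketch (the class name and the similarity helper are illustrative assumptions; the threshold parameter corresponds to the preset value):

```java
public class HammingSimilarity {
    // Number of bit positions at which two 64-bit feature values differ.
    public static int hammingDistance(long fpA, long fpB) {
        return Long.bitCount(fpA ^ fpB);
    }

    // Two reference face pictures are treated as similar when their
    // feature values differ in fewer than `threshold` positions.
    public static boolean isSimilar(long fpA, long fpB, int threshold) {
        return hammingDistance(fpA, fpB) < threshold;
    }
}
```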
20115. And the terminal acquires the gray characteristic value range corresponding to the similar picture group.
For the plurality of reference face pictures included in the similar picture group, the terminal sorts the gray feature values corresponding to each reference face picture in the similar picture group in descending order to obtain the minimum gray feature value (fp_min) and the maximum gray feature value (fp_max), and obtains the gray feature value range corresponding to the similar picture group from fp_min and fp_max.
20116. And the terminal takes the gray characteristic value range as the gray characteristic value range corresponding to the sub-display area.
Because the reference face pictures which are included in the similar picture group are acquired by the front camera when the sight of the user stays on the sub-display area, the gray characteristic value corresponding to the similar picture group can be used as the gray characteristic value range corresponding to the sub-display area.
20117. And the terminal stores the corresponding relation between each sub-display area and the gray characteristic value range in the horizontal screen display state to obtain a human face picture database.
After each sub-display area acquires the corresponding gray characteristic value through the steps 20112-20116, the terminal stores the corresponding relation between each sub-display area and the gray characteristic value range to obtain a human face picture database corresponding to the horizontal screen display state. For the convenience of storage, the terminal can store the data in a Key-Value storage form, wherein a Key Value is a number of the sub-display area, and a Value is a gray characteristic Value range corresponding to the sub-display area. Referring to table 1, a storage form of data in the face picture database corresponding to the landscape display state is shown.
TABLE 1
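As an illustration of the Key-Value form described above, the correspondence could be held in a map keyed by sub-display-area number. The class and method names below are assumptions for the sketch, and gray feature values are compared as signed 64-bit longs for simplicity:

```java
import java.util.HashMap;
import java.util.Map;

public class FaceFeatureDatabase {
    // Inclusive gray-feature-value range [fpMin, fpMax] for one sub-display area.
    public static class FeatureRange {
        final long fpMin;
        final long fpMax;
        FeatureRange(long fpMin, long fpMax) { this.fpMin = fpMin; this.fpMax = fpMax; }
        boolean contains(long fp) { return fp >= fpMin && fp <= fpMax; }
    }

    // Key: sub-display area number; Value: its gray feature value range.
    private final Map<Integer, FeatureRange> ranges = new HashMap<>();

    public void put(int areaNumber, long fpMin, long fpMax) {
        ranges.put(areaNumber, new FeatureRange(fpMin, fpMax));
    }

    // Return the number of the target sub-display area whose range contains
    // the extracted feature value, or -1 if no area matches.
    public int lookupArea(long featureValue) {
        for (Map.Entry<Integer, FeatureRange> e : ranges.entrySet()) {
            if (e.getValue().contains(featureValue)) {
                return e.getKey();
            }
        }
        return -1;
    }
}
```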
In the horizontal screen state, for any sub-display area, the terminal records the acquisition time of each reference face picture during the period from when it detects that the user's sight line stays on the sub-display area to when it detects the user's touch operation on the sub-display area. For the similar picture group obtained in step 20114, the terminal may obtain the minimum acquisition time and the maximum acquisition time from the acquisition times corresponding to the similar picture group, and use the time interval between them as the preloading time of the sub-display area. The preloading time is thus the interval from when the user starts reading a title entry on the sub-display area to when the user clicks that title entry. After obtaining the preloading time of each sub-display area in this manner, the terminal can compute the average of the preloading times corresponding to the plurality of sub-display areas, and then use this average as the preset duration in the horizontal screen display state. For example, suppose the terminal divides the display area into 10 sub-display areas, and the acquired preloading times corresponding to the 10 sub-display areas are t1, t2, t3, t4, t5, t6, t7, t8, t9 and t10. From these, the terminal can obtain the average preloading time t = (t1 + t2 + t3 + t4 + t5 + t6 + t7 + t8 + t9 + t10)/10.
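The preset duration is just the arithmetic mean of the per-area preloading times; a short sketch (the method name is illustrative, and times are assumed to be in milliseconds):

```java
public class PreloadTiming {
    // Average of the preloading times (ms) recorded for each sub-display
    // area; used as the preset duration for the current display state.
    public static long presetDuration(long[] preloadTimesMs) {
        long sum = 0;
        for (long t : preloadTimesMs) {
            sum += t;
        }
        return sum / preloadTimesMs.length;
    }
}
```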
In order to improve the accuracy of the face picture database and the preset duration constructed for the horizontal screen display state, the terminal can collect reference face pictures of the user, together with the acquisition time of each picture, over multiple logins to the designated reading application. It can then construct the face picture database for the horizontal screen display state from the reference face pictures collected across these sessions, and calculate the preset duration in the horizontal screen display state from their acquisition times.
In the second case, the state is displayed for the vertical screen.
20121. In the vertical screen display state, the terminal divides the display area into a plurality of sub-display areas.
Generally, the display area on the terminal display interface is rectangular. In the vertical screen display state, straight lines parallel to the width of the rectangular display area are taken as dividing lines, so that the display area can be equally divided into a plurality of identical areas, each of which is a sub-display area. In the specific division, the display area may be divided into 6 sub-display areas or into 10 sub-display areas; this embodiment does not specifically limit the number of sub-display areas. Fig. 5 shows a schematic diagram of the display interface of the terminal in the vertical screen display state. Referring to fig. 5, a rectangular coordinate system may be established with the width of the rectangular display area as the X axis and its length as the Y axis, and the display area may then be divided into 10 sub-display areas by dividing lines parallel to the X axis.
In order to distinguish different sub-display areas, the terminal may set different numbers for different sub-display areas, for example, the number 1 may be set for the uppermost sub-display area according to the display position, and so on, and the number n may be set for the lowermost sub-display area.
20122. When it is detected that the user logs in to the designated reading application and the user's sight line stays on any sub-display area, a plurality of reference face pictures of the user are collected through the front camera until a touch operation by the user on the sub-display area is detected.
This step is implemented in the same way as step 20112; refer to step 20112 for details, which are not repeated here.
20123. And the terminal extracts the gray characteristic value of each reference face picture.
This step is implemented in the same way as step 20113; refer to step 20113 for details, which are not repeated here.
20124. And the terminal forms a similar picture group by the reference face pictures of which the Hamming distance between the gray characteristic values is smaller than a preset value.
This step is implemented in the same way as step 20114; refer to step 20114 for details, which are not repeated here.
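The grouping of step 20124 can be sketched as follows, assuming the gray characteristic values are 64-bit integers. The patent does not specify the exact clustering rule, so greedily comparing each value against the first member of every existing group is only one plausible reading:

```python
def hamming_distance(hash_a, hash_b):
    """Number of differing bits between two grayscale feature values."""
    return bin(hash_a ^ hash_b).count("1")

def group_similar(feature_values, preset_value):
    """Group feature values whose Hamming distance to the first member
    of an existing group is smaller than the preset value; values that
    match no group start a new group.
    """
    groups = []
    for value in feature_values:
        for group in groups:
            if hamming_distance(group[0], value) < preset_value:
                group.append(value)
                break
        else:
            groups.append([value])
    return groups
```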
20125. The terminal acquires the gray characteristic value range corresponding to the similar picture group.
This step is implemented in the same way as step 20115; refer to step 20115 for details, which are not repeated here.
20126. The terminal takes this gray characteristic value range as the gray characteristic value range corresponding to the sub-display area.
This step is implemented in the same way as step 20116; refer to step 20116 for details, which are not repeated here.
20127. The terminal stores the correspondence between each sub-display area and its gray characteristic value range in the vertical screen display state, obtaining a face picture database.
After the corresponding gray characteristic value range has been obtained for each sub-display area through steps 20122-20126, the terminal stores the correspondence between each sub-display area and its gray characteristic value range, obtaining the face picture database corresponding to the vertical screen display state. For convenience of storage, the terminal may also store the data in Key-Value form.
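The Key-Value storage form mentioned above might look like the following sketch, where the key is the sub-display area number and the value is its gray characteristic value range. Representing the range as the minimum and maximum observed feature values is an assumption made here for illustration; the embodiment does not prescribe how the range is encoded:

```python
# Hypothetical face picture database for the vertical screen display
# state: sub-display area number -> gray characteristic value range.
face_db_portrait = {}

def store_range(sub_area_no, feature_values):
    """Record the feature-value range for one sub-display area,
    here approximated as (min, max) of the observed values."""
    face_db_portrait[sub_area_no] = (min(feature_values), max(feature_values))

store_range(1, [0x0F0F, 0x0F1F, 0x0F2F])
```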
In the vertical screen state, for any sub-display area, the terminal records the acquisition time of each reference face picture collected between the moment it detects that the user's line of sight stays in the sub-display area and the moment it detects that the user touches the sub-display area. For the similar picture group obtained in step 20124, the terminal may take the minimum and maximum acquisition times from the acquisition times corresponding to the group and use the time interval between them as the preloading time of the sub-display area. After obtaining the preloading time of each sub-display area in this manner, the terminal may calculate the average of the preloading times of the plurality of sub-display areas and use that average as the preset duration in the vertical screen display state.
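The computation of the preset duration described above can be sketched as follows (times are in seconds; the function names are illustrative):

```python
def preload_time(acquisition_times):
    """Interval between the earliest and latest acquisition time of one
    similar picture group (the sub-display area's preloading time)."""
    return max(acquisition_times) - min(acquisition_times)

def preset_duration(times_per_sub_area):
    """Average the preloading times of the sub-display areas to obtain
    the preset duration for the current display state."""
    times = [preload_time(t) for t in times_per_sub_area]
    return sum(times) / len(times)

# Three sub-display areas whose gaze-to-touch capture windows lasted
# 0.8 s, 1.2 s and 1.0 s give a preset duration of 1.0 s.
duration = preset_duration([[0.0, 0.3, 0.8], [2.0, 2.5, 3.2], [5.0, 6.0]])
```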
To improve the accuracy of the face picture database and preset duration constructed for the vertical screen display state, the terminal may collect the user's reference face pictures, together with the acquisition time of each picture, over multiple logins to the designated reading application. It can then build the face picture database for the vertical screen display state from the reference face pictures collected across those logins, and calculate the preset duration in the vertical screen display state from their acquisition times.
It should be noted that the above description takes as an example the terminal constructing the face picture database for one user who logs in to the designated reading application; when other users log in to the designated reading application, the terminal likewise constructs a corresponding face picture database for each of them. In addition, so that different terminals can preload potentially read webpages for the user according to the user's habits when the user logs in on different terminals, after a terminal has constructed the user's face picture database and the preset durations for the different display states, it uploads them to the server of the designated reading application, which stores them.
202. While the main list of the designated reading application is displayed, the front-facing camera of the terminal collects the user's face picture in real time.
After the designated reading application is started, the terminal displays the application's main list on the display interface. While the main list is displayed, the terminal can collect the user's face picture in real time through the front-facing camera.
203. The terminal extracts the gray characteristic value of the face picture.
When extracting the gray characteristic value of the face picture, the terminal may adopt the following steps 2031-2035:
2031. The terminal reduces the face picture to a specified size.
Here the specified size is 8 x 8, i.e. 64 pixels.
2032. The terminal converts the face picture of the specified size into gray values with a specified number of levels.
Here the specified number of levels is 64.
2033. The terminal calculates the average of the resulting gray values.
2034. The terminal compares each gray value with the average gray value to obtain a specified number of comparison results.
2035. The terminal combines the specified number of comparison results into the gray characteristic value.
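Steps 2031-2035 amount to an average-hash ("aHash") computation. The following sketch assumes the caller has already reduced the picture to an 8 x 8 grid of gray values (step 2031) and quantized them to 64 levels (step 2032); the function name and the choice of ">= mean means bit 1" are illustrative assumptions:

```python
def gray_feature_value(pixels_8x8):
    """Compute the 64-bit gray characteristic value of an 8x8 grid of
    grayscale pixels: compare each pixel with the mean gray value
    (steps 2033-2034) and pack the 64 one-bit comparison results into
    a single integer (step 2035), most significant bit first.
    """
    flat = [p for row in pixels_8x8 for p in row]
    mean = sum(flat) / len(flat)
    feature = 0
    for p in flat:
        # Assumption: a pixel at least as bright as the mean yields bit 1.
        feature = (feature << 1) | (1 if p >= mean else 0)
    return feature
```

Because the result is a plain integer, the Hamming distance of step 20124 can be computed directly on two such values with an XOR and a popcount.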
204. The terminal acquires the target sub-display area corresponding to the gray characteristic value from the target face picture database according to the gray characteristic value and the current display state.
According to the current display state and the currently logged-in user, the terminal acquires, from that user's face picture databases, the target face picture database corresponding to the current display state. If the current display state is the horizontal screen display state, the face picture database corresponding to the horizontal screen display state is acquired as the target face picture database; if the current display state is the vertical screen display state, the face picture database corresponding to the vertical screen display state is acquired as the target face picture database.
Based on the acquired gray characteristic value, the terminal searches a gray characteristic value range where the gray characteristic value is located from a target face picture database, acquires a sub-display area corresponding to the gray characteristic value range, and further takes the sub-display area as a target sub-display area corresponding to the gray characteristic value.
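The range lookup of step 204 can be sketched as follows, assuming the target database maps each sub-display area number to a (low, high) feature-value range as stored earlier:

```python
def find_target_sub_area(feature_value, target_db):
    """Return the number of the sub-display area whose stored gray
    characteristic value range contains the extracted feature value,
    or None if no stored range matches.
    """
    for area_no, (low, high) in target_db.items():
        if low <= feature_value <= high:
            return area_no
    return None
```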
205. The terminal determines the target title item from the currently displayed main list according to the target sub-display area.
This process may be implemented through the following steps 2051-2053:
2051. The terminal acquires the coordinate range of the target sub-display area in the vertical direction.
In this embodiment, an operating system installed in a terminal is taken as an android operating system as an example. Under an android operating system, no matter what display state the terminal is in, each sub-display area corresponds to a coordinate range in the vertical direction. In order to facilitate the terminal to search the coordinate range corresponding to each sub-display area, the terminal may pre-store a coordinate database in which the corresponding relationship between each sub-display area and the coordinate range in the vertical direction is stored. Based on the constructed coordinate database, in the current display state, after the terminal acquires the target sub-display area through the step 204, the terminal may acquire the coordinate range of the target sub-display area in the vertical direction from the coordinate database.
2052. The terminal calls an Application Programming Interface (API) to acquire the title items within the coordinate range from the currently displayed main list.
Through the API provided by the ListView of the Android system, the terminal can call ListView#pointToPosition(x, y) to obtain the Index value corresponding to the given coordinate range, and can then obtain the title entry at the Index position from the currently displayed main list according to the code's MVC architecture.
2053. The terminal takes the title entries located in the coordinate range in the main list as target title entries.
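Steps 2051-2053 can be sketched as follows. The real Android API resolves a point to a list position via ListView#pointToPosition; this pure-Python stand-in assumes fixed-height list items and a known first visible index, simplifications that the real API does not make:

```python
def point_to_position(y, item_height_px, first_visible_index=0):
    """Stand-in for ListView#pointToPosition: map a vertical coordinate
    to the index of the list item drawn at that coordinate, assuming
    fixed-height items."""
    return first_visible_index + y // item_height_px

def target_title_items(coord_range, titles, item_height_px):
    """Collect the title entries whose rows intersect the target
    sub-display area's vertical coordinate range (steps 2052-2053)."""
    y_start, y_end = coord_range
    first = point_to_position(y_start, item_height_px)
    last = point_to_position(y_end - 1, item_height_px)
    return titles[first:last + 1]
```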
206. When it is detected that the duration for which the user's line of sight stays on the target title item exceeds the preset duration, the terminal preloads the webpage corresponding to the target title item.
When detecting that the user's line of sight stays on the target title item, the terminal starts recording the duration for which the user reads it. When that duration exceeds the preset duration for the current display state, the user is likely to click the target title item; therefore, to speed up the opening of the corresponding webpage and improve the user's reading experience, the terminal can preload the webpage corresponding to the target title item by downloading it to the terminal's local memory. The local memory is at least one of a volatile memory (e.g., RAM) and a nonvolatile memory (e.g., a hard disk). If a touch operation by the user on the target title item is then detected, the terminal obtains the webpage directly from the local memory instead of pulling it from the Internet and displays it, which increases the webpage display speed and improves the user's reading experience.
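The dwell-timing and preloading behavior of step 206 can be sketched as follows; fetch_page stands in for a hypothetical page-download function, and the now parameter exists only to make the sketch deterministic:

```python
import time

class Preloader:
    """Start a timer when the user's gaze lands on a title item and
    preload its webpage into a local cache once the dwell time exceeds
    the preset duration; serve touches from the cache when possible."""

    def __init__(self, preset_duration_s, fetch_page):
        self.preset = preset_duration_s
        self.fetch = fetch_page
        self.cache = {}          # local-memory page cache
        self.gaze_start = None
        self.current_item = None

    def on_gaze(self, item, url, now=None):
        now = time.monotonic() if now is None else now
        if item != self.current_item:
            # Gaze moved to a new title item: restart the dwell timer.
            self.current_item, self.gaze_start = item, now
        elif (now - self.gaze_start) > self.preset and item not in self.cache:
            self.cache[item] = self.fetch(url)   # preload once

    def on_touch(self, item, url):
        # Serve from the cache if preloaded, otherwise fetch on demand.
        return self.cache.get(item) or self.fetch(url)
```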
Fig. 6 is a schematic diagram of an implementation process of the webpage preloading method according to an embodiment of the present invention. Referring to Fig. 6, when the designated reading application is started and the user logs in, the terminal displays the application's main list. While the main list is displayed, the terminal collects the user's face picture through the front-facing camera and extracts the gray characteristic value from it. Since the current display state is the vertical screen display state, the terminal acquires the face picture database corresponding to the vertical screen display state from the logged-in user's face picture databases, looks up the gray characteristic value range in which the extracted value lies, acquires the sub-display area corresponding to that range, and takes it as the target sub-display area. The terminal then obtains the target title item located in the target sub-display area and, when it detects that the reading duration of the target title item exceeds the preset duration for the vertical screen display state, preloads the webpage corresponding to the target title item.
According to the method provided by the embodiment of the invention, the gray characteristic value is extracted from the face picture, the target title item is determined according to the gray characteristic value and the current display state, and then the webpage corresponding to the target title item is preloaded when the time length for reading the target title item by a user exceeds the preset time length. The title items are pre-loaded in a targeted manner in the process, so that data traffic is saved, a large number of unnecessary webpages are prevented from occupying the memory of the terminal, and the performance of the terminal is improved.
Referring to fig. 7, an embodiment of the present invention provides a web page preloading device, where the device includes:
the first acquisition module 701 is used for acquiring a face picture of a user in real time through a front camera in the display process of a specified reading application main list, wherein the main list comprises a plurality of title items, and each title item corresponds to a webpage;
a first extraction module 702, configured to extract a gray characteristic value of a face picture;
a first obtaining module 703, configured to obtain, according to the gray characteristic value and the current display state, a target sub-display region corresponding to the gray characteristic value from a target face image database, where a corresponding relationship between each sub-display region and a gray characteristic value range in the current display state is stored in the target face image database;
a determining module 704, configured to determine a target title entry from the currently displayed main list according to the target sub-display area;
the preloading module 705 is configured to preload a webpage corresponding to the target title item when it is detected that the duration that the user stays at the target title item exceeds a preset duration.
In another embodiment of the present invention, the first extraction module 702 is configured to reduce the face image to a specified size; converting the human face picture with the specified size into a gray value with the specified order of magnitude; calculating an average gray value of gray values of a specified number of levels; comparing the gray value of each level with the average gray value to obtain a specified number of comparison results; a specified number of comparison results are combined into a gray scale feature value.
In another embodiment of the present invention, the apparatus further comprises:
the sub-display area dividing module is used for dividing the display area of the terminal into a plurality of sub-display areas in any display state;
the second acquisition module is used for acquiring a plurality of reference face pictures of the user through the front camera when the fact that the user logs in the appointed reading application and the sight of the user stays in any sub-display area is detected until the touch operation of the user on the sub-display area is detected;
the second extraction module is used for extracting the gray characteristic value of each reference face picture;
the similar picture group composition module is used for composing a similar picture group by the reference face pictures of which the Hamming distance between the gray characteristic values is smaller than a preset value;
the second acquisition module is used for acquiring the gray characteristic value range corresponding to the similar picture group; taking the gray characteristic value range as a gray characteristic value range corresponding to the sub-display area;
and the storage module is used for storing the corresponding relation between each sub-display area and the gray characteristic value range in the display state to obtain a human face image database.
In another embodiment of the present invention, the apparatus further comprises:
the recording module is used for recording the acquisition time of each reference face picture;
the third acquisition module is used for acquiring the minimum acquisition time and the maximum acquisition time from the acquisition time corresponding to the similar picture group; taking the time interval between the minimum acquisition time and the maximum acquisition time as the preloading time of the sub-display area;
the fourth acquisition module is used for acquiring the average value of the preloading time corresponding to the plurality of sub-display areas; and taking the average value as a preset time length in a display state.
In another embodiment of the present invention, the determining module 704 is configured to obtain a coordinate range of the target sub-display area in the vertical direction; calling an Application Programming Interface (API), and acquiring a title item in a coordinate range from a currently displayed main list; and taking the title entries positioned in the coordinate range in the main list as target title entries.
In another embodiment of the present invention, the display states include a landscape display state and a portrait display state.
In summary, the apparatus provided in the embodiment of the present invention extracts the gray characteristic value from the face picture, determines the target title item according to the gray characteristic value and the current display state, and then preloads the webpage corresponding to the target title item when the time length for the user to read the target title item exceeds the preset time length. The title items are pre-loaded in a targeted manner in the process, so that data traffic is saved, a large number of unnecessary webpages are prevented from occupying the memory of the terminal, and the performance of the terminal is improved.
Referring to fig. 8, a schematic structural diagram of a web page preloading terminal according to an embodiment of the present invention is shown, where the terminal may be used to implement the web page preloading method provided in the foregoing embodiment. Specifically, the method comprises the following steps:
the terminal 800 may include components such as an RF (Radio Frequency) circuit 110, a memory 120 including one or more computer-readable storage media, an input unit 130, a display unit 140, a sensor 150, an audio circuit 160, a WiFi (Wireless Fidelity) module 170, a processor 180 including one or more processing cores, and a power supply 190. Those skilled in the art will appreciate that the terminal structure shown in fig. 8 is not intended to be limiting and may include more or fewer components than those shown, or some components may be combined, or a different arrangement of components. Wherein:
the RF circuit 110 may be used for receiving and transmitting signals during information transmission and reception or during a call, and in particular, receives downlink information from a base station and then sends the received downlink information to the one or more processors 180 for processing; in addition, data relating to uplink is transmitted to the base station. In general, the RF circuitry 110 includes, but is not limited to, an antenna, at least one Amplifier, a tuner, one or more oscillators, a Subscriber Identity Module (SIM) card, a transceiver, a coupler, an LNA (Low Noise Amplifier), a duplexer, and the like. In addition, the RF circuitry 110 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to GSM (Global System for Mobile communications), GPRS (General Packet Radio Service), CDMA (Code Division Multiple Access), WCDMA (Wideband Code Division Multiple Access), LTE (Long Term Evolution), e-mail, SMS (short messaging Service), etc.
The memory 120 may be used to store software programs and modules, and the processor 180 executes various functional applications and data processing by operating the software programs and modules stored in the memory 120. The memory 120 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required by at least one function (such as a sound playing function, an image playing function, etc.), and the like; the storage data area may store data (such as audio data, a phonebook, etc.) created according to the use of the terminal 800, and the like. Further, the memory 120 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other volatile solid state storage device. Accordingly, the memory 120 may further include a memory controller to provide the processor 180 and the input unit 130 with access to the memory 120.
The input unit 130 may be used to receive input numeric or character information and generate keyboard, mouse, joystick, optical or trackball signal inputs related to user settings and function control. In particular, the input unit 130 may include a touch-sensitive surface 131 as well as other input devices 132. The touch-sensitive surface 131, also referred to as a touch display screen or a touch pad, may collect touch operations by a user on or near the touch-sensitive surface 131 (e.g., operations by a user on or near the touch-sensitive surface 131 using a finger, a stylus, or any other suitable object or attachment), and drive the corresponding connection device according to a predetermined program. Alternatively, the touch sensitive surface 131 may comprise two parts, a touch detection means and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, sends the touch point coordinates to the processor 180, and can receive and execute commands sent by the processor 180. Additionally, the touch-sensitive surface 131 may be implemented using various types of resistive, capacitive, infrared, and surface acoustic waves. In addition to the touch-sensitive surface 131, the input unit 130 may also include other input devices 132. In particular, other input devices 132 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 140 may be used to display information input by or provided to a user and various graphical user interfaces of the terminal 800, which may be made up of graphics, text, icons, video, and any combination thereof. The Display unit 140 may include a Display panel 141, and optionally, the Display panel 141 may be configured in the form of an LCD (Liquid Crystal Display), an OLED (Organic Light-Emitting Diode), or the like. Further, the touch-sensitive surface 131 may cover the display panel 141, and when a touch operation is detected on or near the touch-sensitive surface 131, the touch operation is transmitted to the processor 180 to determine the type of the touch event, and then the processor 180 provides a corresponding visual output on the display panel 141 according to the type of the touch event. Although in FIG. 8, touch-sensitive surface 131 and display panel 141 are shown as two separate components to implement input and output functions, in some embodiments, touch-sensitive surface 131 may be integrated with display panel 141 to implement input and output functions.
The terminal 800 can also include at least one sensor 150, such as a light sensor, motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor that may adjust the brightness of the display panel 141 according to the brightness of ambient light, and a proximity sensor that may turn off the display panel 141 and/or a backlight when the terminal 800 is moved to the ear. As one of the motion sensors, the gravity acceleration sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when the mobile phone is stationary, and can be used for applications of recognizing the posture of the mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the terminal 800, further description is omitted here.
WiFi belongs to a short-distance wireless transmission technology, and the terminal 800 can help a user send and receive e-mails, browse web pages, access streaming media, and the like through the WiFi module 170, and provides wireless broadband internet access for the user. Although fig. 8 shows the WiFi module 170, it is understood that it does not belong to the essential constitution of the terminal 800, and may be omitted entirely as needed within the scope not changing the essence of the invention.
The processor 180 is a control center of the terminal 800, connects various parts of the entire mobile phone using various interfaces and lines, and performs various functions of the terminal 800 and processes data by operating or executing software programs and/or modules stored in the memory 120 and calling data stored in the memory 120, thereby performing overall monitoring of the mobile phone. Optionally, processor 180 may include one or more processing cores; optionally, the processor 180 may integrate an application processor and a modem processor, wherein the application processor mainly handles operating systems, user interfaces, application programs, and the like, and the modem processor mainly handles wireless communications. It will be appreciated that the modem processor described above may not be integrated into the processor 180.
The terminal 800 further includes a power supply 190 (e.g., a battery) for powering the various components, which may preferably be logically coupled to the processor 180 via a power management system to manage charging, discharging, and power consumption management functions via the power management system. The power supply 190 may also include any component including one or more of a dc or ac power source, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and the like.
Although not shown, the terminal 800 may further include a camera, a bluetooth module, etc., which will not be described herein. In this embodiment, the display unit of the terminal 800 is a touch screen display, and the terminal 800 further includes a memory and one or more programs, wherein the one or more programs are stored in the memory and configured to be executed by the one or more processors. The one or more programs include instructions for:
in the display process of an appointed reading application main list, acquiring a face picture of a user in real time through a front-facing camera, wherein the main list comprises a plurality of title items, and each title item corresponds to a webpage;
extracting a gray characteristic value of the face picture;
acquiring a target sub-display area corresponding to the gray characteristic value from a target face picture database according to the gray characteristic value and the current display state, wherein the target face picture database stores the corresponding relation between each sub-display area and the gray characteristic value range in the current display state;
determining a target title item from a currently displayed main list according to the target sub-display area;
and preloading the webpage corresponding to the target title item when the fact that the duration that the sight of the user stays in the target title item exceeds the preset duration is detected.
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
extracting the gray characteristic value of the face picture, comprising the following steps:
reducing the face picture to a specified size;
converting the human face picture with the specified size into a gray value with the specified order of magnitude;
calculating an average gray value of gray values of a specified number of levels;
comparing the gray value of each level with the average gray value to obtain a specified number of comparison results;
a specified number of comparison results are combined into a gray scale feature value.
Assuming that the above is the second possible implementation manner, in a third possible implementation manner provided on the basis of the second possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
before the target sub-display area corresponding to the gray characteristic value is obtained from the target face image database according to the gray characteristic value and the current display state, the method further comprises the following steps:
in any display state, dividing a display area of the terminal into a plurality of sub-display areas;
when the fact that a user logs in a designated reading application and stays in any sub-display area is detected, a front camera is used for collecting a plurality of reference face pictures of the user until touch control operation of the user on the sub-display area is detected;
extracting a gray characteristic value of each reference face picture;
forming a similar picture group by the reference face pictures of which the Hamming distance between the gray characteristic values is smaller than a preset value;
acquiring a gray characteristic value range corresponding to the similar picture group;
taking the gray characteristic value range as a gray characteristic value range corresponding to the sub-display area;
and storing the corresponding relation between each sub-display area and the gray characteristic value range in the display state to obtain a human face image database.
Assuming that the above is the third possible implementation manner, in a fourth possible implementation manner provided on the basis of the third possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
recording the acquisition time of each reference face picture;
acquiring minimum acquisition time and maximum acquisition time from acquisition time corresponding to the similar picture group;
taking the time interval between the minimum acquisition time and the maximum acquisition time as the preloading time of the sub-display area;
acquiring the average value of the preloading time corresponding to the sub-display areas;
and taking the average value as a preset time length in a display state.
Assuming that the above is the fourth possible implementation manner, in a fifth possible implementation manner provided on the basis of the fourth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
determining a target title entry from the currently displayed main list according to the target sub-display area, including:
acquiring a coordinate range of a target sub-display area in the vertical direction;
calling an Application Programming Interface (API), and acquiring a title item in a coordinate range from a currently displayed main list;
and taking the title entries positioned in the coordinate range in the main list as target title entries.
Assuming that the above is the fifth possible implementation manner, in a sixth possible implementation manner provided on the basis of the fifth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the display state comprises a horizontal screen display state and a vertical screen display state.
The terminal provided by the embodiment of the invention extracts the gray characteristic value from the face picture, determines the target title item according to the gray characteristic value and the current display state, and then preloads the webpage corresponding to the target title item when the time for reading the target title item by a user exceeds the preset time. The title items are pre-loaded in a targeted manner in the process, so that data traffic is saved, a large number of unnecessary webpages are prevented from occupying the memory of the terminal, and the performance of the terminal is improved.
An embodiment of the present invention further provides a computer-readable storage medium, where the computer-readable storage medium may be a computer-readable storage medium contained in the memory in the foregoing embodiment; or it may be a separate computer-readable storage medium not incorporated in the terminal. The computer-readable storage medium stores one or more programs, the one or more programs being used by one or more processors to perform a method of web preloading, the method comprising:
in the display process of an appointed reading application main list, acquiring a face picture of a user in real time through a front-facing camera, wherein the main list comprises a plurality of title items, and each title item corresponds to a webpage;
extracting a gray characteristic value of the face picture;
acquiring a target sub-display area corresponding to the gray characteristic value from a target face picture database according to the gray characteristic value and the current display state, wherein the target face picture database stores the corresponding relation between each sub-display area and the gray characteristic value range in the current display state;
determining a target title item from a currently displayed main list according to the target sub-display area;
and preloading the webpage corresponding to the target title item when it is detected that the duration for which the user's line of sight stays on the target title item exceeds the preset duration.
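The trigger condition in the last step (gaze dwell exceeding the preset duration) can be sketched as a small pure function. All names and the plain numeric timestamps are illustrative assumptions; the patent does not prescribe an implementation.

```python
def should_preload(gaze_area, target_area, dwell_start, preset_duration, now):
    """Return True once the user's gaze has stayed on the target
    sub-display area for longer than the preset duration. This is only
    the trigger condition; gaze mapping and the actual page fetch are
    outside the scope of this sketch."""
    return gaze_area == target_area and (now - dwell_start) > preset_duration

# Gaze has been on area_2 for 2.0 s, preset duration is 1.5 s: preload.
print(should_preload("area_2", "area_2", 10.0, 1.5, 12.0))  # -> True
# Gaze is on a different area: do not preload.
print(should_preload("area_1", "area_2", 10.0, 1.5, 12.0))  # -> False
```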
Assuming that the above is the first possible implementation manner, in a second possible implementation manner provided on the basis of the first possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
extracting the gray characteristic value of the face picture, comprising the following steps:
reducing the face picture to a specified size;
converting the face picture of the specified size into a specified number of gray values;
calculating the average of the specified number of gray values;
comparing each gray value with the average gray value to obtain a specified number of comparison results;
and combining the specified number of comparison results into the gray characteristic value.
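These extraction steps match the well-known average-hash (aHash) scheme. A minimal Python sketch of the average-and-compare stage follows; it assumes the picture has already been reduced and converted to a flat list of gray values, and the function name is illustrative.

```python
def gray_feature_value(pixels):
    """Compute a binary gray feature value from an already-reduced
    grayscale picture, following the steps in the text: average the
    gray values, compare each value with the average, and combine the
    comparison results into one bit string.

    `pixels` is a flat list of gray values (e.g. 64 values for an 8x8
    reduced picture); the size reduction and grayscale conversion are
    assumed to have happened upstream."""
    avg = sum(pixels) / len(pixels)
    # One comparison result per gray value: 1 if >= average, else 0.
    bits = [1 if p >= avg else 0 for p in pixels]
    # Combine the comparison results into a single feature value.
    return "".join(str(b) for b in bits)

# Example: a tiny 2x2 "picture" flattened to 4 gray values (avg = 115).
print(gray_feature_value([10, 200, 30, 220]))  # -> "0101"
```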
Assuming that the above is the second possible implementation manner, in a third possible implementation manner provided on the basis of the second possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
before the target sub-display area corresponding to the gray characteristic value is obtained from the target face image database according to the gray characteristic value and the current display state, the method further comprises the following steps:
in any display state, dividing a display area of the terminal into a plurality of sub-display areas;
when it is detected that the user has logged in to the designated reading application and the user's line of sight stays in any sub-display area, acquiring a plurality of reference face pictures of the user through the front camera until a touch operation of the user on the sub-display area is detected;
extracting a gray characteristic value of each reference face picture;
forming a similar picture group from the reference face pictures whose gray characteristic values have a Hamming distance smaller than a preset value;
acquiring a gray characteristic value range corresponding to the similar picture group;
taking the gray characteristic value range as a gray characteristic value range corresponding to the sub-display area;
and storing the corresponding relation between each sub-display area and the gray characteristic value range in the display state to obtain a human face image database.
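The grouping step above compares gray characteristic values by Hamming distance. The sketch below illustrates this with bit-string features; the single-seed grouping rule is a simplification, since the patent does not fix a clustering strategy.

```python
def hamming(a, b):
    """Hamming distance between two equal-length bit-string features."""
    return sum(x != y for x, y in zip(a, b))

def similar_group(features, threshold):
    """Collect the reference features whose Hamming distance to the
    first feature is below `threshold`. Grouping around a single seed
    is an illustrative simplification of "pictures whose pairwise
    distances are smaller than a preset value"."""
    seed = features[0]
    return [f for f in features if hamming(seed, f) < threshold]

feats = ["0101", "0111", "1010"]
# "0111" is 1 bit away from "0101"; "1010" is 4 bits away.
print(similar_group(feats, 2))  # -> ['0101', '0111']
```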
Assuming that the above is the third possible implementation manner, in a fourth possible implementation manner provided on the basis of the third possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
recording the acquisition time of each reference face picture;
acquiring minimum acquisition time and maximum acquisition time from acquisition time corresponding to the similar picture group;
taking the time interval between the minimum acquisition time and the maximum acquisition time as the preloading time of the sub-display area;
acquiring the average value of the preloading time corresponding to the sub-display areas;
and taking the average value as the preset duration in the display state.
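The preset-duration computation above can be sketched as follows; the dictionary keys and function name are illustrative, and capture times are plain numeric timestamps for simplicity.

```python
def preset_duration(capture_times_by_area):
    """Derive the preset duration for one display state, following the
    text: per sub-display area, the preloading time is the interval
    between the earliest and latest capture times of its similar
    picture group; the preset duration is the mean of those intervals.

    `capture_times_by_area` maps each sub-display area to the capture
    timestamps (in seconds) of its similar picture group."""
    intervals = [max(ts) - min(ts) for ts in capture_times_by_area.values()]
    return sum(intervals) / len(intervals)

capture_times = {
    "area_1": [0.0, 0.4, 1.0],   # interval 1.0 s
    "area_2": [5.0, 5.2, 7.0],   # interval 2.0 s
}
print(preset_duration(capture_times))  # -> 1.5
```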
Assuming that the above is the fourth possible implementation manner, in a fifth possible implementation manner provided on the basis of the fourth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
determining a target title item from the currently displayed main list according to the target sub-display area, including:
acquiring a coordinate range of the target sub-display area in the vertical direction;
calling an Application Programming Interface (API) to acquire title items within the coordinate range from the currently displayed main list;
and taking the title items in the main list that fall within the coordinate range as the target title items.
Assuming that the above is the fifth possible implementation manner, in a sixth possible implementation manner provided on the basis of the fifth possible implementation manner, the memory of the terminal further includes instructions for performing the following operations:
the display state comprises a horizontal screen display state and a vertical screen display state.
The computer-readable storage medium provided by the embodiment of the invention extracts the gray characteristic value from the face picture, determines the target title item according to the gray characteristic value and the current display state, and then preloads the webpage corresponding to the target title item when the time the user spends reading the target title item exceeds the preset duration. Because title items are preloaded in a targeted manner, data traffic is saved, a large number of unnecessary webpages are prevented from occupying the memory of the terminal, and the performance of the terminal is improved.
The embodiment of the invention provides a graphical user interface, which is used on a webpage preloading terminal, wherein the webpage preloading terminal comprises a touch screen display, a memory and one or more processors for executing one or more programs; the graphical user interface includes:
in the display process of an appointed reading application main list, acquiring a face picture of a user in real time through a front-facing camera, wherein the main list comprises a plurality of title items, and each title item corresponds to a webpage;
extracting a gray characteristic value of the face picture;
acquiring a target sub-display area corresponding to the gray characteristic value from a target face picture database according to the gray characteristic value and the current display state, wherein the target face picture database stores the corresponding relation between each sub-display area and the gray characteristic value range in the current display state;
determining a target title item from a currently displayed main list according to the target sub-display area;
and preloading the webpage corresponding to the target title item when it is detected that the duration for which the user's line of sight stays on the target title item exceeds the preset duration.
The graphical user interface provided by the embodiment of the invention extracts the gray characteristic value from the face picture, determines the target title item according to the gray characteristic value and the current display state, and then preloads the webpage corresponding to the target title item when the time the user spends reading the target title item exceeds the preset duration. Because title items are preloaded in a targeted manner, data traffic is saved, a large number of unnecessary webpages are prevented from occupying the memory of the terminal, and the performance of the terminal is improved.
It should be noted that when the web page preloading device provided in the above embodiment preloads a web page, the division into the above functional modules is used only as an example. In practical applications, the above functions may be allocated to different functional modules as needed; that is, the internal structure of the web page preloading device may be divided into different functional modules to complete all or part of the functions described above. In addition, the web page preloading device provided by the above embodiment and the web page preloading method embodiment belong to the same concept; the specific implementation process thereof is detailed in the method embodiment and is not described herein again.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program instructing relevant hardware, where the program may be stored in a computer-readable storage medium, and the above-mentioned storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like that fall within the spirit and principle of the present invention are intended to be included therein.
Claims (14)
1. A method for preloading a webpage is characterized by comprising the following steps:
in the display process of an appointed reading application main list, acquiring a face picture of a user in real time through a front camera, wherein the main list comprises a plurality of title items, and each title item corresponds to a webpage;
extracting a gray characteristic value of the face picture;
acquiring a target sub-display area corresponding to the gray characteristic value from a target face picture database according to the gray characteristic value and the current display state, wherein the target face picture database stores the corresponding relation between each sub-display area and the gray characteristic value range in the current display state;
according to the target sub-display area, determining a target title item from the currently displayed main list;
and preloading the webpage corresponding to the target title item when it is detected that the duration for which the user's line of sight stays on the target title item exceeds the preset duration.
2. The method according to claim 1, wherein the extracting the gray-scale feature value of the face picture comprises:
reducing the face picture to a specified size;
converting the face picture of the specified size into a specified number of gray values;
calculating the average of the specified number of gray values;
comparing each gray value with the average gray value to obtain the specified number of comparison results;
and combining the specified number of comparison results into the gray characteristic value.
3. The method according to claim 1, wherein before obtaining the target sub-display region corresponding to the gray feature value from the target face image database according to the gray feature value and the current display state, the method further comprises:
in any display state, dividing a display area of the terminal into a plurality of sub-display areas;
when it is detected that the user has logged in to the appointed reading application and the user's line of sight stays in any sub-display area, acquiring a plurality of reference face pictures of the user through the front camera until a touch operation of the user on the sub-display area is detected;
extracting a gray characteristic value of each reference face picture;
forming a similar picture group from the reference face pictures whose gray characteristic values have a Hamming distance smaller than a preset value;
acquiring a gray characteristic value range corresponding to the similar picture group;
taking the gray characteristic value range as a gray characteristic value range corresponding to the sub-display area;
and storing the corresponding relation between each sub-display area and the gray characteristic value range in the display state to obtain a human face image database.
4. The method of claim 3, further comprising:
recording the acquisition time of each reference face picture;
acquiring minimum acquisition time and maximum acquisition time from acquisition time corresponding to the similar picture group;
taking a time interval between the minimum acquisition time and the maximum acquisition time as a pre-loading time of the sub-display area;
acquiring the average value of the preloading time corresponding to the sub-display areas;
and taking the average value as the preset duration in the display state.
5. The method of claim 1, wherein determining a target title item from the currently displayed main list according to the target sub-display area comprises:
acquiring a coordinate range of the target sub-display area in the vertical direction;
calling an Application Programming Interface (API) to acquire a title item in the coordinate range from the currently displayed main list;
and taking the title item in the main list, which is positioned in the coordinate range, as the target title item.
6. The method of claim 1, wherein the display state comprises a landscape display state and a portrait display state.
7. An apparatus for preloading web pages, the apparatus comprising:
the system comprises a first acquisition module, a second acquisition module and a third acquisition module, wherein the first acquisition module is used for acquiring a face picture of a user in real time through a front camera in the display process of an appointed reading application main list, the main list comprises a plurality of title items, and each title item corresponds to a webpage;
the first extraction module is used for extracting the gray characteristic value of the face picture;
the first obtaining module is used for acquiring a target sub-display area corresponding to the gray characteristic value from a target face picture database according to the gray characteristic value and the current display state, wherein the target face picture database stores the corresponding relation between each sub-display area and the gray characteristic value range in the current display state;
a determining module, configured to determine a target title item from the currently displayed main list according to the target sub-display area;
and the preloading module is used for preloading the webpage corresponding to the target title item when it is detected that the duration for which the user's line of sight stays on the target title item exceeds the preset duration.
8. The apparatus of claim 7, wherein the first extraction module is configured to reduce the face picture to a specified size; convert the face picture of the specified size into a specified number of gray values; calculate the average of the specified number of gray values; compare each gray value with the average gray value to obtain a specified number of comparison results; and combine the specified number of comparison results into the gray characteristic value.
9. The apparatus of claim 7, further comprising:
the sub-display area dividing module is used for dividing the display area of the terminal into a plurality of sub-display areas in any display state;
the second acquisition module is used for acquiring a plurality of reference face pictures of the user through the front camera when it is detected that the user has logged in to the appointed reading application and the user's line of sight stays in any sub-display area, until a touch operation of the user on the sub-display area is detected;
the second extraction module is used for extracting the gray characteristic value of each reference face picture;
the similar picture group composition module is used for forming a similar picture group from the reference face pictures whose gray characteristic values have a Hamming distance smaller than a preset value;
the second obtaining module is used for acquiring a gray characteristic value range corresponding to the similar picture group, and taking the gray characteristic value range as the gray characteristic value range corresponding to the sub-display area;
and the storage module is used for storing the corresponding relation between each sub-display area and the gray characteristic value range in the display state to obtain a human face image database.
10. The apparatus of claim 9, further comprising:
the recording module is used for recording the acquisition time of each reference face picture;
a third obtaining module, configured to obtain a minimum collecting time and a maximum collecting time from the collecting times corresponding to the similar picture groups; taking a time interval between the minimum acquisition time and the maximum acquisition time as a pre-loading time of the sub-display area;
the fourth obtaining module is used for acquiring the average value of the preloading times corresponding to the plurality of sub-display areas, and taking the average value as the preset duration in the display state.
11. The apparatus according to claim 7, wherein the determining module is configured to obtain a coordinate range of the target sub-display area in a vertical direction; calling an Application Programming Interface (API) to acquire a title item in the coordinate range from the currently displayed main list; and taking the title item in the main list, which is positioned in the coordinate range, as the target title item.
12. The apparatus of claim 7, wherein the display state comprises a landscape display state and a portrait display state.
13. A terminal, characterized in that the terminal comprises a processor and a memory, wherein at least one program code is stored in the memory, and the at least one program code is loaded and executed by the processor to realize the webpage preloading method as recited in any one of claims 1 to 6.
14. A computer-readable storage medium having stored therein at least one program code, the at least one program code being loaded and executed by a processor to implement the method for preloading web pages as claimed in any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610566743.4A CN107632985B (en) | 2016-07-18 | 2016-07-18 | Webpage preloading method and device |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610566743.4A CN107632985B (en) | 2016-07-18 | 2016-07-18 | Webpage preloading method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN107632985A CN107632985A (en) | 2018-01-26 |
CN107632985B true CN107632985B (en) | 2020-09-01 |
Family
ID=61112296
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610566743.4A Active CN107632985B (en) | 2016-07-18 | 2016-07-18 | Webpage preloading method and device |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN107632985B (en) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109918602B (en) * | 2019-02-26 | 2021-04-30 | 南威软件股份有限公司 | Web data preloading method and system |
CN110262659B (en) * | 2019-06-18 | 2022-03-15 | Oppo广东移动通信有限公司 | Application control method and related device |
CN110941473B (en) * | 2019-11-27 | 2023-10-24 | 维沃移动通信有限公司 | Preloading method, preloading device, electronic equipment and medium |
CN111597480A (en) * | 2020-04-27 | 2020-08-28 | 中国平安财产保险股份有限公司 | Webpage resource preloading method and device, computer equipment and storage medium |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102663012A (en) * | 2012-03-20 | 2012-09-12 | 北京搜狗信息服务有限公司 | Webpage preloading method and system |
CN103729439A (en) * | 2013-12-30 | 2014-04-16 | 优视科技有限公司 | Method and device for preloading webpage |
CN103838745A (en) * | 2012-11-22 | 2014-06-04 | 腾讯科技(深圳)有限公司 | Processing method and device of webpage pre-reading |
US8903950B2 (en) * | 2000-05-05 | 2014-12-02 | Citrix Systems, Inc. | Personalized content delivery using peer-to-peer precaching |
CN105095227A (en) * | 2014-04-28 | 2015-11-25 | 小米科技有限责任公司 | Method and apparatus for preloading webpage |
CN105550356A (en) * | 2015-12-28 | 2016-05-04 | 魅族科技(中国)有限公司 | Preloading method of browsed contents, and terminal |
CN105573489A (en) * | 2014-11-03 | 2016-05-11 | 三星电子株式会社 | Electronic device and method for controlling external object |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6584498B2 (en) * | 1996-09-13 | 2003-06-24 | Planet Web, Inc. | Dynamic preloading of web pages |
US9448977B2 (en) * | 2012-08-24 | 2016-09-20 | Qualcomm Innovation Center, Inc. | Website blueprint generation and display algorithms to reduce perceived web-page loading time |
- 2016-07-18 CN CN201610566743.4A patent/CN107632985B/en active Active
Patent Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8903950B2 (en) * | 2000-05-05 | 2014-12-02 | Citrix Systems, Inc. | Personalized content delivery using peer-to-peer precaching |
CN102663012A (en) * | 2012-03-20 | 2012-09-12 | 北京搜狗信息服务有限公司 | Webpage preloading method and system |
CN102663012B (en) * | 2012-03-20 | 2017-07-04 | 北京搜狗信息服务有限公司 | A kind of webpage preloads method and system |
CN103838745A (en) * | 2012-11-22 | 2014-06-04 | 腾讯科技(深圳)有限公司 | Processing method and device of webpage pre-reading |
CN103729439A (en) * | 2013-12-30 | 2014-04-16 | 优视科技有限公司 | Method and device for preloading webpage |
CN105095227A (en) * | 2014-04-28 | 2015-11-25 | 小米科技有限责任公司 | Method and apparatus for preloading webpage |
CN105573489A (en) * | 2014-11-03 | 2016-05-11 | 三星电子株式会社 | Electronic device and method for controlling external object |
CN105550356A (en) * | 2015-12-28 | 2016-05-04 | 魅族科技(中国)有限公司 | Preloading method of browsed contents, and terminal |
Also Published As
Publication number | Publication date |
---|---|
CN107632985A (en) | 2018-01-26 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN111984165B (en) | Method and device for displaying message and terminal equipment | |
CN108885614B (en) | Text and voice information processing method and terminal | |
US9697622B2 (en) | Interface adjustment method, apparatus, and terminal | |
CN105786878B (en) | Display method and device of browsing object | |
CN105867751B (en) | Operation information processing method and device | |
CN104852885B (en) | Method, device and system for verifying verification code | |
CN108205398B (en) | Method and device for adapting webpage animation to screen | |
US20140365892A1 (en) | Method, apparatus and computer readable storage medium for displaying video preview picture | |
CN108156508B (en) | Barrage information processing method and device, mobile terminal, server and system | |
CN106203459B (en) | Picture processing method and device | |
CN108984066B (en) | Application icon display method and mobile terminal | |
US20170109756A1 (en) | User Unsubscription Prediction Method and Apparatus | |
EP3242447A1 (en) | Information recommendation management method, device and system | |
WO2014194713A1 (en) | Method,apparatus and computer readable storage medium for displaying video preview picture | |
US20160292946A1 (en) | Method and apparatus for collecting statistics on network information | |
CN104571979A (en) | Method and device for realizing split-screen views | |
CN107632985B (en) | Webpage preloading method and device | |
CN109753202B (en) | Screen capturing method and mobile terminal | |
CN115390707A (en) | Sharing processing method and device, electronic equipment and storage medium | |
US20210099566A1 (en) | Dialing method and mobile terminal | |
CN110767950B (en) | Charging method, charging device, terminal equipment and computer readable storage medium | |
CN105513098B (en) | Image processing method and device | |
EP3511840A1 (en) | Data processing method, electronic device, and computer-readable storage medium | |
CN107967086B (en) | Icon arrangement method and device for mobile terminal and mobile terminal | |
CN106934003B (en) | File processing method and mobile terminal |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||