CN108182457A - Method and apparatus for generating information - Google Patents
Method and apparatus for generating information
- Publication number
- CN108182457A CN108182457A CN201810088070.5A CN201810088070A CN108182457A CN 108182457 A CN108182457 A CN 108182457A CN 201810088070 A CN201810088070 A CN 201810088070A CN 108182457 A CN108182457 A CN 108182457A
- Authority
- CN
- China
- Prior art keywords
- image
- matching
- vocabulary
- point
- area
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/74—Image or video pattern matching; Proximity measures in feature spaces
- G06V10/75—Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; Coarse-fine approaches, e.g. multi-scale approaches; using context analysis; Selection of dictionaries
- G06V10/751—Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/22—Matching criteria, e.g. proximity measures
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Evolutionary Computation (AREA)
- Data Mining & Analysis (AREA)
- Artificial Intelligence (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Databases & Information Systems (AREA)
- General Health & Medical Sciences (AREA)
- Medical Informatics (AREA)
- Software Systems (AREA)
- Multimedia (AREA)
- Image Analysis (AREA)
Abstract
Embodiments of the present application disclose a method and apparatus for generating information. One specific embodiment of the method includes: matching each first feature point in a first feature point set against each second feature point in a second feature point set to obtain a set of matched feature point pairs; determining the density with which the second feature points contained in the set of matched feature point pairs are distributed over image regions of a second image, and determining the image region corresponding to the greatest of the determined densities as a matched-feature-point dense region; determining the second feature points contained in the dense region; determining, as a corrected set of matched feature point pairs, the subset of matched feature point pairs that contain the determined second feature points; and generating a first matching result for a target pixel point based on the corrected set of matched feature point pairs and the target pixel point. This embodiment improves the accuracy of the matching result determined for the target pixel point.
Description
Technical field
Embodiments of the present application relate to the field of computer technology, specifically to the field of Internet technology, and more particularly to a method and apparatus for generating information.
Background
In the prior art, when performing compatibility tests of a website on different devices, each device generally must be tested individually. Alternatively, based on the position of a target point of a website page on the screen of one device, the position of the corresponding point of that target point on the screens of other devices can be determined, so that the compatibility test can be carried out with an automated testing tool or program.

At present, methods for determining the corresponding point of a target point on another device's screen mainly include the following two: one positions the point based on page control attributes (such as id, name, etc.); the other matches it based on a single kind of image feature, for example image features obtained by the Scale-Invariant Feature Transform (SIFT).
Summary of the invention
Embodiments of the present application propose a method and apparatus for generating information.
In a first aspect, an embodiment of the present application provides a method for generating information, the method including: matching each first feature point in a first feature point set against each second feature point in a second feature point set to obtain a set of matched feature point pairs, where a first feature point is a feature point of a target image region contained in a first image, the target image region contains a target pixel point, and a second feature point is a feature point of a second image; determining the density with which the second feature points contained in the set of matched feature point pairs are distributed over image regions of the second image, and determining the image region corresponding to the greatest of the determined densities as a matched-feature-point dense region; determining the second feature points contained in the dense region; determining, as a corrected set of matched feature point pairs, the subset of matched feature point pairs that contain the determined second feature points; and generating a first matching result for the target pixel point based on the corrected set of matched feature point pairs and the target pixel point.
In some embodiments, the method further includes: taking each pixel in a neighborhood of the target pixel point in the first image as a first seed pixel and, using a region growing algorithm, determining as a first region the grown region of each first seed pixel that satisfies preset screening conditions; taking each pixel in the second image as a second seed pixel and, using the region growing algorithm, determining as a second region the grown region of each second seed pixel that satisfies the preset screening conditions; determining a second region that satisfies at least one of the following matching conditions as a second region matching the first region: the difference between the fill ratio of the second region and the fill ratio of the first region is less than a preset fill-ratio threshold; the difference between the aspect ratio of the second region and the aspect ratio of the first region is less than a preset aspect-ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; determining the combination of the first region and the matching second region as a matched region pair; and generating a second matching result for the target pixel point based on the matched region pair and the target pixel point.
In some embodiments, the preset screening conditions include at least one of the following: the product of a first preset ratio, the height of the first image, and the width of the first image is less than the number of pixels in the grown region; the width of the grown region is less than the product of the width of the first image and a second preset ratio; the height of the grown region is less than the product of the height of the first image and the second preset ratio.
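As an illustrative sketch only (not the patented implementation), region growing from a seed pixel followed by the preset screening conditions above might look like the following; the intensity tolerance `tol` and the two preset ratios `r1`, `r2` are assumed values:

```python
from collections import deque

def grow_region(img, seed, tol=10):
    """4-connected region growing: collect pixels whose value differs
    from the seed pixel's value by at most `tol`."""
    h, w = len(img), len(img[0])
    sy, sx = seed
    seen = {seed}
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in seen \
                    and abs(img[ny][nx] - img[sy][sx]) <= tol:
                seen.add((ny, nx))
                queue.append((ny, nx))
    return seen

def passes_screening(region, img_h, img_w, r1=0.001, r2=0.9):
    """The three preset screening conditions from the text; r1 and r2
    stand in for the first and second preset ratios."""
    ys = [p[0] for p in region]
    xs = [p[1] for p in region]
    region_h = max(ys) - min(ys) + 1
    region_w = max(xs) - min(xs) + 1
    return (r1 * img_h * img_w < len(region)   # region has enough pixels
            and region_w < img_w * r2          # region not too wide
            and region_h < img_h * r2)         # region not too tall
```

A region that fails the screening would simply be discarded before the region-matching step.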
In some embodiments, a first word set is presented in the neighborhood of the target pixel point, and a second word set is presented in the second image; and the method further includes: for each first word in the first word set, determining a second word in the second word set that matches the first word, and determining the combination of the first word and the matching second word as a matched word pair; and generating a third matching result for the target pixel point based on the matched word pairs and the target pixel point.
In some embodiments, determining the second word that matches the first word includes: determining the four-corner code of the first word; determining the similarity between each second word in the second word set and the first word; and determining, as the second word matching the first word, the second word whose four-corner code is identical to that of the first word and/or whose similarity is greatest.
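A minimal sketch of this word-matching rule follows. The four-corner codes below are placeholder values, not real Four-Corner Method codes (a real system would consult a full lookup table), and string similarity stands in for whatever similarity measure an implementation actually uses:

```python
import difflib

# Hypothetical four-corner codes for a few characters; placeholder data.
FOUR_CORNER = {"登": "1210", "录": "1712", "绿": "2713"}

def match_word(first_word, second_words):
    """Prefer a second word with an identical four-corner code; otherwise
    fall back to the second word with the greatest string similarity."""
    code = FOUR_CORNER.get(first_word)
    for w in second_words:
        if code is not None and FOUR_CORNER.get(w) == code:
            return w
    return max(second_words,
               key=lambda w: difflib.SequenceMatcher(None, first_word, w).ratio())
```

The combination `(first_word, match_word(first_word, second_words))` would then form one matched word pair.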
In some embodiments, the method further includes: performing a template matching operation on the second image using the neighborhood of the target pixel point, determining the similarity between image regions of the second image and the neighborhood, and determining the image region of the second image with the greatest of the determined similarities as the matching image region; determining a selected pixel point in the neighborhood and determining, in the matching image region, the pixel point matching the selected pixel point; and generating a fourth matching result for the target pixel point based on the selected pixel point, the matched pixel point, and the target pixel point.
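The template matching operation can be sketched as an exhaustive sliding-window comparison. This toy version (grayscale arrays assumed; a production system would more likely use an optimized routine such as OpenCV's `matchTemplate`) scores each window by sum of squared differences, so the smallest score marks the most similar region:

```python
import numpy as np

def best_template_match(image, template):
    """Slide `template` over `image` and return the top-left corner (y, x)
    of the window with the smallest sum of squared differences, i.e. the
    image region most similar to the template."""
    ih, iw = image.shape
    th, tw = template.shape
    best, best_pos = None, None
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            ssd = np.sum((image[y:y + th, x:x + tw] - template) ** 2)
            if best is None or ssd < best:
                best, best_pos = ssd, (y, x)
    return best_pos
```

Given the matched window's corner, a selected pixel at offset (dy, dx) inside the neighborhood maps to the pixel at the same offset inside the matching image region.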
In some embodiments, the method further includes: generating a final matching result based on the generated matching results.
In some embodiments, the first image is the image displayed by a first electronic device when a target webpage is presented on the first electronic device, and the second image is the image displayed by a second electronic device when the target webpage is presented on the second electronic device.
In some embodiments, the method further includes: performing a compatibility test on the website to which the target webpage belongs, based on the final matching result.
In a second aspect, an embodiment of the present application provides an apparatus for generating information, the apparatus including: a matching unit configured to match each first feature point in a first feature point set against each second feature point in a second feature point set to obtain a set of matched feature point pairs, where a first feature point is a feature point of a target image region contained in a first image, the target image region contains a target pixel point, and a second feature point is a feature point of a second image; a first determination unit configured to determine the density with which the second feature points contained in the set of matched feature point pairs are distributed over image regions of the second image, and to determine the image region corresponding to the greatest of the determined densities as a matched-feature-point dense region; a second determination unit configured to determine the second feature points contained in the dense region; a third determination unit configured to determine, as a corrected set of matched feature point pairs, the subset of matched feature point pairs that contain the determined second feature points; and a first generation unit configured to generate a first matching result for the target pixel point based on the corrected set of matched feature point pairs and the target pixel point.
In some embodiments, the apparatus further includes: a fourth determination unit configured to take each pixel in a neighborhood of the target pixel point in the first image as a first seed pixel and, using a region growing algorithm, determine as a first region the grown region of each first seed pixel that satisfies preset screening conditions; a fifth determination unit configured to take each pixel in the second image as a second seed pixel and, using the region growing algorithm, determine as a second region the grown region of each second seed pixel that satisfies the preset screening conditions; a sixth determination unit configured to determine a second region that satisfies at least one of the following matching conditions as a second region matching the first region: the difference between the fill ratio of the second region and the fill ratio of the first region is less than a preset fill-ratio threshold; the difference between the aspect ratio of the second region and the aspect ratio of the first region is less than a preset aspect-ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; a seventh determination unit configured to determine the combination of the first region and the matching second region as a matched region pair; and a second generation unit configured to generate a second matching result for the target pixel point based on the matched region pair and the target pixel point.
In some embodiments, the preset screening conditions include at least one of the following: the product of a first preset ratio, the height of the first image, and the width of the first image is less than the number of pixels in the grown region; the width of the grown region is less than the product of the width of the first image and a second preset ratio; the height of the grown region is less than the product of the height of the first image and the second preset ratio.
In some embodiments, a first word set is presented in the neighborhood of the target pixel point, and a second word set is presented in the second image; and the apparatus further includes: an eighth determination unit configured to, for each first word in the first word set, determine a second word in the second word set that matches the first word, and determine the combination of the first word and the matching second word as a matched word pair; and a third generation unit configured to generate a third matching result for the target pixel point based on the matched word pairs and the target pixel point.
In some embodiments, the eighth determination unit includes: a first determining module configured to determine the four-corner code of the first word; a second determining module configured to determine the similarity between each second word in the second word set and the first word; and a third determining module configured to determine, as the second word matching the first word, the second word whose four-corner code is identical to that of the first word and/or whose similarity is greatest.
In some embodiments, the apparatus further includes: a ninth determination unit configured to perform a template matching operation on the second image using the neighborhood of the target pixel point, determine the similarity between image regions of the second image and the neighborhood, and determine the image region of the second image with the greatest of the determined similarities as the matching image region; a tenth determination unit configured to determine a selected pixel point in the neighborhood and determine, in the matching image region, the pixel point matching the selected pixel point; and a fourth generation unit configured to generate a fourth matching result for the target pixel point based on the selected pixel point, the matched pixel point, and the target pixel point.
In some embodiments, the apparatus further includes: a fifth generation unit configured to generate a final matching result based on the generated matching results.
In some embodiments, the first image is the image displayed by a first electronic device when a target webpage is presented on the first electronic device, and the second image is the image displayed by a second electronic device when the target webpage is presented on the second electronic device.
In some embodiments, the apparatus further includes: a test unit configured to perform, based on the final matching result, a compatibility test on the website to which the target webpage belongs.
In a third aspect, an embodiment of the present application provides an electronic device for generating information, including: one or more processors; and a storage device for storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the method of any of the above embodiments of the method for generating information.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium for generating information, on which a computer program is stored, the program, when executed by a processor, implementing the method of any of the above embodiments of the method for generating information.
The method and apparatus for generating information provided by embodiments of the present application match each first feature point in a first feature point set against each second feature point in a second feature point set to obtain a set of matched feature point pairs; then determine the density with which the second feature points contained in the set of matched feature point pairs are distributed over image regions of the second image, determining the image region corresponding to the greatest of the determined densities as a matched-feature-point dense region; then determine the second feature points contained in the dense region; then determine, as a corrected set of matched feature point pairs, the subset of matched feature point pairs that contain the determined second feature points; and finally generate a first matching result for the target pixel point based on the corrected set of matched feature point pairs and the target pixel point. This improves the accuracy of the matching result determined for the target pixel point.
Description of the drawings
Other features, objects, and advantages of the present application will become more apparent upon reading the detailed description of non-limiting embodiments made with reference to the following drawings:
Fig. 1 is an exemplary system architecture diagram to which the present application can be applied;
Fig. 2 is a flowchart of one embodiment of the method for generating information according to the present application;
Fig. 3A' is a schematic diagram of the first image of a first application scenario of the method for generating information according to the present application;
Fig. 3A" is a schematic diagram of the second image of the above first application scenario of the method for generating information according to the present application;
Fig. 3B' is a schematic diagram of the position of the target pixel point in the above first application scenario of the method for generating information according to the present application;
Fig. 3B" is a schematic diagram of the position of the corresponding point of the target pixel point determined in the above first application scenario of the method for generating information according to the present application;
Fig. 3C' is a schematic diagram of the first image of a second application scenario of the method for generating information according to the present application;
Fig. 3C" is a schematic diagram of the second image of the above second application scenario of the method for generating information according to the present application;
Fig. 3D' is a schematic diagram of the position of the target pixel point in the above second application scenario of the method for generating information according to the present application;
Fig. 3D" is a schematic diagram of the position of the corresponding point of the target pixel point determined in the above second application scenario of the method for generating information according to the present application;
Fig. 3E' is a schematic diagram of the first image of a third application scenario of the method for generating information according to the present application;
Fig. 3E" is a schematic diagram of the second image of the above third application scenario of the method for generating information according to the present application;
Fig. 3F' is a schematic diagram of the first image of a fourth application scenario of the method for generating information according to the present application;
Fig. 3F" is a schematic diagram of the second image of the above fourth application scenario of the method for generating information according to the present application;
Fig. 4 is a flowchart of another embodiment of the method for generating information according to the present application;
Fig. 5 is a structural schematic diagram of one embodiment of the apparatus for generating information according to the present application;
Fig. 6 is a structural schematic diagram of a computer system suitable for implementing the electronic device of the embodiments of the present application.
Detailed description of the embodiments

The present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here are used only to explain the relevant invention, not to limit it. It should also be noted that, for convenience of description, only the parts relevant to the invention are shown in the drawings.

It should be noted that, in the absence of conflict, the embodiments of the present application and the features in the embodiments may be combined with one another. The present application is described in detail below with reference to the drawings and in conjunction with the embodiments.
Fig. 1 shows an exemplary system architecture 100 to which embodiments of the method for generating information or the apparatus for generating information of the present application may be applied.

As shown in Fig. 1, the system architecture 100 may include terminal devices 101, 102, 103, a network 104, and a server 105. The network 104 serves as the medium providing communication links between the terminal devices 101, 102, 103 and the server 105. The network 104 may include various connection types, such as wired or wireless communication links, or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 over the network 104, to receive or send messages and so on. Various communication client applications may be installed on the terminal devices 101, 102, 103, such as web browser applications, shopping applications, search applications, instant messaging tools, email clients, and social platform software.

The terminal devices 101, 102, 103 may be various electronic devices with a display screen and support for web browsing, including but not limited to smartphones, tablet computers, e-book readers, MP3 players (Moving Picture Experts Group Audio Layer III), MP4 players (Moving Picture Experts Group Audio Layer IV), laptop computers, desktop computers, and so on.
The server 105 may be a server providing various services, for example a backend web server providing support for the webpages displayed on the terminal devices 101, 102, 103. The backend web server may process (for example, analyze) the received data and feed the processing results back to the terminal devices.

The server 105 may also be a background information processing server that processes the images displayed on the terminal devices 101, 102, 103. The background information processing server may perform processing such as corresponding-point matching on the pictures received or retrieved from different terminal devices, and feed the processing results (such as matching result information) back to the terminal devices.
It should be noted that, in practice, the method for generating information provided by the embodiments of the present application generally needs to be performed by an electronic device of relatively high performance, and the apparatus for generating information generally needs to be realized on such a device. Relative to terminal devices, servers often have higher performance. Thus, in general, the method for generating information provided by the embodiments of the present application is performed by the server 105, and correspondingly, the apparatus for generating information is generally disposed in the server 105. However, when the performance of a terminal device can satisfy the execution conditions of the method or the setup conditions of the apparatus, the method for generating information provided by the embodiments of the present application may also be performed by the terminal devices 101, 102, 103, and the apparatus for generating information may also be disposed in the terminal devices 101, 102, 103.
It should be understood that the numbers of terminal devices, networks, and servers in Fig. 1 are merely illustrative. Depending on implementation needs, there may be any number of terminal devices, networks, and servers.
With continued reference to Fig. 2, a flow 200 of one embodiment of the method for generating information according to the present application is shown. The method for generating information includes the following steps:

Step 201: match each first feature point in the first feature point set against each second feature point in the second feature point set to obtain a set of matched feature point pairs.
In the present embodiment, the electronic device on which the method for generating information runs (such as the server or a terminal device shown in Fig. 1) may match each first feature point in the first feature point set against each second feature point in the second feature point set to obtain a set of matched feature point pairs. Here, a first feature point is a feature point of the target image region contained in the first image, the target image region contains the target pixel point, and a second feature point is a feature point of the second image. The shape and size of the target image region may be preset. Illustratively, the target image region may be circular, for example a circle centered on the target pixel point (or another pixel in the first image) with a radius of 0.5 times (or some other multiple of) the width of the first image; the target image region may also be rectangular, for example a square centered on the target pixel point (or another pixel in the first image) with a side length of 0.5 times (or some other multiple of) the width of the first image. A feature point is a pixel in an image that can be used to characterize the color or texture information of the image.
In practice, the electronic device may perform this step as follows:

First, on the first image, the electronic device may extract the SURF (Speeded Up Robust Features) feature points of the target image region containing the pixel point in question, obtaining a first SURF feature point set, and compute the feature vector of each first SURF feature point in the set, obtaining a first feature vector set.

Then, the electronic device may extract the SURF feature points of the second image, obtaining a second SURF feature point set, and compute the feature vector of each second SURF feature point in the set, obtaining a second feature vector set.

Next, for each first SURF feature point (denoted point A) in the first SURF feature point set, the electronic device may determine, in the second SURF feature point set, the second SURF feature point with the smallest distance (such as Euclidean distance or Manhattan distance) to point A (denoted point B1), and the second SURF feature point with the next-smallest distance to point A (denoted point B2). Denote the distance between A and B1 as L1, and the distance between A and B2 as L2.

Then, the electronic device may compute the ratio of L1 to L2; if the ratio is less than a preset threshold, point B1 is determined to be the matching feature point of point A, and the combination of A and B1 is determined to be a matched feature point pair. The threshold can be used to characterize the similarity of points A and B1.

Finally, the electronic device may determine the matching feature point of each first SURF feature point, thereby obtaining the set of matched feature point pairs.
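Assuming the SURF descriptor vectors for both point sets have already been computed (by any SURF implementation), the nearest/next-nearest ratio test above can be sketched roughly as follows:

```python
import numpy as np

def ratio_test_match(first_descs, second_descs, ratio_threshold=0.75):
    """For each first descriptor, find the two nearest second descriptors
    by Euclidean distance and keep the pair only if L1/L2 is below the
    preset threshold (a Lowe-style ratio test)."""
    pairs = []
    for i, a in enumerate(first_descs):
        dists = np.linalg.norm(second_descs - a, axis=1)
        order = np.argsort(dists)
        b1, b2 = order[0], order[1]  # nearest and next-nearest
        l1, l2 = dists[b1], dists[b2]
        if l2 > 0 and l1 / l2 < ratio_threshold:
            pairs.append((i, int(b1)))
    return pairs
```

Each returned `(i, j)` pair plays the role of a matched feature point pair (A, B1); the `0.75` threshold is an assumed value, not one stated in the text.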
As an example, refer to Fig. 3A' and Fig. 3A". In Fig. 3A', the first image 301 contains the target image region 3010, which contains the target pixel point 3013. The server (i.e., the above electronic device) matches each first feature point in the first feature point set (i.e., the set of feature points contained in the target image region 3010, such as first feature points 3011, 3012, 3014) against each second feature point in the second feature point set, where the second feature points may be the feature points contained in the second image 302 in Fig. 3A". Following the steps above, it is determined that feature point 3021 is the matching feature point of feature point 3011, feature point 3022 is the matching feature point of feature point 3012, and feature point 3024 is the matching feature point of feature point 3014. On this basis, the electronic device obtains the set of matched feature point pairs.
Optionally, the electronic device may also directly determine the second SURF feature point with the greatest similarity to a first SURF feature point as that first SURF feature point's matching feature point (without comparing the ratio of the smallest and next-smallest distances against a preset threshold). Here, similarity can be characterized by, for example, standardized Euclidean distance or Hamming distance.
Step 202: determine the density with which the second feature points contained in the set of matched feature point pairs are distributed over image regions of the second image, and determine the image region corresponding to the greatest of the determined densities as the matched-feature-point dense region.
In the present embodiment, based on the set of matched feature point pairs obtained in step 201, the electronic device may determine the density with which the second feature points contained in the set are distributed over image regions of the second image, and determine the image region corresponding to the greatest of the determined densities as the matched-feature-point dense region. The shape and size of the image regions of the second image may be preset. Density can be characterized by the number of second feature points contained per unit area of an image region.
As an example, please continue to refer to Figs. 3A′ and 3A″. Here, the server moves a rectangular frame of the same size as the target image region 3010 over the second image 302 as a target frame, and determines the number of second feature points included in the image region of the second image framed by the target frame at the initial position and after each move. Finally, the server determines the image region 3020 containing the greatest number of second feature points as the matching-feature-point dense region.
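The sliding-frame search described in this example can be sketched as follows. This is a minimal illustration only: the matched second feature points are assumed to be given as (x, y) coordinates, and the frame is assumed to move with a fixed stride (the function name and the stride are not from the source).

```python
def densest_window(points, img_w, img_h, win_w, win_h, stride=1):
    """Slide a win_w x win_h frame over the second image and return the
    top-left corner of the frame containing the most matched feature points."""
    best_corner, best_count = (0, 0), -1
    for y in range(0, max(1, img_h - win_h + 1), stride):
        for x in range(0, max(1, img_w - win_w + 1), stride):
            count = sum(1 for (px, py) in points
                        if x <= px < x + win_w and y <= py < y + win_h)
            if count > best_count:
                best_corner, best_count = (x, y), count
    return best_corner, best_count

# Illustrative matched second feature points: three clustered, one far away.
pts = [(12, 14), (15, 18), (17, 15), (80, 90)]
corner, n = densest_window(pts, img_w=100, img_h=100, win_w=30, win_h=30, stride=5)
```

The frame position maximizing the count plays the role of the matching-feature-point dense region 3020 in the example above.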
Step 203: determine the second feature points contained in the matching-feature-point dense region.
In the present embodiment, based on the matching-feature-point dense region determined in step 202, the above-described electronic device can determine the second feature points contained in that region.
As an example, please refer to Figs. 3A′ and 3A″. Here, the server determines the second feature points 3022, 3023 and 3024 contained in the matching-feature-point dense region 3020.
Step 204: determine the set of those matching feature point pairs in the matching feature point pair set that contain the determined second feature points as the revised matching feature point pair set.
In the present embodiment, the above-described electronic device can determine the set of matching feature point pairs in the matching feature point pair set that contain the determined second feature points as the revised matching feature point pair set.
As an example, please refer to Figs. 3A′ and 3A″. Here, the server determines the set of matching feature point pairs containing the second feature points 3022, 3023 and 3024 as the revised matching feature point pair set.
Step 205: generate the first matching result for the target pixel point based on the revised matching feature point pair set and the target pixel point.
In the present embodiment, based on the revised matching feature point pair set determined in step 204 and the above-mentioned target pixel point, the above-described electronic device can generate the first matching result for the target pixel point. Here, the first matching result may be information on the position of the corresponding point (i.e., the matched point) of the target pixel point in the second image, or information characterizing whether the second image contains a corresponding point of the target pixel point.
As an example, this step may be performed as follows:
First, the above-described electronic device can determine the position, in the first image, of each first feature point in the revised matching feature point pair set (denoted position set A), and the position, in the second image, of each second feature point in the set (denoted position set B).
Then, the above-described electronic device can determine the midpoint of the positions in position set A (denoted midpoint A below). For example, the abscissa of the midpoint may be the mean of the abscissas of the positions in position set A, and its ordinate the mean of their ordinates. Similarly, the above-described electronic device can determine the midpoint of the positions in position set B (denoted midpoint B below).
After that, the above-described electronic device can determine the position of the target pixel point.
Finally, according to the position of the target pixel point relative to midpoint A, the above-described electronic device can determine the pixel point in the second image that has the same position relative to midpoint B as the corresponding point (i.e., the matched point) of the target pixel point. The above-described electronic device can thereby generate the first matching result for the target pixel point. Here, the first matching result may be the position information of the corresponding point of the target pixel point.
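The midpoint-based procedure above can be sketched as follows. This is a minimal illustration with invented function names; positions are assumed to be given as (x, y) tuples.

```python
def midpoint(positions):
    """Mean of the abscissas and mean of the ordinates of the positions."""
    xs = [p[0] for p in positions]
    ys = [p[1] for p in positions]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

def match_by_midpoints(target, first_positions, second_positions):
    """Map the target pixel point into the second image: compute midpoint A
    (first image) and midpoint B (second image), then apply the target's
    offset from A at B."""
    ax, ay = midpoint(first_positions)
    bx, by = midpoint(second_positions)
    dx, dy = target[0] - ax, target[1] - ay
    return (bx + dx, by + dy)

corr = match_by_midpoints((10, 10), [(0, 0), (20, 20)], [(5, 5), (25, 25)])
# A = (10, 10), B = (15, 15); the target coincides with A, so corr = (15.0, 15.0)
```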
Optionally, the above-described electronic device may also determine the set of first feature points included in the revised matching feature point pair set as set A, and the set of second feature points included in that set as set B. For each first feature point in set A, its neighborhood (an image region of preset shape and size) is taken as a feature region, with the provision that when the feature regions generated by multiple points overlap, their union is treated as a single feature region. In this manner the feature region collection Ra of set A is generated; the feature region collection Rb of set B is generated similarly. If the target pixel point lies within some feature region rA in the collection Ra, then the corresponding point within the feature region rB in Rb that matches rA is taken as the sought position of the corresponding point of the target pixel point, and the first matching result for the target pixel point is generated on this basis.
Illustratively, please refer to Figs. 3B′ and 3B″. Here, the first image 303 contains the target pixel point 3031. Following the above steps, the server determines that the target pixel point 3031 lies within the feature region 3032 (i.e., the above feature region rA) and determines that the feature region 3042 in the second image 304 is the matching feature region of the feature region 3032. Further, the feature region 3032 (rectangular in shape) has length Wa and width Ha, the distance from the target pixel point 3031 to the length side (one edge) of the feature region 3032 is Ta, and the distance from the target pixel point 3031 to the width side (another edge) of the feature region 3032 is La. The server also determines that the feature region 3042 (rectangular in shape) has length Wb and width Hb. The server can thereby determine the position of the corresponding point of the target pixel point 3031 by determining the values of Lb and Tb, where Lb is the distance from the corresponding point to the length side (one edge) of the feature region 3042 and Tb is the distance from the corresponding point to the width side (another edge) of the feature region 3042. As an example, the server may determine the values of Lb and Tb by the following equations:
Lb = Wb * La / Wa
Tb = Hb * Ta / Ha
The above-mentioned server can thereby generate the position information of the corresponding point 3041 of the target pixel point 3031. In the figure, the position information of the corresponding point 3041 is the first matching result.
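The two equations above can be wrapped in a small helper; the function name and the sample region sizes below are illustrative, not from the source.

```python
def map_within_region(Wa, Ha, La, Ta, Wb, Hb):
    """Map a point's offsets (La, Ta) inside a Wa x Ha feature region to the
    matching Wb x Hb region, preserving relative position:
    Lb = Wb*La/Wa, Tb = Hb*Ta/Ha."""
    Lb = Wb * La / Wa
    Tb = Hb * Ta / Ha
    return Lb, Tb

# If the matching region is twice as large in both dimensions,
# both offsets simply double.
Lb, Tb = map_within_region(Wa=40, Ha=20, La=10, Ta=5, Wb=80, Hb=40)
```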
It will be appreciated that when the similarity between the corresponding point and the target pixel point is less than a preset similarity threshold (e.g., 0.9), the first matching result may be "no corresponding point exists", information characterizing the similarity between the corresponding point and the target pixel point, or other information.
The method provided by the above embodiment of the application matches each first feature point in the first feature point set against each second feature point in the second feature point set to obtain a matching feature point pair set; determines the density with which the second feature points in the matching feature point pair set are distributed over image regions of the second image, and determines the image region corresponding to the greatest determined density as the matching-feature-point dense region; determines the second feature points contained in the matching-feature-point dense region; determines the set of matching feature point pairs containing the determined second feature points as the revised matching feature point pair set; and generates the first matching result for the target pixel point based on the revised matching feature point pair set and the target pixel point. This embodiment improves the accuracy of the matching result determined for the target pixel point.
In some optional implementations of the present embodiment, the above method further includes: taking each pixel point in a neighborhood of the target pixel point in the first image as a first seed pixel point and, using a region growing algorithm, determining the grown region of a first seed pixel point that meets preset screening conditions as a first region; taking each pixel point in the second image as a second seed pixel point and, using the region growing algorithm, determining the grown region of a second seed pixel point that meets the preset screening conditions as a second region; determining a second region that meets at least one of the following matching conditions as the second region matching the first region: the difference between the compactness of the second region and the compactness of the first region is less than a preset compactness threshold; the difference between the aspect ratio of the second region and the aspect ratio of the first region is less than a preset aspect ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; determining the combination of the first region and the second region matching the first region as a matching region pair; and generating the second matching result for the target pixel point based on the matching region pair and the target pixel point.
Here, the neighborhood of the target pixel point in the first image is an image region containing the target pixel point, whose shape and size are preset. Illustratively, the neighborhood may be a rectangular or square image region centered on the target pixel point. The second matching result may be information on the position of the corresponding point (i.e., the matched point) of the target pixel point in the second image, or information characterizing whether the second image contains a corresponding point of the target pixel point. The preset screening conditions are preset conditions for screening the grown regions so as to obtain the first region. It will be appreciated that the grown regions obtained with the region growing algorithm may include regions caused by noise (manifesting as too small an area) or by large stretches of background or blank space (manifesting as too large an area), and such regions are of no help to matching. The above preset screening conditions can be used to reject them. Methods for determining the similarity between the first region and the second region include, but are not limited to, feature-point-based image similarity computation and histogram-based image similarity computation.
Illustratively, the compactness of an image region (including the first region and the second region) can be determined as follows: first, determine the product of the length of the image region (which may be measured in pixels) and its width (likewise measured in pixels); then, determine the ratio of the actual number of pixels in the image region to that product, and take this ratio as the compactness of the image region.
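The compactness computation can be sketched as follows, assuming a region is given as a set of (x, y) pixel coordinates and taking the region's bounding box as its length and width (an assumption for illustration; the source does not specify how length and width are measured).

```python
def compactness(region_pixels):
    """Ratio of a region's actual pixel count to the area (in pixels) of its
    bounding box."""
    xs = [p[0] for p in region_pixels]
    ys = [p[1] for p in region_pixels]
    bbox_area = (max(xs) - min(xs) + 1) * (max(ys) - min(ys) + 1)
    return len(region_pixels) / bbox_area

# An L-shaped region of 3 pixels inside a 2x2 bounding box:
# 3 pixels / 4 bounding-box pixels = 0.75
c = compactness([(0, 0), (0, 1), (1, 0)])
```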
It should be noted that the above region growing algorithm is one of the algorithms involved in image segmentation technology. Its basic idea is to aggregate pixel points with similar properties into regions. In the embodiment of the application, the steps of the region growing algorithm may proceed as follows: first, take each pixel point in the neighborhood in the first image as a seed pixel point; then, merge the pixel points in the surrounding neighborhood of a seed pixel point that have the same or similar properties as the seed pixel point (e.g., pixel points of the same color) into the region of that seed pixel point; the newly added pixel points can then continue to grow outward as seed pixel points, until no pixel points meeting the condition remain to be included, thereby obtaining a grown region.
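The growing step just described can be sketched as a breadth-first flood fill using the "same color" criterion mentioned above; the 4-connected neighborhood and the nested-list grid representation are assumptions for illustration.

```python
from collections import deque

def grow_region(img, seed):
    """Grow a region from `seed`, absorbing 4-connected pixel points whose
    value equals the seed's (the 'same color' criterion from the text)."""
    h, w = len(img), len(img[0])
    value = img[seed[1]][seed[0]]
    region, queue = {seed}, deque([seed])
    while queue:
        x, y = queue.popleft()
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= nx < w and 0 <= ny < h and (nx, ny) not in region \
                    and img[ny][nx] == value:
                region.add((nx, ny))
                queue.append((nx, ny))
    return region

img = [[1, 1, 0],
       [1, 0, 0],
       [0, 0, 0]]
r = grow_region(img, (0, 0))  # grows over the three connected 1-pixels
```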
In practice, the above preset compactness threshold, preset aspect ratio threshold and preset first similarity threshold can be determined or adjusted according to the experience of technicians and/or with reference to the accuracy of the second matching result (e.g., the accuracy of second matching results obtained from historical data).
As an example, please refer to Figs. 3C′ and 3C″. Here, the server first determines the position of the target pixel point 3111 on the first image 311 and then determines the image region 3110 as the neighborhood of the target pixel point 3111; next, the server takes each pixel point in the neighborhood 3110 as a first seed pixel point and, using the region growing algorithm, determines the grown region of a first seed pixel point meeting the preset screening conditions as a first region, finally obtaining the first region 3112. Similarly, the server obtains the second regions in the second image 312 and, according to the above matching conditions, determines the second region 3122 matching the first region 3112. On this basis, according to the position of the target pixel point 3111 relative to the first region 3112 (e.g., starting from the center of the first region 3112, move up 10 pixels and then left 10 pixels to reach the position of the target pixel point 3111), the server generates the position information of the corresponding point 3121 of the target pixel point 3111 (e.g., starting from the center of the second region 3122, move up 10 pixels and then left 10 pixels to reach the position of the corresponding point 3121 of the target pixel point 3111), i.e., generates the second matching result for the target pixel point.
Optionally, the above-described electronic device may also generate the second matching result for the target pixel point according to the following steps:
Illustratively, please refer to Figs. 3D′ and 3D″. Here, the first image 313 contains the target pixel point 3131. Pixel point 3130 is a pixel point in the first region determined by the server using the above steps (e.g., the center point of the first region, or another pixel point); pixel point 3140 is a pixel point in the second region of the second image 314 that the server, using the above steps, determined to match the first region. It should be noted that the position of pixel point 3140 relative to the second region is consistent with the position of pixel point 3130 relative to the first region. In the figure, Wa is the width of the first image 313 and Ha its length; Wb is the width of the second image 314 and Hb its length. As an example, the position of the target pixel point 3131 (e.g., coordinates whose abscissa is the distance from the target pixel point 3131 to the length side of the first image 313 and whose ordinate is the distance from the target pixel point 3131 to the width side of the first image 313) may be denoted (qx, qy); the position of pixel point 3130 (defined analogously with respect to the first image 313) may be denoted (max, may); and the position of pixel point 3140 (defined analogously with respect to the second image 314) may be denoted (mbx, mby). The server may thereby determine the position of the corresponding point 3141 of the target pixel point 3131 using the following equations:
tx = mbx - Wb/Wa * (max - qx)
ty = mby - Hb/Ha * (may - qy)
Here, tx characterizes the distance from the corresponding point 3141 to the length side (one edge) of the second image 314, and ty characterizes the distance from the corresponding point 3141 to the width side (another edge) of the second image 314. The server thereby generates the second matching result for the target pixel point 3131, which may be information on the position of the corresponding point 3141 of the target pixel point 3131.
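The two equations can be sketched as a helper function; the argument names and the sample image sizes are illustrative only.

```python
def map_target_point(q, ma, mb, size_a, size_b):
    """Locate the target point in the second image from one matched pixel pair
    (ma in the first image, mb in the second), scaling the offset by the
    image-size ratio: tx = mbx - Wb/Wa*(max - qx), ty = mby - Hb/Ha*(may - qy)."""
    qx, qy = q
    max_, may_ = ma
    mbx, mby = mb
    Wa, Ha = size_a
    Wb, Hb = size_b
    tx = mbx - Wb / Wa * (max_ - qx)
    ty = mby - Hb / Ha * (may_ - qy)
    return tx, ty

# The second image is twice the size of the first, so a 10-pixel offset
# in the first image becomes a 20-pixel offset in the second.
t = map_target_point(q=(40, 40), ma=(50, 50), mb=(100, 100),
                     size_a=(200, 100), size_b=(400, 200))
```

The same formulas reappear below for the word-based and template-based matching results; only the matched pair (ma, mb) changes.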
It will be appreciated that, after scaling the second region matching the first region to the same size as the first region, the second matching result can be generated by calculating the similarity between the first region and the second region and comparing the calculated similarity against a preset similarity threshold (e.g., 0.99). For example, when the calculated similarity is less than the preset similarity threshold, the second matching result may be "no corresponding point exists".
It should be noted that comparing the second matching result with the first matching result helps generate a more accurate matching result for the target pixel point.
In some optional implementations of the present embodiment, the preset screening conditions include at least one of the following: the number of pixels in the grown region is greater than the product of the first preset ratio value, the height of the first image and the width of the first image; the width of the grown region is less than the product of the width of the first image and the second preset ratio value; the height of the grown region is less than the product of the height of the first image and the second preset ratio value.
Here, the grown region may be a rectangular region or a circular region. The first preset ratio value and the second preset ratio value may be preset values characterizing proportions relative to the first image or a sub-image within it (e.g., a sub-image obtained with matting techniques, or the image presented by a control when the page is displayed on the device). In practice, the first preset ratio value and the second preset ratio value can be configured empirically by technicians. For example, the first preset ratio value may be 0.01 and the second preset ratio value may be 0.3.
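The screening conditions can be sketched as a predicate; for illustration all three conditions are checked jointly here, although the text only requires that at least one of them be configured, and the example ratio values 0.01 and 0.3 are taken from the paragraph above.

```python
def passes_screening(region_pixels, region_w, region_h, img_w, img_h,
                     ratio1=0.01, ratio2=0.3):
    """Screening conditions from the text: the grown region must contain more
    pixels than ratio1 * (image height * image width), and be narrower and
    shorter than ratio2 of the image's width and height respectively."""
    return (region_pixels > ratio1 * img_w * img_h
            and region_w < ratio2 * img_w
            and region_h < ratio2 * img_h)

# A 20x20 region (400 px) in a 100x100 image: large enough, not too large.
ok = passes_screening(400, 20, 20, 100, 100)
```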
It should be noted that presetting the above screening conditions, so that the grown regions are screened to obtain the first region, helps generate a more accurate matching result for the target pixel point.
In some optional implementations of the present embodiment, a first vocabulary set is presented in the neighborhood of the target pixel point, and a second vocabulary set is presented in the second image; and the above method further includes: for each first word in the first vocabulary set, determining the second word in the second vocabulary set that matches that first word, and determining the combination of that first word and its matching second word as a matching word pair; and generating the third matching result for the target pixel point based on the matching word pairs and the target pixel point.
Here, the above first words and second words may be words on which operations such as copying can be performed directly, or words blended into the image (e.g., words that cannot be copied directly). A second word matching a first word may include, but is not limited to: a second word presented in the same color as the first word; a second word presented in the same font size as the first word; a second word presented in the same font as the first word.
As an example, the above steps may proceed as follows:
Please refer to Figs. 3E′ and 3E″. First, the above-described electronic device can use OCR (Optical Character Recognition) technology to recognize the word information (the first vocabulary set and the second vocabulary set) in the neighborhood 3210 of the target pixel point 3211 on the first image 321 and in the whole region of the second image 322, and determine the coordinate positions of the words.
Then, the above-described electronic device can segment the first vocabulary set and the second vocabulary set into words. For example, segmentation may be performed according to spacing: characters whose spacing is less than a preset spacing threshold are considered to belong to the same word; otherwise, they are considered different words. In the figure, the first vocabulary set includes "hello"; the second vocabulary set includes two instances of "hello".
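The spacing-based segmentation just described can be sketched as follows, assuming each recognized character comes with a horizontal extent (x_left, x_right); the box representation and the threshold value are illustrative assumptions.

```python
def group_words(boxes, gap_threshold):
    """Merge horizontally adjacent OCR character boxes (x_left, x_right) into
    words whenever the gap to the previous box is below gap_threshold."""
    boxes = sorted(boxes)
    words, current = [], [boxes[0]]
    for box in boxes[1:]:
        if box[0] - current[-1][1] < gap_threshold:
            current.append(box)  # close enough: same word
        else:
            words.append(current)  # wide gap: start a new word
            current = [box]
    words.append(current)
    return words

# Four character boxes: the first two are close together, the last two as
# well, with a wide gap in between -> two words.
w = group_words([(0, 10), (12, 22), (50, 60), (62, 72)], gap_threshold=5)
```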
After that, for each first word in the first vocabulary set, the above-described electronic device can determine the second word in the second vocabulary set matching it (e.g., a word consistent in color, size and font size), and determine the combination of the first word and its matching second word as a matching word pair.
Finally, according to the positions of the first word and the second word of a matching word pair and the position of the target pixel point 3211, the above-described electronic device can generate the third matching result for the target pixel point 3211, obtaining the position of the corresponding point 3221 of the target pixel point 3211 (as shown in Figs. 3E′ and 3E″).
It will be appreciated that when no second word matching a first word exists, the third matching result may be information such as "unable to match".
It should be noted that the above-described electronic device may denote the coordinates of the position of the first word (e.g., the center of the first word) as (max, may), where the abscissa is the distance from that position to the length side of the first image and the ordinate is the distance from that position to the width side of the first image; denote the coordinates of the position of the target pixel point (defined analogously with respect to the first image) as (qx, qy); and denote the coordinates of the position of the second word matching that first word (defined analogously with respect to the second image) as (mbx, mby). The server may thereby determine the position of the corresponding point of the target pixel point using the following equations:
tx = mbx - Wb/Wa * (max - qx)
ty = mby - Hb/Ha * (may - qy)
Here, Wa is the width of the first image and Ha its length; Wb is the width of the second image and Hb its length. tx characterizes the distance from the corresponding point to the length side (one edge) of the second image, and ty characterizes the distance from the corresponding point to the width side (another edge) of the second image. The server thereby generates the third matching result for the target pixel point, which is information on the position of the corresponding point of the target pixel point.
It should be noted that when the first image and the second image contain words, determining in the second vocabulary set the second word matching a first word, and generating the third matching result on that basis, helps generate a more accurate matching result for the target pixel point.
In some optional implementations of the present embodiment, determining the second word matching a first word includes: determining the four-corner code of the first word; determining the similarity between each second word in the second vocabulary set and the first word; and determining the second word whose four-corner code is identical to that of the first word and/or whose similarity is greatest as the second word matching the first word. Here, similarity can be determined by means such as the Euclidean distance between the feature vectors of image feature points.
It should be noted that, owing to low device resolution, small fonts and the like, OCR recognition results carry a certain error rate. Even when recognition errs, there is a high probability that the four-corner code of the recognized result still agrees with the original character. Therefore, using the four-corner code of a character (e.g., a Chinese character) as the matching basis can greatly improve matching accuracy.
In some optional implementations of the present embodiment, the above method further includes: performing a template matching operation on the second image using the neighborhood of the target pixel point, determining the similarity between image regions of the second image and the neighborhood, and determining the image region of the second image with the greatest determined similarity as the matching image region; determining a selected pixel point in the neighborhood and determining the matched pixel point of the selected pixel point in the matching image region; and generating the fourth matching result for the target pixel point based on the selected pixel point, the matched pixel point and the target pixel point.
Here, the selected pixel point may be any pixel point in the neighborhood, and the matched pixel point may be the pixel point in the matching image region corresponding to the selected pixel point. Illustratively, when the neighborhood is a rectangular region, the selected pixel point may be the center point of the neighborhood, and the matched pixel point may be the center point of the matching image region.
Here, the template matching operation is a well-known operation extensively studied by technicians in the field of image processing, and is not described again here.
It will be appreciated that the fourth matching result can be generated by calculating the similarity between an image region of the second image and the neighborhood and comparing the calculated similarity against a preset similarity threshold (e.g., 0.99). For example, when the calculated similarity is less than the preset similarity threshold, the fourth matching result may be "no corresponding point exists".
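A template matching operation of the kind referred to above can be sketched as an exhaustive sum-of-absolute-differences search; this is one common formulation for illustration, not necessarily the one the source intends.

```python
def best_match(image, template):
    """Exhaustive template matching: slide the template over the image and
    return the top-left corner minimising the sum of absolute differences."""
    ih, iw = len(image), len(image[0])
    th, tw = len(template), len(template[0])
    best, best_sad = None, float("inf")
    for y in range(ih - th + 1):
        for x in range(iw - tw + 1):
            sad = sum(abs(image[y + j][x + i] - template[j][i])
                      for j in range(th) for i in range(tw))
            if sad < best_sad:
                best, best_sad = (x, y), sad
    return best, best_sad

img = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
tpl = [[9, 8],
       [7, 6]]
pos, sad = best_match(img, tpl)  # exact match at (1, 1) with SAD 0
```

A SAD of zero corresponds to maximum similarity; a large SAD relative to a threshold would map to the "no corresponding point exists" case above.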
As an example, please refer to Figs. 3F′ and 3F″. In the figure, the first image 331 contains the target pixel point 3311. The server determines the image region 3310 as the neighborhood of the target pixel point 3311; then, the server performs the template matching operation on the second image, determining the similarity between each image region of the second image 332 and the neighborhood 3310, and determines the image region 3320 of the second image with the greatest determined similarity as the matching image region; after that, the server determines the selected pixel point 3312 in the neighborhood and determines the matched pixel point 3322 of the selected pixel point 3312 in the matching image region; finally, based on the selected pixel point 3312, the matched pixel point 3322 and the target pixel point 3311, the server generates the position information of the corresponding point 3321 of the target pixel point 3311. Here, the position information of the corresponding point 3321 is the fourth matching result.
It will be appreciated that the above-described electronic device may denote the coordinates of the position of the selected pixel point as (max, may), where the abscissa is the distance from that position to the length side of the first image and the ordinate is the distance from that position to the width side of the first image; denote the coordinates of the position of the target pixel point (defined analogously with respect to the first image) as (qx, qy); and denote the coordinates of the position of the matched pixel point of the selected pixel point (defined analogously with respect to the second image) as (mbx, mby). The server may thereby determine the position of the corresponding point of the target pixel point using the following equations:
tx = mbx - Wb/Wa * (max - qx)
ty = mby - Hb/Ha * (may - qy)
Here, Wa is the width of the first image and Ha its length; Wb is the width of the second image and Hb its length. tx characterizes the distance from the corresponding point to the length side (one edge) of the second image, and ty characterizes the distance from the corresponding point to the width side (another edge) of the second image. The server thereby generates the fourth matching result for the target pixel point (information on the position of the corresponding point of the target pixel point).
It should be noted that comparing the fourth matching result obtained by the above template matching method with matching results such as the first matching result aids in determining a more accurate position for the corresponding point of the target pixel point.
With further reference to Fig. 4, it illustrates the flow 400 of another embodiment of the method for generating information. The flow 400 of the method for generating information includes the following steps:
Step 401: match each first feature point in the first feature point set against each second feature point in the second feature point set to obtain a matching feature point pair set.
In the present embodiment, step 401 is substantially identical to step 201 in the embodiment corresponding to Fig. 2 and is not described again here.
It should be noted that in the present embodiment, the first feature points are the feature points of the target image region included in the first image, the target image region contains the target pixel point, and the second feature points are the feature points of the second image. The first image is the image displayed by a first electronic device when the target webpage is presented on the first electronic device, and the second image is the image displayed by a second electronic device when the target webpage is presented on the second electronic device.
Step 402: determine the density with which the second feature points in the matching feature point pair set are distributed over image regions of the second image, and determine the image region corresponding to the greatest of the determined densities as the matching-feature-point dense region.
In the present embodiment, step 402 is substantially identical to step 202 in the embodiment corresponding to Fig. 2 and is not described again here.
Step 403: determine the second feature points contained in the matching-feature-point dense region.
In the present embodiment, step 403 is substantially identical to step 203 in the embodiment corresponding to Fig. 2 and is not described again here.
Step 404: determine the set of matching feature point pairs in the matching feature point pair set that contain the determined second feature points as the revised matching feature point pair set.
In the present embodiment, step 404 is substantially identical to step 204 in the embodiment corresponding to Fig. 2 and is not described again here.
Step 405: generate a first matching result for the target pixel point based on the corrected matched feature point pair set and the target pixel point.
In the present embodiment, step 405 is substantially the same as step 205 in the embodiment corresponding to Fig. 2, and is not described again here.
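For illustration only, steps 402 to 405 can be sketched in Python. The fixed-grid density estimate, the helper names, and the use of the mean displacement of the corrected pairs to locate the target pixel point are all simplifying assumptions of this sketch, not requirements of the embodiment:

```python
from collections import defaultdict

def densest_cell(pairs, cell=50):
    """Group second feature points into fixed-size grid cells and return the
    cell (as an (x0, y0, x1, y1) box) holding the most points, plus its pairs."""
    counts = defaultdict(list)
    for (p1, p2) in pairs:
        counts[(p2[0] // cell, p2[1] // cell)].append((p1, p2))
    key = max(counts, key=lambda k: len(counts[k]))
    box = (key[0] * cell, key[1] * cell, (key[0] + 1) * cell, (key[1] + 1) * cell)
    return box, counts[key]

def first_matching_result(pairs, target):
    """Keep only pairs whose second point lies in the densest cell
    (steps 402-404), then locate the target pixel point in the second image
    by the mean displacement of the corrected pairs (step 405)."""
    _, corrected = densest_cell(pairs)
    dx = sum(p2[0] - p1[0] for p1, p2 in corrected) / len(corrected)
    dy = sum(p2[1] - p1[1] for p1, p2 in corrected) / len(corrected)
    return (round(target[0] + dx), round(target[1] + dy))
```

With three consistent pairs and one outlier, the outlier falls in a sparse cell and is discarded before the displacement is averaged.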
Step 406: generate a final matching result based on the generated matching results.
In the present embodiment, the electronic device may also generate a final matching result based on the matching results obtained so far. Here, the matching results include the first matching result and whichever of the following have been generated: the second matching result, the third matching result, and the fourth matching result. It can be understood that, before this step is performed, if only the first matching result has been generated (and not the second, third, or fourth matching results), the generated matching results include only the first matching result; if only the first and fourth matching results have been generated (and not the second and third), the generated matching results include only the first and fourth matching results.
It can be understood that there are many ways to generate the final matching result from the generated matching results. Illustratively, suppose the generated matching results include the first, second, third, and fourth matching results, where the first matching result is "corresponding point coordinates (100, 100)", the second is "corresponding point coordinates (101, 101)", the third is "corresponding point coordinates (100, 100)", and the fourth is "corresponding point coordinates (99, 99)". The electronic device may determine the matching result occurring most often as the final matching result (in this case, the final matching result would be "corresponding point coordinates (100, 100)"); it may also determine the aggregation of the generated matching results as the final matching result (in this case, the final matching result would be "corresponding point coordinates (100, 100), (101, 101), (99, 99), (100, 100)").
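The majority-vote variant of this step can be sketched as follows (a minimal sketch; representing a matching result as a coordinate tuple is an assumption of the example):

```python
from collections import Counter

def final_matching_result(results):
    """Return the generated matching result that occurs most often.

    `results` is a list of corresponding-point coordinates; for equal counts,
    the result encountered first wins (Counter preserves insertion order).
    """
    return Counter(results).most_common(1)[0][0]
```

With the four results of the example above, `final_matching_result([(100, 100), (101, 101), (100, 100), (99, 99)])` yields `(100, 100)`.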
Step 407: perform a compatibility test on the website to which the target webpage belongs, based on the final matching result.
In the present embodiment, the electronic device may also perform a compatibility test on the website to which the target webpage belongs, based on the final matching result. The compatibility test may include, but is not limited to: browser compatibility testing, screen size and resolution compatibility testing, operating system compatibility testing, and device model compatibility testing.
Illustratively, after the corresponding point of the target pixel point of the first image has been located in the second image via the final matching result, the electronic device can operate the first electronic device and the second electronic device synchronously. For example, it can click an input box in the first image and enter text while the same text is entered into the corresponding input box in the second image, and then further determine whether the text is displayed abnormally on either the first electronic device or the second electronic device.
It can be understood that, when the final matching result indicates that no corresponding point of the target pixel point exists in the second image, the website may have a compatibility issue.
As can be seen from Fig. 4, compared with the embodiment corresponding to Fig. 2, the flow 400 of the method for generating information in the present embodiment highlights the steps of generating a final matching result from the obtained matching results and performing a compatibility test on the website. The scheme described in the present embodiment can therefore incorporate more matching schemes, further improving the accuracy of the matching result determined for the target pixel point and helping to improve the efficiency of website compatibility testing.
With further reference to Fig. 5, as an implementation of the methods shown in the figures above, the present application provides an embodiment of an apparatus for generating information. This apparatus embodiment corresponds to the method embodiment shown in Fig. 2, and the apparatus may specifically be applied in various electronic devices.
As shown in Fig. 5, the apparatus 500 for generating information of the present embodiment includes: a matching unit 501, a first determination unit 502, a second determination unit 503, a third determination unit 504, and a first generation unit 505. The matching unit 501 is configured to match each first feature point in a first feature point set with each second feature point in a second feature point set to obtain a matched feature point pair set, where a first feature point is a feature point of the target image region included in the first image, the target image region includes the target pixel point, and a second feature point is a feature point of the second image. The first determination unit 502 is configured to determine the density with which the second feature points contained in the matched feature point pair set are distributed in image regions of the second image, and to determine the image region corresponding to the maximum density among the determined densities as the matched-feature-point dense region. The second determination unit 503 is configured to determine the second feature points contained in the matched-feature-point dense region. The third determination unit 504 is configured to determine the set of matched feature point pairs in the matched feature point pair set that contain the determined second feature points as the corrected matched feature point pair set. The first generation unit 505 is configured to generate a first matching result for the target pixel point based on the corrected matched feature point pair set and the target pixel point.
In the present embodiment, the matching unit 501 of the apparatus 500 for generating information may match each first feature point in the first feature point set with each second feature point in the second feature point set to obtain the matched feature point pair set. Here, a first feature point is a feature point of the target image region included in the first image, the target image region includes the target pixel point, and a second feature point is a feature point of the second image. The shape and size of the target image region may be preset. Illustratively, the target image region may be circular, for example a circle centered on the target pixel point (or on another pixel of the first image) with a radius of 0.5 times (or another multiple of) the width of the first image; the target image region may also be rectangular, for example a square centered on the target pixel point (or on another pixel of the first image) with a side length of 0.5 times (or another multiple of) the width of the first image.
In the present embodiment, based on the matched feature point pair set obtained by the matching unit 501, the first determination unit 502 may determine the density with which the second feature points contained in the set are distributed in image regions of the second image, and determine the image region corresponding to the maximum density among the determined densities as the matched-feature-point dense region. The shape and size of the image regions of the second image may be preset. Density may be characterized as the number of second feature points contained per unit area of an image region.
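As one possible sketch of this behaviour, the image regions of the second image may be taken as fixed-size sliding windows, with density measured as points per unit window area (the window size, step, and function names are assumptions of the example):

```python
def match_dense_region(second_points, img_w, img_h, win=100, step=50):
    """Slide a fixed-size window over the second image, measure density as
    the number of second feature points per unit area, and return the window
    with the highest density as the matched-feature-point dense region."""
    best, best_density = None, -1.0
    for y0 in range(0, max(1, img_h - win + 1), step):
        for x0 in range(0, max(1, img_w - win + 1), step):
            n = sum(1 for (x, y) in second_points
                    if x0 <= x < x0 + win and y0 <= y < y0 + win)
            density = n / float(win * win)
            if density > best_density:
                best, best_density = (x0, y0, x0 + win, y0 + win), density
    return best
```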
In the present embodiment, based on the matched-feature-point dense region obtained by the first determination unit 502, the second determination unit 503 may determine the second feature points contained in the matched-feature-point dense region.
In the present embodiment, based on the second feature points determined by the second determination unit 503, the third determination unit 504 may determine the set of matched feature point pairs in the matched feature point pair set that contain the determined second feature points as the corrected matched feature point pair set.
In the present embodiment, based on the corrected matched feature point pair set obtained by the third determination unit 504 and the target pixel point, the first generation unit 505 may generate the first matching result for the target pixel point.
In some optional realizations of the present embodiment, the apparatus further includes: a fourth determination unit (not shown), configured to take each pixel in a neighborhood of the target pixel point in the first image as a first seed pixel and, using a region growing algorithm, determine each aggregated region of first seed pixels that satisfies preset screening conditions as a first region; a fifth determination unit (not shown), configured to take each pixel in the second image as a second seed pixel and, using a region growing algorithm, determine each aggregated region of second seed pixels that satisfies the preset screening conditions as a second region; a sixth determination unit (not shown), configured to determine a second region satisfying at least one of the following matching conditions as a second region matching a first region: the difference between the compactness of the second region and the compactness of the first region is less than a preset compactness threshold; the difference between the aspect ratio of the second region and the aspect ratio of the first region is less than a preset aspect ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold; a seventh determination unit, configured to determine the combination of a first region and the second region matching that first region as a matched region pair; and a second generation unit, configured to generate a second matching result for the target pixel point based on the matched region pairs and the target pixel point.
Here, the neighborhood of the target pixel point in the first image is an image region containing the target pixel point, whose shape and size are preset. Illustratively, the neighborhood may be a rectangular or square image region centered on the target pixel point. The second matching result may be position information of the corresponding point (i.e., the matched point) of the target pixel point in the second image; it may also be information characterizing whether the second image contains a corresponding point of the target pixel point. The preset screening conditions are preset conditions for screening the aggregated regions so as to obtain the first regions. It can be understood that the aggregated regions obtained by the region growing algorithm may include regions caused by noise (whose area is too small) or by large background or blank areas (whose area is too large), and such regions are not helpful for shape matching. The preset screening conditions serve to reject these regions.
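A minimal region growing sketch illustrating how aggregated regions arise (4-connectivity and an intensity tolerance are assumptions of the example; the embodiment does not fix a particular growing criterion):

```python
from collections import deque

def grow_regions(img, tol=10):
    """4-connected region growing: each unvisited pixel seeds a region that
    absorbs neighbours whose intensity differs from the seed by at most tol.
    Returns a list of regions, each a list of (row, col) pixels."""
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    regions = []
    for r in range(h):
        for c in range(w):
            if seen[r][c]:
                continue
            seed, region, q = img[r][c], [], deque([(r, c)])
            seen[r][c] = True
            while q:
                y, x = q.popleft()
                region.append((y, x))
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if 0 <= ny < h and 0 <= nx < w and not seen[ny][nx] \
                            and abs(img[ny][nx] - seed) <= tol:
                        seen[ny][nx] = True
                        q.append((ny, nx))
            regions.append(region)
    return regions
```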
In some optional realizations of the present embodiment, the preset screening conditions include at least one of the following: the product of a first preset spacing value, the height of the first image, and the width of the first image is less than the number of pixels of the aggregated region; the width of the aggregated region is less than the product of the width of the first image and a second preset spacing value; the height of the aggregated region is less than the product of the height of the first image and the second preset spacing value.
Here, the aggregated region may be rectangular. The first preset spacing value and the second preset spacing value may be preset values characterizing the spacing between sub-images of the first image (for example, images of controls of the page presented by the electronic device). In practice, the first and second preset spacing values may be set empirically by a technician. For example, the first preset spacing value may be 0.01 and the second preset spacing value may be 0.3.
In some optional realizations of the present embodiment, a first word set is presented in the neighborhood of the target pixel point and a second word set is presented in the second image, and the apparatus further includes: an eighth determination unit (not shown), configured, for each first word in the first word set, to determine the second word in the second word set that matches that first word, and to determine the combination of that first word and its matched second word as a matched word pair; and a third generation unit (not shown), configured to generate a third matching result for the target pixel point based on the matched word pairs and the target pixel point.
Here, the first words and second words may be words on which operations such as copying can be performed directly; they may also be words blended into the image (for example, words that cannot be copied directly). A second word matching a first word may include, but is not limited to: a second word presented in the same color as the first word; a second word presented in the same font size as the first word; a second word presented in the same font as the first word.
In some optional realizations of the present embodiment, the eighth determination unit includes: a first determining module (not shown), configured to determine the four-corner code of the first word; a second determining module (not shown), configured to determine the similarity between each second word in the second word set and the first word; and a third determining module (not shown), configured to determine the second word whose four-corner code is identical to that of the first word and/or whose similarity is the largest as the second word matching the first word.
It should be noted that, because the resolution of an electronic device may be low, a font may be small, and so on, the recognition results of OCR technology carry a certain error rate. Even when recognition errs, there is a high probability that the four-corner code of the recognized result remains consistent with that of the original character. Therefore, using the four-corner code of a character (such as a Chinese character) as the matching basis can greatly improve the matching rate.
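The matching of the third determining module might be sketched as follows. The codes in the table are placeholders for illustration only, not the actual Four-Corner codes of these characters, and the similarity function is supplied by the caller:

```python
# Placeholder four-corner codes for illustration; a real implementation
# would use an actual Four-Corner code lookup table.
FOUR_CORNER = {"王": "1010", "玉": "1010", "主": "0010"}

def match_word(first, candidates, similarity):
    """Prefer a candidate whose four-corner code equals that of the first
    word; otherwise fall back to the candidate with maximum similarity."""
    code = FOUR_CORNER.get(first)
    same_code = [w for w in candidates
                 if code is not None and FOUR_CORNER.get(w) == code]
    if same_code:
        return same_code[0]
    return max(candidates, key=lambda w: similarity(first, w))
```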
In some optional realizations of the present embodiment, the apparatus further includes: a ninth determination unit (not shown), configured to perform a template matching operation on the second image using the neighborhood of the target pixel point, determine the similarity between each image region of the second image and the neighborhood, and determine the image region of the second image corresponding to the maximum similarity among the determined similarities as the matched image region; a tenth determination unit (not shown), configured to determine a selected pixel in the neighborhood and the matched pixel of the selected pixel in the matched image region; and a fourth generation unit (not shown), configured to generate a fourth matching result for the target pixel point based on the selected pixel, the matched pixel, and the target pixel point.
Here, the selected pixel may be any pixel in the neighborhood, and the matched pixel may be the pixel in the matched image region corresponding to the selected pixel. Illustratively, when the neighborhood is a rectangular region, the selected pixel may be the center of the neighborhood, and the matched pixel may be the center of the matched image region.
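A template-matching sketch using normalized cross-correlation is given below (the scoring function and the brute-force scan are assumptions of the example; a library routine such as OpenCV's `matchTemplate` would serve the same purpose):

```python
import numpy as np

def template_match(second_img, neighborhood):
    """Slide the neighbourhood over the second image, score each placement
    by normalized cross-correlation, and return the top-left corner (x, y)
    of the best-matching image region."""
    H, W = second_img.shape
    h, w = neighborhood.shape
    t = neighborhood - neighborhood.mean()
    best, best_score = (0, 0), -np.inf
    for y in range(H - h + 1):
        for x in range(W - w + 1):
            window = second_img[y:y + h, x:x + w]
            win = window - window.mean()
            denom = np.sqrt((win ** 2).sum() * (t ** 2).sum())
            score = (win * t).sum() / denom if denom else 0.0
            if score > best_score:
                best, best_score = (x, y), score
    return best
```

With the neighbourhood centered on the target pixel point, the matched pixel of the neighbourhood center is the returned corner offset by half the template size.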
In some optional realizations of the present embodiment, the apparatus further includes a fifth generation unit (not shown), configured to generate a final matching result based on the generated matching results. Here, the matching results include the first matching result and whichever of the following have been generated: the second matching result, the third matching result, and the fourth matching result. It can be understood that, when this step is performed, if only the first matching result has been generated (and not the second, third, or fourth matching results), the generated matching results include only the first matching result; if only the first and fourth matching results have been generated (and not the second and third), the generated matching results include only the first and fourth matching results.
In some optional realizations of the present embodiment, the first image is the image displayed by a first electronic device when the target webpage is presented on the first electronic device, and the second image is the image displayed by a second electronic device when the target webpage is presented on the second electronic device.
In some optional realizations of the present embodiment, the apparatus further includes a test unit (not shown), configured to perform a compatibility test on the website to which the target webpage belongs, based on the final matching result.
Here, the compatibility test may include, but is not limited to: browser compatibility testing (whether the program under test runs normally and its functions are usable on different browsers), screen size and resolution compatibility testing (whether the program under test displays normally at different resolutions), operating system compatibility testing (whether the program under test runs normally, its functions are usable, and its display is correct on different operating systems), and device model compatibility testing (for example, whether it runs normally or crashes on mainstream devices).
In the apparatus provided by the above embodiment of the present application, the matching unit 501 matches each first feature point in the first feature point set with each second feature point in the second feature point set to obtain the matched feature point pair set; the first determination unit 502 then determines the density with which the second feature points contained in the set are distributed in image regions of the second image, and determines the image region corresponding to the maximum density among the determined densities as the matched-feature-point dense region; afterwards, the second determination unit 503 determines the second feature points contained in the matched-feature-point dense region; next, the third determination unit 504 determines the set of matched feature point pairs containing the determined second feature points as the corrected matched feature point pair set; finally, the first generation unit 505 generates the first matching result for the target pixel point based on the corrected matched feature point pair set and the target pixel point, thereby improving the accuracy of the matching result determined for the target pixel point.
Referring now to Fig. 6, it shows a structural schematic diagram of a computer system 600 suitable for implementing the electronic device of the embodiments of the present application. The electronic device shown in Fig. 6 is only an example and should not impose any limitation on the functions or scope of use of the embodiments of the present application.
As shown in Fig. 6, the computer system 600 includes a central processing unit (CPU) 601, which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) 602 or a program loaded from a storage portion 608 into a random access memory (RAM) 603. The RAM 603 also stores various programs and data required for the operation of the system 600. The CPU 601, the ROM 602, and the RAM 603 are connected to one another via a bus 604. An input/output (I/O) interface 605 is also connected to the bus 604.
The following components are connected to the I/O interface 605: an input portion 606 including a keyboard, a mouse, and the like; an output portion 607 including a cathode ray tube (CRT), a liquid crystal display (LCD), a speaker, and the like; a storage portion 608 including a hard disk and the like; and a communication portion 609 including a network interface card such as a LAN card or a modem. The communication portion 609 performs communication processing via a network such as the Internet. A drive 610 is also connected to the I/O interface 605 as needed. A removable medium 611, such as a magnetic disk, an optical disc, a magneto-optical disc, or a semiconductor memory, is mounted on the drive 610 as needed, so that a computer program read from it can be installed into the storage portion 608 as needed.
In particular, according to embodiments of the present disclosure, the processes described above with reference to the flowcharts may be implemented as computer software programs. For example, an embodiment of the present disclosure includes a computer program product comprising a computer program carried on a computer-readable medium, the computer program containing program code for executing the methods shown in the flowcharts. In such an embodiment, the computer program may be downloaded and installed from a network via the communication portion 609 and/or installed from the removable medium 611. When the computer program is executed by the central processing unit (CPU) 601, the above-described functions defined in the methods of the present application are performed.
It should be noted that the computer-readable medium described in the present application may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. A computer-readable storage medium may be, for example, but is not limited to, an electric, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. More specific examples of the computer-readable storage medium may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present application, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device. In the present application, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device. The program code contained on a computer-readable medium may be transmitted by any appropriate medium, including but not limited to: wireless, wire, optical cable, RF, or any suitable combination of the above.
The flowcharts and block diagrams in the accompanying drawings illustrate the possible architectures, functions, and operations of systems, methods, and computer program products according to various embodiments of the present application. In this regard, each block in a flowchart or block diagram may represent a module, program segment, or portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur out of the order noted in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block of the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by dedicated hardware-based systems that perform the specified functions or operations, or by combinations of dedicated hardware and computer instructions.
The units described in the embodiments of the present application may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including a matching unit, a first determination unit, a second determination unit, a third determination unit, and a generation unit. The names of these units do not, under certain circumstances, limit the units themselves; for example, the first generation unit may also be described as "a unit that generates the first matching result for the target pixel point".
As another aspect, the present application also provides a computer-readable medium, which may be included in the electronic device described in the above embodiments, or may exist alone without being assembled into that electronic device. The computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: match each first feature point in a first feature point set with each second feature point in a second feature point set to obtain a matched feature point pair set, where a first feature point is a feature point of the target image region included in the first image, the target image region includes the target pixel point, and a second feature point is a feature point of the second image; determine the density with which the second feature points contained in the matched feature point pair set are distributed in image regions of the second image, and determine the image region corresponding to the maximum density among the determined densities as the matched-feature-point dense region; determine the second feature points contained in the matched-feature-point dense region; determine the set of matched feature point pairs in the matched feature point pair set that contain the determined second feature points as the corrected matched feature point pair set; and generate a first matching result for the target pixel point based on the corrected matched feature point pair set and the target pixel point.
The above description is only a preferred embodiment of the present application and an explanation of the technical principles applied. Those skilled in the art should understand that the scope of the invention involved in the present application is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalent features without departing from the above inventive concept, for example, technical solutions formed by replacing the above features with (but not limited to) technical features with similar functions disclosed in the present application.
Claims (20)
1. A method for generating information, comprising:
matching each first feature point in a first feature point set with each second feature point in a second feature point set to obtain a matched feature point pair set, wherein a first feature point is a feature point of a target image region included in a first image, the target image region includes a target pixel point, and a second feature point is a feature point of a second image;
determining the density with which the second feature points contained in the matched feature point pair set are distributed in image regions of the second image, and determining the image region corresponding to the maximum density among the determined densities as a matched-feature-point dense region;
determining the second feature points contained in the matched-feature-point dense region;
determining the set of matched feature point pairs in the matched feature point pair set that contain the determined second feature points as a corrected matched feature point pair set;
generating a first matching result for the target pixel point based on the corrected matched feature point pair set and the target pixel point.
2. The method according to claim 1, wherein the method further comprises:
taking each pixel in a neighborhood of the target pixel point in the first image as a first seed pixel and, using a region growing algorithm, determining as a first region the aggregation region of first seed pixels that satisfies preset screening conditions;
taking each pixel in the second image as a second seed pixel and, using the region growing algorithm, determining as second regions the aggregation regions of second seed pixels that satisfy the preset screening conditions;
determining a second region that satisfies at least one of the following matching conditions as a second region matching the first region: the difference between the compactness of the second region and the compactness of the first region is less than a preset compactness threshold; the difference between the aspect ratio of the second region and the aspect ratio of the first region is less than a preset aspect ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold;
determining the combination of the first region and its matching second region as a matching region pair;
generating a second matching result for the target pixel point based on the matching region pair and the target pixel point.
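The seed-pixel aggregation of claim 2 can be illustrated with a minimal region growing sketch. The 4-connected flood fill and the "same value as the seed" growth criterion are assumptions for illustration; the claim does not fix a particular growth rule.

```python
def grow_region(img, seed, visited):
    """4-connected region growing: collect the aggregation region of `seed`,
    i.e. all connected pixels sharing the seed's value."""
    h, w = len(img), len(img[0])
    target = img[seed[0]][seed[1]]
    stack, region = [seed], []
    while stack:
        r, c = stack.pop()
        if not (0 <= r < h and 0 <= c < w) or (r, c) in visited:
            continue
        if img[r][c] != target:
            continue  # pixel does not satisfy the growth criterion
        visited.add((r, c))
        region.append((r, c))
        # Explore the 4-connected neighbours of the accepted pixel.
        stack.extend([(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)])
    return region
```

Sharing one `visited` set across seeds ensures each pixel is assigned to at most one aggregation region, so iterating over all pixels as seeds yields a partition of the matching-value areas.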
3. The method according to claim 2, wherein the preset screening conditions include at least one of the following:
the product of a first preset ratio, the height of the first image, and the width of the first image is less than the number of pixels in the aggregation region;
the width of the aggregation region is less than the product of the width of the first image and a second preset ratio;
the height of the aggregation region is less than the product of the height of the first image and the second preset ratio.
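The three screening conditions of claim 3 amount to a simple predicate over an aggregation region. The function name and the concrete ratio values below are illustrative; the claim only requires "preset" ratios.

```python
def passes_screening(region, img_w, img_h,
                     min_area_ratio=0.001, max_extent_ratio=0.5):
    """Screening per claim 3: the region must contain more pixels than
    min_area_ratio * img_w * img_h, and its bounding-box width and height
    must stay below max_extent_ratio times the image width and height."""
    rows = [r for r, _ in region]
    cols = [c for _, c in region]
    width = max(cols) - min(cols) + 1
    height = max(rows) - min(rows) + 1
    return (len(region) > min_area_ratio * img_w * img_h
            and width < max_extent_ratio * img_w
            and height < max_extent_ratio * img_h)
```

The lower bound discards noise-sized regions; the upper bounds discard regions (e.g. page backgrounds) too large to be a useful local landmark.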
4. The method according to one of claims 1-3, wherein a first vocabulary set is presented in the neighborhood of the target pixel point, and a second vocabulary set is presented in the second image; and
the method further comprises:
for each first word in the first vocabulary set, determining a second word in the second vocabulary set that matches the first word, and determining the combination of the first word and its matching second word as a matching word pair;
generating a third matching result for the target pixel point based on the matching word pairs and the target pixel point.
5. The method according to claim 4, wherein determining the second word matching the first word comprises:
determining the Four-Corner code of the first word;
determining the similarity between each second word in the second vocabulary set and the first word;
determining, as the second word matching the first word, the second word whose Four-Corner code is identical to that of the first word and/or whose similarity is the largest.
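The word matching of claims 4-5 can be sketched as a code lookup with a similarity fallback. The `FOUR_CORNER` table below is a toy stand-in with made-up codes; a real system would derive the Four-Corner code of each Chinese character from its corner stroke shapes, and the similarity measure is likewise an assumption.

```python
from difflib import SequenceMatcher

# Toy stand-in for a Four-Corner code table (codes here are illustrative,
# not the real Four-Corner codes of these characters).
FOUR_CORNER = {"王": "1010", "玉": "1010", "木": "4090"}

def match_word(word, candidates):
    """Return the candidate whose Four-Corner code equals that of `word`,
    falling back to the candidate with the highest string similarity
    (claim 5's "identical code and/or maximum similarity")."""
    code = FOUR_CORNER.get(word)
    if code:
        same = [c for c in candidates if FOUR_CORNER.get(c) == code]
        if same:
            return same[0]
    return max(candidates, key=lambda c: SequenceMatcher(None, word, c).ratio())
```

Matching on shape codes is robust to OCR confusions between visually similar characters, which is presumably why the Four-Corner code is used alongside plain similarity.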
6. The method according to one of claims 1-5, wherein the method further comprises:
performing a template matching operation on the second image using the neighborhood of the target pixel point, determining the similarity between image regions of the second image and the neighborhood, and determining the image region of the second image with the maximum similarity among the determined similarities as a matching image region;
determining a selected pixel point in the neighborhood, and determining a matched pixel point for the selected pixel point in the matching image region;
generating a fourth matching result for the target pixel point based on the selected pixel point, the matched pixel point, and the target pixel point.
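The template matching of claim 6 can be sketched as an exhaustive sliding-window search. Using the sum of squared differences as the (inverse) similarity is an assumption; the claim only speaks of "similarity", and a production system would more likely use an optimized routine such as normalized cross-correlation.

```python
import numpy as np

def best_match(image, template):
    """Slide `template` over `image` and return the top-left corner of the
    window with the lowest sum of squared differences, i.e. the matching
    image region of maximum similarity."""
    ih, iw = image.shape
    th, tw = template.shape
    best_ssd, best_pos = None, None
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            ssd = float(np.sum((image[r:r + th, c:c + tw] - template) ** 2))
            if best_ssd is None or ssd < best_ssd:
                best_ssd, best_pos = ssd, (r, c)
    return best_pos
```

Once the matching image region is found, a selected pixel in the neighborhood maps to its matched pixel by the same offset within the window.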
7. The method according to one of claims 1-6, wherein the method further comprises:
generating a final matching result based on the generated matching results.
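Claim 7 leaves the fusion strategy open. A hypothetical minimal reading, shown purely for illustration, is to average the candidate positions proposed by the individual matchers of claims 1, 2, 4, and 6:

```python
def fuse_matches(candidates):
    """Fuse per-matcher results into a final matching result by averaging.

    candidates: list of (x, y) matched positions, e.g. one per matching
    result produced by the feature-point, region, word, and template
    matchers.
    """
    xs = [x for x, _ in candidates]
    ys = [y for _, y in candidates]
    return (sum(xs) / len(xs), sum(ys) / len(ys))
```

Weighted voting or outlier-rejecting medians would be equally valid fusions under the claim's wording.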
8. The method according to claim 7, wherein the first image is the image displayed by a first electronic device when a target webpage is presented on the first electronic device, and the second image is the image displayed by a second electronic device when the target webpage is presented on the second electronic device.
9. The method according to claim 8, wherein the method further comprises:
performing a compatibility test on the website to which the target webpage belongs, based on the final matching result.
10. An apparatus for generating information, comprising:
a matching unit, configured to match each first feature point in a first feature point set with each second feature point in a second feature point set to obtain a matching feature point pair set, wherein the first feature points are feature points of a target image region included in a first image, the target image region includes a target pixel point, and the second feature points are feature points of a second image;
a first determination unit, configured to determine the density with which the second feature points contained in the matching feature point pair set are distributed in image regions of the second image, and determine the image region corresponding to the maximum density among the determined densities as a matching feature point dense region;
a second determination unit, configured to determine the second feature points contained in the matching feature point dense region;
a third determination unit, configured to determine, as a revised matching feature point pair set, the set of matching feature point pairs in the matching feature point pair set that include the determined second feature points;
a first generation unit, configured to generate a first matching result for the target pixel point based on the revised matching feature point pair set and the target pixel point.
11. The apparatus according to claim 10, wherein the apparatus further comprises:
a fourth determination unit, configured to take each pixel in a neighborhood of the target pixel point in the first image as a first seed pixel and, using a region growing algorithm, determine as a first region the aggregation region of first seed pixels that satisfies preset screening conditions;
a fifth determination unit, configured to take each pixel in the second image as a second seed pixel and, using the region growing algorithm, determine as second regions the aggregation regions of second seed pixels that satisfy the preset screening conditions;
a sixth determination unit, configured to determine a second region that satisfies at least one of the following matching conditions as a second region matching the first region: the difference between the compactness of the second region and the compactness of the first region is less than a preset compactness threshold; the difference between the aspect ratio of the second region and the aspect ratio of the first region is less than a preset aspect ratio threshold; the similarity between the second region and the first region is greater than a preset first similarity threshold;
a seventh determination unit, configured to determine the combination of the first region and its matching second region as a matching region pair;
a second generation unit, configured to generate a second matching result for the target pixel point based on the matching region pair and the target pixel point.
12. The apparatus according to claim 11, wherein the preset screening conditions include at least one of the following:
the product of a first preset ratio, the height of the first image, and the width of the first image is less than the number of pixels in the aggregation region;
the width of the aggregation region is less than the product of the width of the first image and a second preset ratio;
the height of the aggregation region is less than the product of the height of the first image and the second preset ratio.
13. The apparatus according to one of claims 10-12, wherein a first vocabulary set is presented in the neighborhood of the target pixel point, and a second vocabulary set is presented in the second image; and
the apparatus further comprises:
an eighth determination unit, configured to determine, for each first word in the first vocabulary set, a second word in the second vocabulary set that matches the first word, and determine the combination of the first word and its matching second word as a matching word pair;
a third generation unit, configured to generate a third matching result for the target pixel point based on the matching word pairs and the target pixel point.
14. The apparatus according to claim 13, wherein the eighth determination unit comprises:
a first determining module, configured to determine the Four-Corner code of the first word;
a second determining module, configured to determine the similarity between each second word in the second vocabulary set and the first word;
a third determining module, configured to determine, as the second word matching the first word, the second word whose Four-Corner code is identical to that of the first word and/or whose similarity is the largest.
15. The apparatus according to one of claims 10-14, wherein the apparatus further comprises:
a ninth determination unit, configured to perform a template matching operation on the second image using the neighborhood of the target pixel point, determine the similarity between image regions of the second image and the neighborhood, and determine the image region of the second image with the maximum similarity among the determined similarities as a matching image region;
a tenth determination unit, configured to determine a selected pixel point in the neighborhood and determine a matched pixel point for the selected pixel point in the matching image region;
a fourth generation unit, configured to generate a fourth matching result for the target pixel point based on the selected pixel point, the matched pixel point, and the target pixel point.
16. The apparatus according to one of claims 10-15, wherein the apparatus further comprises:
a fifth generation unit, configured to generate a final matching result based on the generated matching results.
17. The apparatus according to claim 16, wherein the first image is the image displayed by a first electronic device when a target webpage is presented on the first electronic device, and the second image is the image displayed by a second electronic device when the target webpage is presented on the second electronic device.
18. The apparatus according to claim 17, wherein the apparatus further comprises:
a test unit, configured to perform a compatibility test on the website to which the target webpage belongs, based on the final matching result.
19. An electronic device, comprising:
one or more processors; and
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method according to any one of claims 1-9.
20. A computer-readable storage medium on which a computer program is stored, wherein the program, when executed by a processor, implements the method according to any one of claims 1-9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810088070.5A CN108182457B (en) | 2018-01-30 | 2018-01-30 | Method and apparatus for generating information |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810088070.5A CN108182457B (en) | 2018-01-30 | 2018-01-30 | Method and apparatus for generating information |
Publications (2)
Publication Number | Publication Date |
---|---|
CN108182457A true CN108182457A (en) | 2018-06-19 |
CN108182457B CN108182457B (en) | 2022-01-28 |
Family
ID=62551752
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810088070.5A Active CN108182457B (en) | 2018-01-30 | 2018-01-30 | Method and apparatus for generating information |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN108182457B (en) |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102034235A (en) * | 2010-11-03 | 2011-04-27 | 山西大学 | Rotary model-based fisheye image quasi dense corresponding point matching diffusion method |
CN103020945A (en) * | 2011-09-21 | 2013-04-03 | 中国科学院电子学研究所 | Remote sensing image registration method of multi-source sensor |
CN103473565A (en) * | 2013-08-23 | 2013-12-25 | 华为技术有限公司 | Image matching method and device |
- 2018-01-30: application CN201810088070.5A filed in China; granted as patent CN108182457B (status: Active)
Non-Patent Citations (2)
Title |
---|
Raffay Hamid et al.: "Dense Non-Rigid Point-Matching Using Random Projections", 2013 IEEE Conference on Computer Vision and Pattern Recognition * |
蔡龙洲 (Cai Longzhou) et al.: "Research on Dense Matching Methods Based on UAV Images" (基于无人机图像的密集匹配方法研究), 《测绘科学》 (Science of Surveying and Mapping) * |
Cited By (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11212572B2 (en) * | 2018-02-06 | 2021-12-28 | Nippon Telegraph And Telephone Corporation | Content determination device, content determination method, and program |
CN110648382A (en) * | 2019-09-30 | 2020-01-03 | 北京百度网讯科技有限公司 | Image generation method and device |
CN110648382B (en) * | 2019-09-30 | 2023-02-24 | 北京百度网讯科技有限公司 | Image generation method and device |
CN111079730A (en) * | 2019-11-20 | 2020-04-28 | 北京云聚智慧科技有限公司 | Method for determining area of sample image in interface image and electronic equipment |
CN111079730B (en) * | 2019-11-20 | 2023-12-22 | 北京云聚智慧科技有限公司 | Method for determining area of sample graph in interface graph and electronic equipment |
CN112569591A (en) * | 2021-03-01 | 2021-03-30 | 腾讯科技(深圳)有限公司 | Data processing method, device and equipment and readable storage medium |
CN117351438A (en) * | 2023-10-24 | 2024-01-05 | 武汉无线飞翔科技有限公司 | Real-time vehicle position tracking method and system based on image recognition |
CN117351438B (en) * | 2023-10-24 | 2024-06-04 | 武汉无线飞翔科技有限公司 | Real-time vehicle position tracking method and system based on image recognition |
Also Published As
Publication number | Publication date |
---|---|
CN108182457B (en) | 2022-01-28 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN108898186B (en) | Method and device for extracting image | |
US20230081645A1 (en) | Detecting forged facial images using frequency domain information and local correlation | |
CN108304835A (en) | Character detection method and device | |
CN108182457A (en) | Method and apparatus for generating information | |
CN108898185A (en) | Method and apparatus for generating image recognition model | |
CN108197618B (en) | Method and device for generating human face detection model | |
CN108038880A (en) | Method and apparatus for processing images | |
CN107911753A (en) | Method and apparatus for adding a digital watermark to video | |
CN108846440A (en) | Image processing method and device, computer-readable medium and electronic equipment | |
CN109711508B (en) | Image processing method and device | |
CN109308681A (en) | Image processing method and device | |
CN108229485A (en) | Method and apparatus for testing a user interface | |
CN108830329A (en) | Image processing method and device | |
CN108734185A (en) | Image verification method and apparatus | |
CN109344752A (en) | Method and apparatus for processing mouth images | |
CN109118456A (en) | Image processing method and device | |
CN109344762A (en) | Image processing method and device | |
CN109903392A (en) | Augmented reality method and apparatus | |
CN108171191A (en) | Method and apparatus for detecting faces | |
CN108171211A (en) | Liveness detection method and device | |
CN108491825A (en) | Information generation method and device | |
CN108882025A (en) | Video frame processing method and apparatus | |
CN109901988A (en) | Page element locating method and device for automated testing | |
CN108509994A (en) | Character image clustering method and device | |
CN110427915A (en) | Method and apparatus for outputting information | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||