
CN108803991A - Object screening method and device, computer readable storage medium and electronic terminal - Google Patents


Info

Publication number
CN108803991A
CN108803991A
Authority
CN
China
Prior art keywords
target
objects
operation information
screening
target area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810601461.2A
Other languages
Chinese (zh)
Inventor
赵政
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Leichen Technology Co ltd
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Original Assignee
Guangzhou Leichen Technology Co ltd
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Leichen Technology Co ltd, Guangzhou Shiyuan Electronics Thecnology Co Ltd filed Critical Guangzhou Leichen Technology Co ltd
Priority to CN201810601461.2A priority Critical patent/CN108803991A/en
Publication of CN108803991A publication Critical patent/CN108803991A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 3/00: Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F 3/01: Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F 3/048: Interaction techniques based on graphical user interfaces [GUI]
    • G06F 3/0484: Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F 3/04842: Selection of displayed objects or displayed text elements
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06F: ELECTRIC DIGITAL DATA PROCESSING
    • G06F 9/00: Arrangements for program control, e.g. control units
    • G06F 9/06: Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
    • G06F 9/44: Arrangements for executing specific programs
    • G06F 9/451: Execution arrangements for user interfaces

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)

Abstract

The embodiment of the invention provides an object screening method, an object screening device, a computer-readable storage medium, and an electronic terminal, relating to the technical field of communication. The method first acquires the feature of the object indicated by specified operation information to obtain a target feature; it then screens the objects contained in a target area to obtain the objects matching the target feature. The operation process of object screening is thereby simplified, user operations are reduced, and labor cost is saved.

Description

Object screening method and device, computer readable storage medium and electronic terminal
[ technical field ]
The present invention relates to the field of communications technologies, and in particular, to a method and an apparatus for screening objects, a computer-readable storage medium, and an electronic terminal.
[ background of the invention ]
When selecting a plurality of objects such as graphics and icons on the display interface of an existing display terminal (e.g., a computer or a smart tablet), a keyboard and a mouse usually have to be used together. To screen out a certain class of objects, the user must press a shortcut key on the keyboard while clicking the mouse, selecting the objects of that class one by one.
In the process of implementing the present invention, the inventor found that in existing object screening methods, selecting each target object requires the user to operate the mouse and keyboard once; when a display terminal screens a certain class of target objects out of a plurality of objects, the user must therefore perform many operations, which is time-consuming and laborious.
[ summary of the invention ]
In view of this, embodiments of the present invention provide an object screening method, an object screening apparatus, a computer-readable storage medium, and an electronic terminal, which can automatically screen the objects matching a target feature out of a plurality of objects, thereby simplifying the operation process, reducing user operations, and saving labor cost.
In one aspect, the present invention provides an object screening method, including:
acquiring the characteristics of an object indicated by the specified operation information to obtain target characteristics;
and screening the objects contained in the target area to obtain the objects matched with the target characteristics.
As to the above-mentioned aspect and any possible implementation manner, there is further provided an implementation manner, before the screening the objects included in the target area to obtain the object matching the target feature, the method further includes:
outputting first prompt information to prompt a user to select a target area on a current display interface;
and determining the target area according to the operation information of the user for the first prompt information.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where the determining the target area according to the operation information of the user for the first prompt information includes:
acquiring operation information for the first prompt information;
and when the operation information aiming at the first prompt information is a framing motion track, determining a framing area framed by the framing motion track as the target area.
As to the above-mentioned aspect and any possible implementation manner, there is further provided an implementation manner, before the screening the objects included in the target area to obtain the object matching the target feature, the method further includes:
acquiring all currently displayed areas as the target area; or,
and acquiring a partial area where the object indicated by the specified operation information is located as the target area.
The above-described aspects and any possible implementations further provide an implementation, and the method further includes:
and setting all objects contained in the target area to be in a selected state.
The above-described aspect and any possible implementation manner further provide an implementation manner, where setting all objects included in the target region to a selected state includes:
changing the current display states of all objects contained in the target area into a preset selected display state; or,
and outputting second prompt information, wherein the second prompt information is used for indicating that all objects contained in the target area are in a selected state.
The above-described aspects and any possible implementations further provide an implementation, and the method further includes:
and outputting third prompt information, wherein the third prompt information is used for prompting the screening result of the target object.
The above-described aspect and any possible implementation manner further provide an implementation manner, where outputting the third prompt information includes:
changing the display state of the target object; or,
outputting a list containing only the target object; or,
erasing all non-target objects in the target area; or,
an image is generated that contains only the target object.
As for the above-mentioned aspect and any possible implementation manner, there is further provided an implementation manner, where the obtaining the feature of the object indicated by the designated operation information, and the obtaining the target feature includes:
acquiring a plurality of characteristics of an object indicated by the specified operation information;
outputting fourth prompt information to prompt a user to select among the plurality of features;
and determining the target feature in the plurality of features according to the operation information of the user for the fourth prompt information.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where when the specified operation information indicates at least two objects, the obtaining a feature of the object indicated by the specified operation information to obtain a target feature includes:
acquiring the characteristics of each object indicated by the specified operation information;
and determining the same characteristic of each object as a target characteristic according to the characteristic of each object indicated by the specified operation information.
The foregoing aspect and any possible implementation manner further provide an implementation manner, where when the specified operation information indicates at least two objects, the obtaining a feature of the object indicated by the specified operation information to obtain a target feature includes:
acquiring relative position information among the objects indicated by the specified operation information and characteristics of the objects;
and determining the relative position relation and the characteristics of each object as target characteristics.
In a second aspect, the present invention provides an object screening apparatus, the apparatus comprising:
the acquisition unit is used for acquiring the characteristics of the object indicated by the specified operation information to obtain target characteristics;
and the screening unit is used for screening the objects contained in the target area to obtain the objects matched with the target characteristics.
In a third aspect, the invention provides a computer-readable storage medium having a computer program stored thereon, wherein the program, when executed by a processor, implements the method of any of the first aspects.
In a fourth aspect, the present invention provides an electronic terminal, comprising a processor, a memory and an input/output interface; the processor, the memory and the input/output interface are communicated through a bus; the memory is configured with computer code that the processor can call to control an input output interface;
the computer code, when executed by the processor, causes the electronic terminal to implement the method of any of the first aspect.
According to the technical scheme provided by the embodiment of the invention, the feature of the object indicated by the specified operation information is taken as the target feature, and each object contained in the target area is screened against the target feature to obtain the objects matching it. In this scheme, to screen any class of objects the user only needs to perform the specified operation on any one object of that class; this triggers the execution subject to acquire that object's features as the target features for screening. With the target features as the screening condition, the objects matching them can be screened automatically out of all objects contained in the target area, without the user selecting them one by one with a mouse and keyboard. The operation process of object screening is thus simplified, user operations are reduced, labor cost is saved, and the efficiency of object screening is improved to a certain extent.
[ description of the drawings ]
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without creative efforts.
Fig. 1 is a schematic flow chart of an object screening method according to an embodiment of the present invention;
FIG. 2 is a schematic flow chart of another object screening method according to an embodiment of the present invention;
FIG. 3 is a schematic flow chart of another object screening method according to an embodiment of the present invention;
FIG. 4 is a schematic flow chart of another object screening method according to an embodiment of the present invention;
FIG. 5 is a flow chart for implementing step 102 according to an embodiment of the present invention;
fig. 6 is a schematic diagram illustrating a display manner of a fourth prompt message according to an embodiment of the present invention;
FIG. 7 is a diagram of a display interface including a plurality of objects according to an embodiment of the present invention;
FIG. 8 is a block diagram of an object screening apparatus according to an embodiment of the present invention;
fig. 9 is a functional block diagram of an electronic terminal according to an embodiment of the present invention.
[ detailed description of the embodiments ]
For better understanding of the technical solutions of the present invention, the following detailed descriptions of the embodiments of the present invention are provided with reference to the accompanying drawings.
It should be understood that the described embodiments are only some embodiments of the invention, and not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The terminology used in the embodiments of the invention is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention. As used in the examples of the present invention and the appended claims, the singular forms "a," "an," and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be understood that the term "and/or" as used herein merely describes an association between objects, indicating that three relationships may exist; e.g., A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
It should be understood that although the terms first, second, etc. may be used to describe the prompting messages in the embodiments of the present invention, the prompting messages should not be limited to these terms. These terms are only used to distinguish the hints information from one another. For example, the first prompt message may also be referred to as the second prompt message, and similarly, the second prompt message may also be referred to as the first prompt message, without departing from the scope of embodiments of the present invention.
The word "if" as used herein may be interpreted as "at … …" or "when … …" or "in response to a determination" or "in response to a detection", depending on the context. Similarly, the phrases "if determined" or "if detected (a stated condition or event)" may be interpreted as "when determined" or "in response to a determination" or "when detected (a stated condition or event)" or "in response to a detection (a stated condition or event)", depending on the context.
Example one
An object screening method is provided in an embodiment of the present invention, please refer to fig. 1, which is a schematic flow chart of the object screening method provided in the embodiment of the present invention, as shown in fig. 1, the method includes the following steps:
102. and acquiring the characteristics of the object indicated by the specified operation information to obtain the target characteristics.
The specified operation information may include, but is not limited to: an operation instruction that triggers extraction of an object's features, and the object indicated by the specified operation information.
It should be noted that the operation instruction triggering feature extraction is preset by the execution subject. It may be, but is not limited to: a single-click instruction, a double-click instruction, a click on a selection button, or a selection-box instruction. The object indicated by the specified operation information is the object that responds to the preset operation instruction.
For example, suppose the preset operation instruction that triggers feature extraction is a single click. When a user clicks an object A in the current display interface with a mouse, keyboard, or touch screen, the execution subject acquires the user's operation information, which comprises the click instruction and object A. Having determined that the acquired click instruction is the preset trigger, the execution subject responds by extracting the features of object A to obtain the target features.
104. And screening the objects contained in the target area to obtain the objects matched with the target characteristics.
The target area is the screening range within which the execution subject performs object screening.
According to the technical scheme provided by the embodiment of the invention, the feature of the object indicated by the specified operation information is taken as the target feature, and each object contained in the target area is screened against the target feature to obtain the objects matching it. In this scheme, to screen any class of objects the user only needs to perform the specified operation on any one object of that class; this triggers the execution subject to acquire that object's features as the target features for screening. With the target features as the screening condition, the objects matching them can be screened automatically out of all objects contained in the target area, without the user selecting them one by one with a mouse and keyboard. The operation process of object screening is thus simplified, user operations are reduced, labor cost is saved, and the efficiency of object screening is improved to a certain extent.
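As an illustrative sketch only (the `Obj` structure and feature names below are assumptions for illustration, not the patent's implementation), the two steps above could be realized as:

```python
from dataclasses import dataclass

@dataclass
class Obj:
    features: dict  # hypothetical feature set, e.g. {"color": "red", "shape": "circle"}

def extract_target_features(indicated: Obj) -> dict:
    # Step 102: the indicated object's features become the target features.
    return dict(indicated.features)

def screen(target_area: list, target: dict) -> list:
    # Step 104: keep only the objects matching every target feature.
    return [o for o in target_area
            if all(o.features.get(k) == v for k, v in target.items())]

# The user clicks a red circle; every red circle in the target area matches.
clicked = Obj({"color": "red", "shape": "circle"})
area = [clicked,
        Obj({"color": "red", "shape": "circle"}),
        Obj({"color": "blue", "shape": "circle"})]
matches = screen(area, extract_target_features(clicked))
```

One click thus selects the whole class, rather than one object per mouse-and-keyboard operation.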
Further, the target area used for object screening may be a screening area selected by the user, or an area automatically assigned by the execution subject. For these two situations, the invention provides the following implementations.
In the first implementation, the user selects the target area used for screening; its flowchart is shown in fig. 2. Before screening each object included in the target area to obtain the objects matching the target feature, the method further includes:
and A103-1, outputting first prompt information to prompt a user to select a target area on the current display interface.
And A103-2, determining the target area according to the operation information of the user for the first prompt information.
In a specific implementation process, the target area may be determined from the user's operation information for the first prompt information: the operation information for the first prompt information is acquired, and when it is a framing motion track, the area framed by that track is determined as the target area.
Specifically, after the first prompt information is output, the execution subject collects the framing motion track that the user performs through an input device such as a mouse or keyboard, obtains the track's starting point and end point, and determines the framed area from the positions of these two points and a preset framing algorithm, thereby determining the target area.
For example, the operation on the first prompt information may be clicking, dragging, and the like.
For example, the preset framing algorithm includes, but is not limited to: taking the starting point and the end point as the two endpoints of a line segment and, with that segment as a diameter and its midpoint as the centre, framing a circular area; or taking the segment as a diagonal and framing a rectangular area.
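The two example framing algorithms can be sketched as follows; this is an illustrative geometry sketch under the assumptions above (2D points as `(x, y)` tuples), not the patented code:

```python
import math

def circle_from_diameter(start, end):
    # The track's start and end points are the two ends of a diameter;
    # the segment's midpoint is the circle's centre.
    cx, cy = (start[0] + end[0]) / 2, (start[1] + end[1]) / 2
    return (cx, cy, math.dist(start, end) / 2)

def rect_from_diagonal(start, end):
    # The track's start and end points are opposite corners of a rectangle.
    x0, x1 = sorted((start[0], end[0]))
    y0, y1 = sorted((start[1], end[1]))
    return (x0, y0, x1, y1)

def in_circle(p, circle):
    cx, cy, r = circle
    return math.dist(p, (cx, cy)) <= r

def in_rect(p, rect):
    x0, y0, x1, y1 = rect
    return x0 <= p[0] <= x1 and y0 <= p[1] <= y1
```

The containment tests decide which displayed objects fall inside the framed target area.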
It should be noted here that outputting the first prompt information, which prompts the user to select a region in the current display interface, lets the user choose the region for object screening according to his own needs. The target region for object screening therefore better meets the user's requirements, which in turn improves the efficiency of object screening to a certain extent.
In the second implementation, the execution subject automatically assigns the target area used for screening; its flowchart is shown in fig. 3. Before screening each object included in the target area to obtain the objects matching the target feature, the method further includes:
and B103, acquiring all currently displayed areas as target areas.
In this implementation, the execution subject defaults to taking the entire currently displayed area as the target area. Operation information from the user setting an area for object screening need not be further acquired, which reduces user operations, further simplifies the screening steps to a certain extent, and saves labor cost.
In the third implementation, the execution subject again automatically assigns the target area used for screening; its flowchart is shown in fig. 4. Before screening each object included in the target area to obtain the objects matching the target feature, the method further includes:
and C103, acquiring a partial area where the object indicated by the specified operation information is positioned as a target area.
In this implementation, the execution subject defaults to taking the partial area where the object indicated by the specified operation information is located as the target area. Operation information from the user setting an area for object screening need not be further acquired, which reduces user operations, further simplifies the screening steps to a certain extent, and saves labor cost. In addition, using a partial area as the target area limits the screening range to a relatively small region and reduces the number of objects to be screened, thereby improving the speed and efficiency of object screening.
Regarding how the partial area where the indicated object is located is acquired, the present invention provides two embodiments.
in a specific embodiment, the execution main body sets the whole display area of the display interface to a plurality of sub-areas, acquires the coordinates of the object indicated by the designated operation information in the display interface, compares the coordinates of the object in the display interface with the respective corresponding coordinate range of each sub-area, and determines the sub-area corresponding to the coordinate range where the coordinates of the object are located as the target area. In this embodiment, it is not necessary to further acquire the operation information of the user for the region set for object screening, so that the user's operations can be reduced, the operation steps for object screening can be further simplified to a certain extent, and the labor cost can be saved. In addition, the partial area is used as the target area, the range for executing object screening is limited to a relatively small range, the number of objects for executing screening is reduced to a certain extent, and therefore the speed and the efficiency of object screening are improved.
In another possible implementation, the partial region where the indicated object is located may be a region obtained by the execution subject radiating outward from a reference point at the indicated object's position. Radiating outward from the reference point may mean taking the indicated object's position as the centre of a circle and a specified distance as its radius; alternatively, the indicated object's position may be taken as a vertex from which a first distance and a second distance are radiated along a first direction and a second direction respectively, where the included angle between the two directions is greater than 0° and less than 180°. It should be noted that these two radiation modes are only two specific implementations provided by the present invention; the invention does not particularly limit how the region is radiated from the reference point at the indicated object's position. With the region obtained this way as the target area, operation information from the user setting an area for object screening need not be further acquired, which reduces user operations, further simplifies the screening steps to a certain extent, and saves labor cost. In addition, using a partial area as the target area limits the screening range to a relatively small region and reduces the number of objects to be screened, thereby improving the speed and efficiency of object screening.
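The first radiation mode (circle of a specified radius around the indicated object) can be sketched as follows; object positions as `(x, y)` tuples are an assumption for illustration:

```python
import math

def objects_within_radius(ref, objects, radius):
    # Radiate outward from the indicated object's position `ref` by the
    # specified distance; objects inside the circle form the target area.
    return [p for p in objects if math.dist(ref, p) <= radius]
```

The wedge-shaped second mode would replace the distance test with an additional angular check between the two directions.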
In order to facilitate the user to further check which objects need to be screened, the invention further provides that all the objects contained in the target area are set to be in a selected state, so that all the objects contained in the target area are distinguished from all the objects contained in the non-target area, and the user can quickly know which objects need to be screened by the execution main body.
Specifically, setting all the objects included in the target area to the selected state may mean changing their current display states to a preset selected display state, or outputting second prompt information indicating that all objects contained in the target area are in the selected state. The preset selected display state includes, but is not limited to, highlighting all the objects, making them blink, shading them, etc. The second prompt information may be a dialog box prompting that all objects have been set to the selected state, or a sound indicating the same.
Further, in order to facilitate the user to view the object screening result, the present invention further provides that after step 104 is executed, third prompt information for prompting the screening result of the target object may be output.
The target object is an object matched with the target feature in the target area.
Specifically, the present invention provides the following several implementation manners for outputting the third prompt information for prompting the screening result of the target object:
and changing the display state of the target object. Specifically, the normal display state of the target object obtained by screening is changed into a highlight display state; or, changing the normal display state of the screened target object into a flicker display state;
alternatively, a list containing only the target objects is output.
Alternatively, all non-target objects in the target area are erased. Wherein the non-target object is an object in the target region that does not match the target feature. Specifically, erasing all the non-target objects in the target area may be to hide the non-target objects in the target area and display only the target objects.
Alternatively, an image containing only the target object is generated. Specifically, an image including only the target object may be produced according to the screening result, and the execution subject may call up and display this image to prompt the user with the screening result of the target object.
Further, so that the screened objects better meet the user's needs, the present invention provides an implementation of step 102, whose flowchart is shown in fig. 5. In this implementation, acquiring the feature of the object indicated by the specified operation information to obtain the target feature includes:
1021. a plurality of characteristics of an object indicated by the specified operation information are acquired.
1022. And outputting fourth prompt information for prompting the user to select among the plurality of features.
1023. And determining the target feature from the plurality of features according to the operation information of the user for the fourth prompt information.
Please refer to fig. 6, a schematic diagram of a display manner of the fourth prompt information according to this embodiment. In a specific scenario, the user double-clicks an object A in the display interface; the acquired specified operation information (a double click on object A) triggers the execution subject to acquire the features of the indicated object. The execution subject extracts several features of object A, such as its color, shape, name, and extension, generates an editable feature list from them, and outputs the list. The user edits the list with a mouse, keyboard, or touch screen and selects one or more features from it, and the execution subject determines the selected feature(s) as the target features according to the user's operation information for the editable feature list.
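Steps 1021-1023 can be sketched as follows; the feature names and dict representation are assumptions for illustration, not the patent's data model:

```python
def list_features(obj_features: dict) -> list:
    # Steps 1021-1022: extract the indicated object's feature names and
    # present them as a list (on screen this list would be editable).
    return sorted(obj_features)

def select_target_features(obj_features: dict, chosen) -> dict:
    # Step 1023: the user's selection among the listed features becomes
    # the target feature set used for screening.
    return {k: obj_features[k] for k in chosen if k in obj_features}

feats = {"color": "red", "shape": "circle", "name": "a", "extension": "png"}
target = select_target_features(feats, ["color", "extension"])
```

Screening then matches only on the chosen features, so the result tracks the user's intent rather than every attribute of the clicked object.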
In addition, the specified operation information in step 102 may indicate only one object or a plurality of objects. For the case where the specified operation information indicates at least two objects, the present invention further provides two implementations of determining the target feature.
In a specific implementation process, when the specified operation information indicates at least two objects, the features of each object indicated by the specified operation information are acquired; then the features shared by all of the indicated objects are determined as the target features.
For example, the specified operation information indicates an object C and an object D, wherein the features of object C include: the color is red, the name is 1, the type is file, and the extension is doc; the features of object D likewise include the color red and the name 1, among other features. The execution subject acquires the features of object C and the features of object D and compares them to obtain the features the two objects have in common, namely that the color is red and the name is 1. These two features, color red and name 1, are therefore used as the target features, and the execution subject screens out from the target area all objects whose color is red and whose name is 1.
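The common-feature implementation above amounts to intersecting the feature sets of the indicated objects. A minimal sketch, with objects represented as plain dicts (an assumption for illustration):

```python
def common_features(*feature_dicts):
    """Return the key/value pairs shared by every indicated object."""
    common = dict(feature_dicts[0])
    for features in feature_dicts[1:]:
        # Keep only entries that every subsequent object agrees on.
        common = {k: v for k, v in common.items() if features.get(k) == v}
    return common

obj_c = {"color": "red", "name": "1", "type": "file", "extension": "doc"}
obj_d = {"color": "red", "name": "1", "type": "folder"}
common_features(obj_c, obj_d)  # {"color": "red", "name": "1"}
```

The result matches the example in the text: only the color (red) and the name (1) survive as target features.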
In another specific implementation process, the relative position information between the objects indicated by the specified operation information and the features of each object may also be acquired, and then the relative position information and the features of each object are determined as the target features.
For example, please refer to fig. 7, which is a schematic diagram of a display interface containing a plurality of objects according to the present invention. On an intelligent whiteboard, the specified operation information indicates an object E and an object F, where object E is a straight line, object F is a straight line, and the relative position information of object E and object F is that the included angle between them is 60 degrees. The execution subject acquires the relative position (an included angle of 60 degrees) and the shape of each object (a straight line), uses these two features as the target features, and screens out from the target area the objects that meet both features.
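The angle-based relative position feature can be sketched as follows, assuming (this is not from the patent) that each straight-line object exposes a direction vector:

```python
import math

def angle_between(d1, d2):
    """Included angle in degrees between two line directions given as (dx, dy).

    Lines are undirected, so the result is folded into [0, 90] degrees... up
    to 90; two parallel lines give 0.
    """
    a1 = math.atan2(d1[1], d1[0])
    a2 = math.atan2(d2[1], d2[0])
    deg = abs(math.degrees(a1 - a2)) % 180
    return min(deg, 180 - deg)

# Two straight-line objects meeting at 60 degrees, as in the fig. 7 example.
line_e = (1.0, 0.0)
line_f = (math.cos(math.radians(60)), math.sin(math.radians(60)))
round(angle_between(line_e, line_f))  # 60
```

A screening pass would then keep pairs of line-shaped objects whose computed included angle matches the 60-degree target feature (possibly within a tolerance, which the patent does not specify).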
It should be noted that the terminal according to the embodiment of the present invention may include, but is not limited to, a Personal Computer (PC), a Personal Digital Assistant (PDA), a wireless handheld device, a Tablet Computer (Tablet Computer), a mobile phone, and the like.
The embodiment of the present invention further provides an apparatus embodiment for implementing the steps of the above method embodiment.
Please refer to fig. 8, which is a functional block diagram of an object filtering apparatus according to an embodiment of the present invention. As shown, the apparatus comprises:
an acquisition unit 21 that acquires a feature of an object indicated by the specified operation information to obtain a target feature;
the screening unit 22 screens the objects included in the target area to obtain the objects matched with the target features.
Optionally, in an embodiment of the present invention, the apparatus further includes an output unit and a processing unit, which operate before the screening unit 22 screens the objects contained in the target area to obtain the objects matching the target feature.
The output unit is used for outputting first prompt information to prompt a user to select a target area on a current display interface.
The processing unit is configured to determine the target area according to operation information of the user for the first prompt information.
When the processing unit determines the target area according to the operation information of the user for the first prompt information, the processing unit specifically executes the following steps:
acquiring operation information aiming at the first prompt message;
and when the operation information aiming at the first prompt information is a framing motion track, determining a framing area framed by the framing motion track as the target area.
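Determining the target area from a framing motion track can be sketched as taking the bounding region of the track points and testing object positions against it. This is only one plausible reading (an axis-aligned bounding box; the patent does not fix the geometry):

```python
def framing_area(track):
    """Axis-aligned bounding box of a framing motion track [(x, y), ...]."""
    xs = [p[0] for p in track]
    ys = [p[1] for p in track]
    return (min(xs), min(ys), max(xs), max(ys))

def in_area(point, area):
    """True when a point lies inside the framed target area."""
    x0, y0, x1, y1 = area
    return x0 <= point[0] <= x1 and y0 <= point[1] <= y1

area = framing_area([(10, 10), (120, 15), (115, 90), (12, 85)])
in_area((50, 50), area)   # True
in_area((200, 50), area)  # False
```

A freehand lasso could instead use a point-in-polygon test over the raw track; the bounding-box form above is just the simplest variant.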
Optionally, in another embodiment of the present invention, before the screening unit 22 performs screening on the object included in the target area to obtain the object matching the target feature, the obtaining unit 21 in the apparatus is further configured to:
acquiring all currently displayed areas as the target area; or,
and acquiring a partial area where the object indicated by the specified operation information is located as the target area.
In order to facilitate the user in quickly knowing which objects the screening unit 22 needs to screen, the processing unit in the apparatus is further configured to set all objects contained in the target area to the selected state.
Specifically, the processing unit setting all the objects contained in the target area to the selected state may be changing the current display state of all the objects contained in the target area to a preset selected display state, or outputting second prompt information for indicating that all objects contained in the target area are in the selected state. The preset selected display state includes, but is not limited to, all objects being highlighted, all objects blinking, all objects being shaded, etc. The second prompt information may be a dialog box prompting that all objects have been set to the selected state; alternatively, it may take the form of a sound indicating that all objects have been set to the selected state, etc.
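A minimal sketch of the selected-state change, assuming objects carry a mutable display state (the field names are illustrative, not from the patent); the previous state is remembered so it can be restored after screening:

```python
def set_selected(objects, selected_style="highlight"):
    """Switch every object in the target area to the preset selected display
    state, remembering the previous state for later restoration."""
    for obj in objects:
        obj["previous_state"] = obj.get("display_state", "normal")
        obj["display_state"] = selected_style
    return objects

objs = [{"name": "a"}, {"name": "b", "display_state": "dim"}]
set_selected(objs)
# every object now has display_state == "highlight"
```

A blinking or shaded variant would pass a different `selected_style`; the second-prompt alternative (a dialog box or sound) needs no per-object state change at all.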
Furthermore, in order to facilitate the user to check and obtain the object screening result, the present invention further provides that after the screening unit finishes screening the objects included in the target area and obtains the object matched with the target feature, third prompt information for prompting the screening result of the target object may be output.
The target object is an object matched with the target feature in the target area.
Specifically, the present invention provides the following several implementation manners for outputting the third prompt information for prompting the screening result of the target object:
Changing the display state of the target object. Specifically, the normal display state of the screened target object is changed into a highlighted display state; or the normal display state of the screened target object is changed into a blinking display state.
alternatively, a list containing only the target objects is output.
Alternatively, all non-target objects in the target area are erased. Wherein the non-target object is an object in the target region that does not match the target feature. Specifically, erasing all the non-target objects in the target area may be to hide the non-target objects in the target area and display only the target objects.
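The "erase non-target objects" alternative can be sketched as toggling a visibility flag on every object in the target area, keeping only those matching all target features visible. Again the dict-based object model is an assumption for illustration:

```python
def hide_non_targets(objects, target_features):
    """Hide every object in the target area that does not match all target
    features; return the objects that remain visible."""
    for obj in objects:
        matches = all(obj.get(k) == v for k, v in target_features.items())
        obj["visible"] = matches
    return [o for o in objects if o["visible"]]

area_objects = [{"name": "1", "color": "red"}, {"name": "2", "color": "blue"}]
hide_non_targets(area_objects, {"color": "red"})  # only the red object stays
```

Because the non-targets are hidden rather than deleted, they can be shown again when the screening result is dismissed.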
Alternatively, an image containing only the target object is generated. Specifically, an image including only the target object may be produced according to the screening result, and the execution subject may present the screening result to the user by calling up the image and displaying it.
Optionally, the acquisition unit 21 in the present invention is specifically configured to:
acquiring a plurality of characteristics of an object indicated by the specified operation information;
outputting fourth prompt information to prompt a user to select among the plurality of features;
and determining the target feature in the plurality of features according to the operation information of the user for the fourth prompt message.
Optionally, in an embodiment of the present invention, when the specified operation information indicates at least two objects, the acquisition unit 21 is specifically configured to:
acquiring the characteristics of each object indicated by the specified operation information;
and determining the same characteristic of each object as a target characteristic according to the characteristic of each object indicated by the specified operation information.
Optionally, in another embodiment of the present invention, when the specified operation information indicates at least two objects, the acquisition unit 21 is specifically configured to:
acquiring relative position information among the objects indicated by the specified operation information and characteristics of the objects;
and determining the relative position relation and the characteristics of each object as target characteristics.
Since each unit in the present embodiment can execute the foregoing object screening method, for any part of the present embodiment not described in detail, reference may be made to the related description of the object screening method.
According to the technical solution provided by the embodiment of the present invention, the feature of the object indicated by the specified operation information is used as the target feature, and each object contained in the target area is screened using the target feature to obtain the objects matching the target feature. In the solution provided by the embodiment of the present invention, when any kind of object is to be screened, the user only needs to perform the specified operation on any one object of that kind; this triggers the execution subject to acquire the features of that object as the target features for object screening. With the target features as the screening condition, the objects matching the target features can be screened out automatically from all the objects contained in the target area, without the user selecting them one by one with a mouse and keyboard. This simplifies the object screening procedure, reduces user operations, saves labor cost, and improves the efficiency of object screening to a certain extent.
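The end-to-end effect described above (derive target features from one example object, then screen the target area) can be condensed into a single filtering pass. This is an illustrative sketch with an assumed dict-based object model, not the claimed implementation:

```python
def screen_objects(area_objects, target_features):
    """Keep the objects in the target area whose features all equal the
    target features derived from the user's specified operation."""
    return [obj for obj in area_objects
            if all(obj.get(k) == v for k, v in target_features.items())]

objects = [
    {"name": "1", "color": "red"},
    {"name": "2", "color": "blue"},
    {"name": "3", "color": "red"},
]
screen_objects(objects, {"color": "red"})  # objects named "1" and "3"
```

Double-clicking any red object would yield `{"color": "red"}` as the target features, and the pass above then selects every red object in the area without per-object user clicks.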
The present invention provides a computer-readable storage medium having a computer program stored thereon, wherein the computer program is configured to implement any one of the object screening methods described above when executed by a processor.
The present invention provides an electronic terminal for executing the object screening method, the functional block diagram of the electronic terminal is shown in fig. 9, the electronic terminal includes a processor 31, a memory 32 and an input/output interface 33; the processor 31, the memory 32 and the input/output interface 33 communicate through a bus; the memory 32 is configured with computer code that the processor 31 can call to control the input output interface 33;
the computer code, when executed by the processor 31, causes the electronic terminal to implement the object screening method of any of the preceding claims.
According to the technical solution provided by the embodiment of the present invention, the feature of the object indicated by the specified operation information is used as the target feature, and each object contained in the target area is screened using the target feature to obtain the objects matching the target feature. In the solution provided by the embodiment of the present invention, when any kind of object is to be screened, the user only needs to perform the specified operation on any one object of that kind; this triggers the execution subject to acquire the features of that object as the target features for object screening. With the target features as the screening condition, the objects matching the target features can be screened out automatically from all the objects contained in the target area, without the user selecting them one by one with a mouse and keyboard. This simplifies the object screening procedure, reduces user operations, saves labor cost, and improves the efficiency of object screening to a certain extent.
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the above-described systems, apparatuses and units may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the embodiments provided in the present invention, it should be understood that the disclosed system, apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the units is only one logical division, and there may be other divisions in actual implementation, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, or in a form of hardware plus a software functional unit.
The integrated unit implemented in the form of a software functional unit may be stored in a computer-readable storage medium. The software functional unit is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) or a processor to execute some of the steps of the methods according to the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disk.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (14)

1. A method of object screening, the method comprising:
acquiring the characteristics of an object indicated by the specified operation information to obtain target characteristics;
and screening the objects contained in the target area to obtain the objects matched with the target characteristics.
2. The method according to claim 1, wherein before the screening the objects contained in the target area to obtain the objects matching the target feature, the method further comprises:
outputting first prompt information to prompt a user to select a target area on a current display interface;
and determining the target area according to the operation information of the user aiming at the first prompt message.
3. The method according to claim 2, wherein the determining the target area according to the operation information of the user for the first prompt message comprises:
acquiring operation information aiming at the first prompt message;
and when the operation information aiming at the first prompt information is a framing motion track, determining a framing area framed by the framing motion track as the target area.
4. The method according to claim 1, wherein before the screening the objects contained in the target area to obtain the objects matching the target feature, the method further comprises:
acquiring all currently displayed areas as the target area; or,
and acquiring a partial area where the object indicated by the specified operation information is located as the target area.
5. The method according to claim 2 or 4, characterized in that the method further comprises:
and setting all objects contained in the target area to be in a selected state.
6. The method of claim 5, wherein setting all objects contained within the target region to a selected state comprises:
changing the current display states of all objects contained in the target area into a preset selected display state; or,
and outputting second prompt information, wherein the second prompt information is used for indicating that all objects contained in the target area are in a selected state.
7. The method of claim 1, further comprising:
and outputting third prompt information, wherein the third prompt information is used for prompting the screening result of the target object.
8. The method of claim 7, wherein outputting the third prompt comprises:
changing the display state of the target object; or,
outputting a list containing only the target object; or,
erasing all non-target objects in the target area; or,
an image is generated that contains only the target object.
9. The method of claim 1, wherein the acquiring the feature of the object indicated by the specified operation information to obtain a target feature comprises:
acquiring a plurality of characteristics of an object indicated by the specified operation information;
outputting fourth prompt information to prompt a user to select among the plurality of features;
and determining the target feature in the plurality of features according to the operation information of the user for the fourth prompt message.
10. The method according to claim 1, wherein when the specified operation information indicates at least two objects, the obtaining of the feature of the object indicated by the specified operation information obtains a target feature, including:
acquiring the characteristics of each object indicated by the specified operation information;
and determining the same characteristic of each object as a target characteristic according to the characteristic of each object indicated by the specified operation information.
11. The method according to claim 1, wherein when the specified operation information indicates at least two objects, the obtaining of the feature of the object indicated by the specified operation information obtains a target feature, including:
acquiring relative position information among the objects indicated by the specified operation information and characteristics of the objects;
and determining the relative position relation and the characteristics of each object as target characteristics.
12. An object screening apparatus, characterized in that the apparatus comprises:
the acquisition unit is used for acquiring the characteristics of the object indicated by the specified operation information to obtain target characteristics;
and the screening unit screens the objects contained in the target area to obtain the objects matched with the target characteristics.
13. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, is adapted to carry out the method of any one of claims 1 to 11.
14. An electronic terminal is characterized by comprising a processor, a memory and an input/output interface; the processor, the memory and the input/output interface are communicated through a bus; the memory is configured with computer code that the processor can call to control an input output interface;
the computer code, when executed by the processor, causes the electronic terminal to implement the method of any of claims 1-11.
CN201810601461.2A 2018-06-12 2018-06-12 Object screening method and device, computer readable storage medium and electronic terminal Pending CN108803991A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810601461.2A CN108803991A (en) 2018-06-12 2018-06-12 Object screening method and device, computer readable storage medium and electronic terminal

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810601461.2A CN108803991A (en) 2018-06-12 2018-06-12 Object screening method and device, computer readable storage medium and electronic terminal

Publications (1)

Publication Number Publication Date
CN108803991A true CN108803991A (en) 2018-11-13

Family

ID=64085367

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810601461.2A Pending CN108803991A (en) 2018-06-12 2018-06-12 Object screening method and device, computer readable storage medium and electronic terminal

Country Status (1)

Country Link
CN (1) CN108803991A (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111382282A (en) * 2018-12-28 2020-07-07 北京国双科技有限公司 Method, device, storage medium and processor for processing data
CN112083863A (en) * 2020-09-17 2020-12-15 维沃移动通信有限公司 Image processing method and device, electronic equipment and readable storage medium
CN112506402A (en) * 2020-11-25 2021-03-16 广州朗国电子科技有限公司 Electronic whiteboard based graph control method and device and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104850312A (en) * 2015-06-01 2015-08-19 联想(北京)有限公司 Electronic equipment and method for processing information thereof
CN105022844A (en) * 2015-08-25 2015-11-04 魅族科技(中国)有限公司 Method for batch processing of files as well as terminal
CN105138699A (en) * 2015-09-25 2015-12-09 广东欧珀移动通信有限公司 Photograph classification method and device based on shooting angle and mobile terminal
CN105912589A (en) * 2016-03-31 2016-08-31 联想(北京)有限公司 Information processing method and electronic device
CN107992631A (en) * 2017-12-26 2018-05-04 上海展扬通信技术有限公司 A kind of file management method and terminal


Similar Documents

Publication Publication Date Title
EP3273334A1 (en) Information processing method, terminal and computer storage medium
EP2509390A1 (en) Method and mobile terminal for processing contacts
KR20150059466A (en) Method and apparatus for recognizing object of image in electronic device
CN108829314B (en) Screenshot selecting interface selection method, device, equipment and storage medium
CN105824496A (en) Method for setting icon brightness based on use of users and mobile terminal
CN112399006B (en) File sending method and device and electronic equipment
CN108803991A (en) Object screening method and device, computer readable storage medium and electronic terminal
CN110362257B (en) Data processing method, display method and client
CN113794795B (en) Information sharing method and device, electronic equipment and readable storage medium
CN106648281B (en) Screenshot method and device
CN110554898A (en) Marking method in game scene, touch terminal and readable storage medium
CN111338556A (en) Input method, input device, terminal equipment and storage medium
CN108845757A (en) Touch input method and device for intelligent interaction panel, computer readable storage medium and intelligent interaction panel
CN107391914B (en) Parameter display method, device and equipment
CN109298817B (en) Item display method, item display device, terminal and storage medium
CN112817447B (en) AR content display method and system
CN107835305B (en) Information input method and device for terminal equipment with screen
US9395837B2 (en) Management of data in an electronic device
CN110825298A (en) Information display method and terminal equipment
CN112416199A (en) Control method and device and electronic equipment
CN111796736B (en) Application sharing method and device and electronic equipment
CN107315527B (en) Identity identification code identification method and mobile terminal
CN104407763A (en) Content input method and system
CN113362410B (en) Drawing method, drawing device, electronic apparatus, and medium
CN112596883B (en) Application switching method and device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20181113
