US20130007875A1 - Interactive CAPTCHA - Google Patents
- Publication number
- US20130007875A1 (application US 13/174,394)
- Authority
- US
- United States
- Prior art keywords
- user
- media
- instructions
- constructing
- media components
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F21/00—Security arrangements for protecting computers, components thereof, programs or data against unauthorised activity
- G06F21/30—Authentication, i.e. establishing the identity or authorisation of security principals
- G06F21/31—User authentication
- G06F21/36—User authentication by graphic or iconic representation
- G06F21/45—Structures or tools for the administration of authentication
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/048—Interaction techniques based on graphical user interfaces [GUI]
- G06F3/0481—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
- G06F3/04817—Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment using icons
- G06F3/0482—Interaction with lists of selectable items, e.g. menus
- G06F3/0484—Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
- G06F3/04842—Selection of displayed objects or displayed text elements
- G06F3/0486—Drag-and-drop
- G06F9/00—Arrangements for program control, e.g. control units
- G06F9/06—Arrangements for program control, e.g. control units using stored programs, i.e. using an internal store of processing equipment to receive or retain programs
- G06F9/44—Arrangements for executing specific programs
- G06F9/451—Execution arrangements for user interfaces
- G06F9/454—Multi-language systems; Localisation; Internationalisation
Definitions
- the present disclosure relates generally to computer security and, more particularly, to a more secure and user-friendly CAPTCHA.
- a conventional CAPTCHA may involve displaying obfuscated words or letters and asking a user to enter the displayed words or letters. Obfuscation of the words or letters is presumed to make the words or letters unrecognizable to a machine, while a human should be able to correctly identify the words or letters with relative ease.
- rapid advances in computer technology have made computers much more capable of solving CAPTCHAs, which means conventional CAPTCHAs may not offer as much reliability as they once did.
- traditional CAPTCHAs may be difficult for humans to correctly identify as well, particularly if the display is on a small screen such as a mobile telephone screen. Consequently, a human user could be erroneously classified as a machine, which defeats the purpose of the CAPTCHA.
- while CAPTCHAs have been generally adequate for their intended purposes, they have not been entirely satisfactory in every aspect. It would be advantageous to offer an improved CAPTCHA that is both easy for humans to solve and hard for machines to solve.
- one of the broader forms of the present disclosure involves a method. The method includes: receiving a request from a user; constructing, by a computer processor, a media object that contains a plurality of media components; transmitting the media object to the user; prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following: selecting each of the media components in the subset according to a predefined sequence; and modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation; obtaining a description of an action performed by the user in response to the prompting; and deciding whether or not to grant the request based on the obtaining.
- Another one of the broader forms of the present disclosure involves an apparatus comprising a non-transitory, tangible computer readable storage medium storing a computer program.
- the computer program has instructions that when executed, perform: receiving a request from a user; constructing a media object that contains a plurality of media components; transmitting the media object to the user; prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following: selecting each of the media components in the subset according to a predefined sequence; and modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation; obtaining a description of an action performed by the user in response to the prompting; and deciding whether or not to grant the request based on the obtaining.
- Yet another one of the broader forms of the present disclosure involves a system.
- the system involves: means for receiving a request from a user; means for constructing a media object that contains a plurality of media components; means for transmitting the media object to the user; means for prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following: selecting each of the media components in the subset according to a predefined sequence; and modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation; means for obtaining a description of an action performed by the user after the user is prompted; and means for deciding whether or not to grant the request based on the description obtained.
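The method, apparatus, and system forms above share one flow: receive a request, construct a media object from components, prompt a task, capture the user's action, and decide. A minimal sketch of that flow follows; the item pool, the dictionary challenge format, and the single drag-style task are illustrative assumptions, not details from the disclosure:

```python
import random
import uuid

# Hypothetical pool of everyday items; the disclosure does not fix a specific set.
ITEM_POOL = ["dog", "mouse", "car", "camel", "gift",
             "cow", "fish", "zebra", "boat", "cat"]

def construct_challenge(num_components=6):
    """Build a media object (a set of media components) plus one drag task."""
    components = random.sample(ITEM_POOL, num_components)
    source, target = random.sample(components, 2)
    return {
        "id": str(uuid.uuid4()),
        "components": components,
        "instruction": f"drag the {source} to the {target}",
        "expected": ("drag", source, target),  # what a correct action reports
    }

def verify(challenge, reported_action):
    """Decide whether to grant the request based on the reported action."""
    return reported_action == challenge["expected"]
```

In practice the server would also render the components as obfuscated images and store the challenge server-side so the expected answer is never sent to the client.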
- FIG. 1 illustrates a simplified example of a user interface for an online shopping site at a product selection phase.
- FIG. 2 illustrates an example of a CAPTCHA according to various aspects of the present disclosure.
- FIGS. 3-7 illustrate additional examples of CAPTCHAs according to other aspects of the present disclosure.
- FIG. 8 illustrates a flowchart of verifying a user through a CAPTCHA according to various aspects of the present disclosure.
- FIG. 9 shows a block diagram of a computer system for implementing various methods and devices described according to various aspects of the present disclosure.
- referring to FIG. 1 , an example user interface 40 from an online shopping site is illustrated.
- the online shopping site is engaged in the selling of products (goods), where product or good is used herein to include physical goods, digital goods, services, charitable donations, etc.
- the user interface 40 is in the form of a web page.
- the user interface 40 displays a plurality of objects 50 that each represent a different product.
- the objects 50 may each contain a button, an icon, a picture, or combinations thereof.
- the online shopping site sells products at a physical point of sale, such as a store or kiosk, and a user may select physical products to be scanned at the point of sale.
- the products represented by the objects 50 may include physical and tangible goods, including (but not limited to) clothing, electronics, tools, toys, household appliances, books, movies, automotive components, sporting goods, groceries, etc.
- the products represented by the objects 50 may also include digital goods, which include goods that are stored, delivered, and/or used in an electronic format.
- digital goods may include electronic books, digital music files, digital images, digital videos, virtual items, etc. Similar to physical and tangible goods, digital goods can be bought and sold between interested parties. The buyer of a digital good may receive the purchased digital good through an email attachment, a download, or other suitable mechanisms.
- the user interface 40 informs a prospective buyer what products are available from the merchant.
- the prospective buyer may click on any one of the objects 50 to add it to the buyer's purchasing queue, which may be in the form of a virtual shopping cart.
- the prospective buyer may edit the purchasing queue at any time, such as adding or subtracting the quantity of a particular product in the queue or removing a product from the queue altogether. For the sake of simplicity, the details of the purchasing queue are not illustrated herein.
- the server may be a server for the online shopping site.
- the server may also be a server for a third party payment platform such as PAYPAL®.
- the CAPTCHA 100 is a picture-based CAPTCHA in an embodiment and may therefore be also referred to as picTCHA.
- the CAPTCHA 100 contains an instructions object 110 .
- the instructions object 110 displays word-based instructions (for example, instructions in English or another suitable language) that prompt the user to perform one or more particular tasks. As an example, the instructions object 110 illustrated herein prompts the user to perform the following two tasks, “drag the boat to the dog” and “drag the fish to the gift.”
- the instructions object 110 may be obfuscated to make automated processing of the instructions difficult.
- the obfuscation of the instructions object 110 makes it more difficult to be recognized by machines.
- the obfuscation in the illustrated embodiment is done by drawing random squiggly lines throughout the words of the instructions object 110 .
- the obfuscation may be done by distorting or warping the words of the instructions object 110 , crowding the words together, or otherwise adding more background clutter and noise to the instructions object 110 .
- the instructions object 110 may also contain instructions for notifying the user that the language of the instructions object 110 may be changed to a different language.
- the instructions object 110 may contain an image-based link 115 that can alter the displayed language of the instructions object 110 .
- the image-based link 115 is a Chinese flag, and the instructions adjacent to the Chinese flag notify the user that the instructions will be displayed in Chinese if the Chinese flag is clicked on.
- the image-based link 115 may contain a drop-down menu that can display a plurality of nations' flags (or the names of the nations), and the selection of a particular nation's flag will display the instructions in the language of that nation.
- the language of the instructions object 110 may be automatically adjusted based on retrieved contextual information from the user such as an Internet Protocol (IP) address of the user's computer, the character set of the user's computer, or the previous history of the user. For example, if the IP address of the user's computer indicates that the computer is located in Japan, then the language of the instructions object 110 may be automatically switched to Japanese.
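The automatic language selection described above can be sketched as a lookup over contextual hints. The country-to-language table and the charset heuristic below are illustrative assumptions; a real deployment would consult a GeoIP database and a fuller locale map:

```python
# Illustrative mapping from a country code (derived from the user's IP
# address) to an instruction language.
COUNTRY_TO_LANGUAGE = {"JP": "ja", "CN": "zh", "DE": "de", "US": "en"}

def instruction_language(ip_country=None, charset=None, default="en"):
    """Pick the instructions object's language from contextual hints."""
    if ip_country and ip_country in COUNTRY_TO_LANGUAGE:
        return COUNTRY_TO_LANGUAGE[ip_country]
    # Fall back to the client's character set as a weaker hint.
    if charset and charset.lower().startswith("shift_jis"):
        return "ja"
    return default
```

The user could still override the result through the image-based link 115, so the automatic choice only sets the initial display.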
- the CAPTCHA 100 also contains a media object 120 .
- the media object 120 includes a plurality of media components, examples of which are illustrated herein as media components 121 - 133 .
- the media components 121 - 133 are images of common everyday items, for example things that exist in nature or things that are man-made.
- the media component 121 is a dog; the media component 122 is a mouse; the media component 123 is a car; the media component 124 is a camel; the media component 125 is a gift; the media component 126 is a head of a pig wearing sunglasses; the media component 127 is a cow; the media component 128 is a fish; the media component 129 is a zebra; the media component 130 is a boat; the media component 131 is a cat; the media component 132 is a fly; and the media component 133 is a bug.
- Each of these media components 121 - 133 is easy for a human user to recognize and identify, but they may be more difficult for a machine to identify.
- the concept or entity represented by each media component herein may take any number of different forms.
- the media component 126 is a cartoonized pig's head wearing sunglasses.
- a human user should have no difficulty recognizing that the media component 126 still represents a pig.
- a machine needs to scour a large database to find images tagged with “pig” that closely match the media component 126 . This scouring task may take a long time, after which it may still yield an incorrect result.
- At least a subset of the media components 121 - 133 may be obfuscated to make them even harder to identify by machines.
- the media components 125 - 126 are partially overlapping with one another. Thus, they may appear as a single entity for a machine, since there are no clear boundaries separating them. However, to a human user, it is clear that the media components 125 - 126 represent different items.
- other obfuscation techniques, such as overlapping, distorting, or adding background clutter to the media components, may be used as well.
- the media components 121 - 133 need not necessarily be images or items with only visual characteristics.
- the media components 121 - 133 may be video clips and may have audio or tactile characteristics as well.
- the media components 121 - 133 may also have varying levels of abstraction.
- an image of a cow may be abstracted as a collection of lines that delineates the head of a cow.
- the image of the cow is not taken from any actual pictures of real cows or even caricatures of cows. It may merely be a rough sketch of part of a cow, for example resembling what a child would draw as a cow.
- Such abstraction usually does not deter a human user from recognizing that the image indeed represents a cow, but it makes it even harder to be correctly identified by a machine.
- the media components 121 - 133 may also have different sizes.
- the media components 121 - 133 may be randomly generated according to certain underlying parameters indicating what the media components may look like.
- a computer server that performs the generation of the media components 121 - 133 may have access to a database that stores hundreds or thousands of images (or videos) for each of a plurality of different items, such as thousands of images of dogs, thousands of images of boats, thousands of images of automobiles, thousands of images of cows, etc.
- the server may receive a command to generate an image of a dog, a boat, an automobile, and a cow.
- the server may randomly choose one of the images of a dog, a boat, an automobile, and a cow from the thousands of available images.
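The random selection step can be sketched as follows, with a small in-memory dictionary standing in for the database of thousands of stored images per item; the image identifiers are invented for illustration:

```python
import random

# Stand-in for a database holding many images per item; each item maps to a
# list of image identifiers (a real server would query image storage).
IMAGE_DB = {
    "dog": [f"dog_{i:04d}.png" for i in range(1000)],
    "boat": [f"boat_{i:04d}.png" for i in range(1000)],
    "automobile": [f"auto_{i:04d}.png" for i in range(1000)],
    "cow": [f"cow_{i:04d}.png" for i in range(1000)],
}

def pick_images(items):
    """For each requested item, choose one of its many stored images at random."""
    return {item: random.choice(IMAGE_DB[item]) for item in items}
```

Random selection over a large pool means a machine cannot simply memorize which pixels correspond to which item.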
- the user is prompted to manipulate a subset of the media components 121 - 133 according to the instructions given by the instructions object 110 .
- the user is supposed to first drag (using a finger or mouse) the boat (media component 130 ) to the dog (media component 121 ), and then drag the fish (media component 128 ) to the gift (media component 125 ).
- a human user should easily recognize these images and will be able to correctly perform the tasks according to the instructions.
- the instructions object 110 of the embodiment illustrated herein prompts the user to change the location properties of a subset of the media components by dragging them.
- the instructions object 110 may prompt the user to manipulate the media components 121 - 133 differently.
- the user may be asked to change the size of a subset of the media components: the instructions object 110 may read “enlarge the car, and shrink the cat.”
- the user may be asked to change the orientation of a subset of the media components: the instructions object 110 may read “rotate the cow clockwise by 90 degrees, and rotate the mouse counterclockwise by 180 degrees.”
- the user may be asked to change the appearance of a subset of the media components: the instructions object 110 may read “color the dog with a blue color, and color the zebra with a pink color.”
- a palette of colors may be displayed, and the user may be able to select, such as by clicking or tapping on a certain color in the palette to assign that color to a particular media component.
- the user may be asked to identify a subset of media component based on similar classifications: the instructions object 110 may read “click on all the mammals,” “click on all things that could be dangerous,” “click on all vehicles,” “click on the three images that are largest in the real world,” or “click on all images that are cartoon characters.”
- the instructions may be specific to a type of user device or more general. For example, if the system knows the user is on a touch screen device, the instructions may ask the user to “tap” on specific objects. A more general instruction may ask the user to “select” specific objects, which may include clicking, tapping, moving a pointer or mouse, etc.
- the user may be asked to group images relative to each other: the instructions object 110 may read “drag the image that is eaten to the image that eats it.” In other embodiments, the user may be asked to match suites of corporate products: the instructions object 110 may read “click on all characters that interact with Donald Duck.”
- the user may be asked to select a subset of the media components 121 - 133 according to a predefined sequence: the instructions object 110 may read “click on the dog, then click on the fish, then click on the boat, then click on the cow, and finally click on the camel.”
- the instructions object 110 may contain an image or an item that has visual and/or audio characteristics.
- the instructions object 110 may contain an image of a dog and instructs the user to click on all the images of the media object 120 that match that image.
- the user may be asked to perform two or more of the tasks discussed above concurrently, for example to simultaneously drag the dog to the car (with one finger) and to drag the fish to the boat (with another finger).
- the user may be asked to perform two or more of the tasks discussed above according to a predefined sequence. For example, the user may be asked to first drag the dog to the car, then to click on all the mammals, then to rotate the fish clockwise by 90 degrees, and then to shrink the zebra.
- FIGS. 3-7 illustrate additional aspects of the CAPTCHA disclosed herein.
- logic or a puzzle is utilized to manipulate media components.
- a media object 140 A contains a plurality of media components 141 - 145
- a media object 140 B contains a plurality of media components 146 - 150 .
- the media components 141 , 143 , and 145 are images of dogs.
- the media components 142 and 144 are images of cats.
- the media components 146 - 150 are images of a baby, a cat, a dog, a monkey, and a bear, respectively.
- the media components 141 - 145 form a sequence.
- An empty box 155 is placed at the end of the sequence.
- the box 155 may also be considered a media component.
- interposed between the media objects 140 A and 140 B is an instructions object 160 , which displays a message, “drag an image below into the box that best completes the above sequence.”
- the instructions object 160 may be obfuscated.
- the sequence formed by the media components 141 - 145 is: dog/cat/dog/cat/dog.
- the logic behind the sequence involves cats and dogs alternating with each other. As such, since the last media component 145 in the sequence is a dog, the cat corresponding to media component 147 would best complete the sequence.
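The alternating-sequence logic of FIG. 3 can be sketched as a small server-side check. The function below assumes the sequence alternates between exactly two items, which matches the dog/cat example but is only one possible puzzle rule:

```python
def best_completion(sequence, candidates):
    """Return the candidate that continues an alternating two-item sequence."""
    a, b = sequence[0], sequence[1]
    # The next element alternates away from whichever item is currently last.
    expected = b if sequence[-1] == a else a
    return expected if expected in candidates else None
```

The server generates the sequence, so it already knows the expected answer; the function simply shows how little logic the check requires once the rule is fixed.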
- a sliding puzzle such as the “gem puzzle” or “game of fifteen” or “mystic square” may be implemented.
- the user may be prompted to navigate tiles within a confined area until the tiles are ordered according to a certain sequence (for example, in a numerically ascending sequence).
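A solved-state check for such a sliding puzzle might look like the following sketch, which represents the board as a flat list with `None` for the blank tile; the representation is an assumption for illustration:

```python
def is_solved(tiles):
    """True if the tiles are in ascending order with the blank slot last."""
    return tiles[-1] is None and all(
        tiles[i] == i + 1 for i in range(len(tiles) - 1)
    )
```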
- the user may be shown a series of actions (or procedure), after which the user is prompted to repeat those actions.
- This idea is illustrated in FIGS. 4 and 5 .
- an original screenshot presented to the user is a structure 165 that contains a plurality of openings 166 - 169 .
- the openings 166 - 169 are located at the top, bottom, left, and right of the structure 165 , respectively.
- Also shown in the screenshot is a plurality of media components 170 - 178 that are located within the structure 165 .
- the media components 170 - 178 may have different shapes, colors, or sizes from one another.
- the media components 170 - 178 are balls (shown as circles) in the embodiment illustrated in FIG. 4 , but they may have different shapes in alternative embodiments, for example triangles or squares.
- a finger touches one of the media components 170 - 178 , for example the media component 173 , and subsequently drags the media component 173 out of one of the openings 166 - 169 , for example the opening 166 at the top of the structure 165 .
- the media component 173 may be a yellow ball in this embodiment, whereas the rest of the media components are balls of different colors.
- the screen is then reset.
- the media components 170 - 178 return to the same previous locations and have the same sizes, shapes, and colors.
- alternatively, the media components 170 - 178 may be placed in different locations than in the previous screen and may have different sizes, shapes, or colors.
- the openings may be moved to different locations as well (for example, the top opening 166 may be shifted to its right by a certain distance). These changes may be implemented to enhance the difficulty of automation by a machine.
- the user is prompted to repeat the course of actions previously demonstrated.
- the prompting of the user may be performed through an instructions object (not illustrated herein) which may read, “repeat the previous series of actions.”
- a human user watching the previously demonstrated series of actions will understand that he is supposed to drag a yellow ball out of the top opening, even if the yellow ball is not necessarily at the same location as before, and the location of the top opening has been altered slightly.
- this task is much more difficult for the machine to perform.
- first, it may be quite difficult for a machine to detect the demonstrated series of actions.
- second, even if the machine can correctly detect the demonstrated actions, it may not correctly decipher the instructions.
- the machine may not know how to properly “repeat” this procedure if the yellow ball (and/or the top opening) is no longer at the same location. For example, a red ball may be located at the yellow ball's previous location, whereas the yellow ball is now located elsewhere.
- the machine may be forced to make a difficult choice as to whether “repeating” the procedure entails moving the red ball (because it is where the previous object used to be) out of the top opening, or whether it entails moving the yellow ball (now located elsewhere) out of the top opening. Stated differently, even an “intelligent” machine that can detect the demonstrated procedure and decipher the instructions may still not be able to perform the task correctly, whereas a human user can do this with ease.
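One way to express why the human's interpretation succeeds is to verify the repeated action by object attributes rather than by screen coordinates. The field names below are illustrative assumptions about how the server might record the demonstrated and attempted actions:

```python
def matches_demonstration(demo, attempt):
    """Compare the user's attempt to the demonstrated action by attributes
    (ball color and exit opening), not by position, so the yellow ball is
    still the correct target after the layout is reshuffled."""
    return (demo["object_color"] == attempt["object_color"]
            and demo["opening"] == attempt["opening"])
```

Under this check, dragging the red ball that now sits at the yellow ball's old location fails, which is exactly the trap the reshuffled layout sets for a coordinate-replaying machine.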
- FIG. 5 is another illustration of a concept similar to that discussed above in association with FIG. 4 .
- an original screenshot shown to the user includes a plurality of media components 180 - 183 stacked on top of one another.
- at the bottom is the media component 180 , which is a rectangle in the illustrated embodiment.
- the media component 181 stacked on top of the media component 180 is a square in the illustrated embodiment.
- the media component 182 stacked on top of the media component 181 is a trapezoid in the illustrated embodiment.
- the media component 183 stacked on top of the media component 182 is a circle in the illustrated embodiment.
- An instructions object may contain instructions that prompt the user to memorize the stacking scheme formed by the media components 180 - 183 .
- the media components 180 - 183 are dispersed, and the user is instructed to assemble the media components 180 - 183 according to the stacking scheme shown in FIG. 5 .
- this task should be relatively easy for a user, but it may be difficult for a machine to solve.
- the user may accomplish the navigation of the media components discussed above using his finger(s) on a touch-screen device.
- the user may use a mouse or a keyboard.
- one of the objects may be initially highlighted. Pressing the “Tab” button on a keyboard may move the highlight cursor to another object, and pressing the directional arrow keys on the keyboard may move that highlighted object in the direction corresponding to the directional arrow key.
- the user may be asked to manipulate one media component relative to (or with respect to) another media component.
- an instructions object displays instructions that prompt the user to move and/or rotate one of the media components so that it can fit inside the other media component.
- the media component 190 must be rotated by a particular angle (or in a particular range of angles) before it can fit inside the media component 191 .
- the user may need to resize one of the media components to accomplish this task.
- the media components 190 - 191 illustrated herein are merely examples, and that alternative shapes, sizes, and rotation angles may be randomly generated in other embodiments to enhance the difficulty of automation.
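A fuzzy fit check for the rotation task might be sketched as an angular tolerance test; the required angle and tolerance below are arbitrary illustrative values, and angles are compared modulo 360 degrees:

```python
def fits(rotation_deg, required_deg=90.0, tolerance_deg=10.0):
    """True if the applied rotation lies within a tolerance band around the
    required angle, taking wrap-around into account (e.g. 450° == 90°)."""
    diff = abs((rotation_deg - required_deg + 180.0) % 360.0 - 180.0)
    return diff <= tolerance_deg
```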
- an instructions object includes a textual component 195 A and a graphical component 195 B.
- the textual component 195 A reads, “Perform these steps to the box.”
- the graphical component 195 B graphically illustrates a rotational arrow sign with a corresponding text “90°”, a downward directional arrow sign, and a right directional arrow sign.
- a media component 198 that is a box in the illustrated embodiment.
- a human user may quickly understand that according to the graphical component 195 B, he is supposed to rotate the box by 90 degrees, and then move the box downward, and then move the box to the right. But once again, a machine may have a difficult time comprehending the instructions (particularly since the instructions are graphical in nature) and therefore will likely fail the CAPTCHA.
- a user may perform the tasks through a plurality of suitable mechanisms.
- the user performs the tasks on a touch-screen according to the displayed instructions.
- the user may perform the tasks using a joystick.
- the user may perform the tasks using a mouse or another suitable pointing device.
- the user may perform the tasks using an eyeball-tracker.
- the user may perform the tasks through a voice command system.
- a description of the user's actions is obtained by the server. For example, through a touch-screen display, movement of the user's fingers on the screen can be detected. The movement may include clicking, dragging, pinching, and/or circling.
- the user may manipulate objects using an accelerometer, for example by physically moving or rotating a handheld device.
- the server can identify the objects the user attempted to manipulate. Given the description of the user's actions, the server can determine whether the user's actions correspond with the instructions well enough to satisfy a predetermined threshold, and accordingly whether or not to grant the user's request.
- the predetermined threshold may be implemented differently in different embodiments.
- the threshold may be “did the outer boundary of the boat touch the outer boundary of the dog at the end of the dragging motion?”
- the predetermined threshold may be set up so that the two media components need not actually touch each other, but as long as they are dragged to be within a small distance (e.g., 1 millimeter), then that may satisfy the threshold.
- the verification of the user can be exact—when the task is solved perfectly.
- the verification of the user can be fuzzy—when the task is solved sufficiently well, but not perfectly.
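A fuzzy drag verification of the kind described above might be sketched as a distance test between component centers. The pixel threshold is an illustrative assumption, and the disclosure also contemplates boundary-touch tests rather than center distances:

```python
import math

def drag_satisfies(dragged_center, target_center, threshold_px=12.0):
    """Fuzzy check for a drag task: the dragged component's final center
    must land within a small distance of the target component's center."""
    dx = dragged_center[0] - target_center[0]
    dy = dragged_center[1] - target_center[1]
    return math.hypot(dx, dy) <= threshold_px
```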
- the threshold may also be dynamically adjusted based on the situation in which a request is made (e.g., high dollar value transaction or low dollar value transaction?) or on past performance of the account (e.g., did the user make identical or similar requests repeatedly in the past?), and/or computer data and IP address range (e.g., based on the user's location corresponding to the IP address, does the user's request appear suspicious?).
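The dynamic adjustment described above might be sketched as follows; the risk signals mirror the examples given (transaction value, repeated requests, suspicious IP geolocation), while the base tolerance and the scaling factors are illustrative assumptions.

```python
def adjusted_tolerance(base_tolerance_px, transaction_value,
                       is_repeat_request, ip_location_suspicious):
    """Shrink the matching tolerance (a stricter test) for risky requests,
    and relax it for routine, low-risk ones. All factors are assumptions."""
    tolerance = base_tolerance_px
    if transaction_value > 1000.0:      # high dollar value -> stricter
        tolerance *= 0.5
    if is_repeat_request:               # familiar past behavior -> more lenient
        tolerance *= 1.5
    if ip_location_suspicious:          # geolocation mismatch -> stricter
        tolerance *= 0.5
    return tolerance
```

The returned tolerance would feed directly into a fuzzy verification step.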
- the CAPTCHA disclosed herein may also be adapted to filter out (or conversely, be confined to) particular types of users.
- the server may generate media components that children are familiar with, such as characters from popular children's shows. The instructions may be given in an audio manner. As such, children who cannot read or write may still be able to pass the CAPTCHA test and be verified as a legitimate human user.
- If the CAPTCHA is supposed to pass only users older than a particular age, then the user is asked to identify things that are only recognized by users of a certain age, for example things that can be found in an office. Most children who have not been to an office will not be able to identify the objects in an office and thus will likely fail the CAPTCHA test. As another example, the user may be asked to identify things that cost less than $20 each. Children who have no experience paying bills will likely fail this test as well.
- In recent years, fraudsters have also relied on outsourcing to solve CAPTCHAs. For example, human labor can be hired in low wage countries to perform CAPTCHA-solving tasks.
- the CAPTCHA disclosed herein can also be adapted to defeat such outsourcing (or at least reduce the likelihood of its success) by making the tasks culturally or geographically relevant.
- the CAPTCHA may display images of famous athletes of different sports (e.g., Michael Jordan for basketball or Joe Montana for football) as well as representations of these sports (e.g., a basketball hoop or a football field) and ask the user to match the athletes to their sports. Even human users from a foreign country may have a difficult time performing this task correctly, since they may not be familiar with these athletes who are famous in the United States.
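A minimal sketch of scoring such a matching task is shown below; the athlete-to-sport answer key follows the example above, and the fraction-based score is an illustrative assumption.

```python
# Answer key for the matching task; the athlete/sport pairs come from the
# example above, while the scoring scheme is an illustrative assumption.
ANSWER_KEY = {
    "Michael Jordan": "basketball",
    "Joe Montana": "football",
}

def matching_score(user_matches):
    """Fraction of athlete-to-sport pairs the user matched correctly."""
    correct = sum(1 for athlete, sport in user_matches.items()
                  if ANSWER_KEY.get(athlete) == sport)
    return correct / len(ANSWER_KEY)
```

A server could then compare the score against a pass threshold (e.g., require a perfect match for small keys).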
- the CAPTCHA may play a tune from a well-known nursery rhyme in a given country or region and ask the user to pick a media component (for example an image) that relates to the nursery rhyme.
- the tune from “London Bridge is falling down” may be played, and an image of a bridge may be displayed as one of the many media components. If the user clicks on the image of the bridge, he may be verified.
- Outsourced human labor may also encounter difficulty trying to solve these tasks because they likely have not been exposed to these nursery rhymes from other countries.
- the CAPTCHA may display a landmark or an item that is famous to users of a particular region, and ask the user to identify these landmarks or items. Such tasks will once again be difficult for outsourced human labor to solve, provided that they are not from that particular region.
- the CAPTCHA may also be fine-tuned to (or zeroed-in on) a specific human user.
- the CAPTCHA may be generated according to that specific human user's history or preferences. For example, the CAPTCHA may ask the user to “click on all images representing products you have recently bought on this site” or “click on all images that you have previously seen on this site.” In this manner, only a targeted human user will likely pass the verification test.
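A challenge built from a specific user's purchase history might be assembled as sketched below; the function name, data layout, and decoy-mixing scheme are illustrative assumptions.

```python
import random

def build_history_challenge(purchased_items, decoy_items, num_decoys=4, seed=None):
    """Mix products the user actually bought with random decoys, and
    record which subset a legitimate user is expected to click."""
    rng = random.Random(seed)
    displayed = list(purchased_items) + rng.sample(decoy_items, num_decoys)
    rng.shuffle(displayed)
    return {
        "prompt": "Click on all images representing products "
                  "you have recently bought on this site.",
        "displayed": displayed,
        "expected": set(purchased_items),
    }

def passes(challenge, clicked):
    """The user must click exactly the images of previously bought products."""
    return set(clicked) == challenge["expected"]
```

Only the targeted user, who remembers the purchases, is likely to pick out the right subset.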
- the embodiments of the CAPTCHA disclosed herein offer advantages over conventional CAPTCHAs, it being understood that other embodiments may offer different advantages, and that no particular advantage is required for all embodiments.
- One advantage is that the CAPTCHA disclosed herein is much harder for machines to solve.
- Conventional CAPTCHAs often rely on obfuscated words alone to thwart a machine attack.
- With advances in Optical Character Recognition (OCR), machines have become more versatile and more capable of identifying fairly obfuscated words.
- the CAPTCHA disclosed herein utilizes media content such as images that each represent an underlying item or concept. Each item or concept may have thousands of different manifestations in images, audio, or video.
- the images or other types of media content can be obfuscated (e.g., overlapped, fuzzed, partially hidden, etc.) to make their identification even more difficult for a machine.
- Another advantage is that the CAPTCHA disclosed herein is user-friendly, particularly if the user is using a relatively small display screen, for example on a mobile handset device.
- In contrast, the obfuscated words of a conventional CAPTCHA may be hard for a human user to recognize when displayed on a small screen. Even if the user can correctly recognize the words, it may still be inconvenient for the user to input them using a handset device with a small keyboard or no keyboard. This may lead to a frustrating verification experience for the user.
- the media-based CAPTCHA disclosed herein allows the users to easily identify the media components even on a small screen, because the media components represent common things that the user is familiar with.
- the CAPTCHA disclosed herein also allows the user to easily maneuver the media components using fingers (e.g., by dragging, clicking, pinching, or circling) or another suitable input mechanism according to the instructions given, thereby obviating an awkward inputting experience.
- the CAPTCHA disclosed herein also offers the advantage of reducing the probability of success of solving CAPTCHAs through outsourced human labor.
- the CAPTCHAs herein can be designed and implemented to be culturally and/or geographically relevant, which makes them hard to solve even for human users from a different region.
- the CAPTCHA disclosed herein can also be designed for a specific user based on his contextual information, such as his purchasing history, thereby enhancing the security level of the CAPTCHA even further.
- FIG. 8 is a flowchart illustrating a method 200 for utilizing a CAPTCHA to verify a user according to various aspects of the present disclosure.
- the method 200 includes block 210 in which a request is received from a user.
- the user may be a human or a machine.
- the method 200 includes block 220 in which a media object is constructed by a computer processor.
- the media object has at least one of the following characteristics: a visual characteristic, an audio characteristic, and a tactile characteristic.
- the media object contains a plurality of media components.
- at least some of the media components may be obfuscated.
- at least some of the media components may be randomly generated.
- at least some of the media components are relevant to the user's culture or the user's geographical location.
- each media component is a representation of a man-made device or an item that exists in nature.
- the method 200 includes block 230 in which the media object is transmitted to the user.
- the transmission of the object may include displaying the media object on a mobile device such as a mobile handset device.
- the method 200 includes block 240 in which the user is prompted to perform a task involving at least a subset of the media components.
- the task includes at least one of the following: clicking on each of the media components in the subset according to a predefined sequence; and modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation.
- the block 240 may be carried out using obfuscated instructions.
- the method 200 includes block 250 in which a description of an action performed by the user is obtained in response to block 240 .
- the description of the user's actions may be obtained by detecting at least one of the following: detecting a clicking motion, detecting a dragging motion, detecting a pinching motion, and detecting a circling motion.
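The detection of these motions from raw input might be sketched as below, which classifies a single-finger trace of (x, y) samples into a click, a circling motion, or a drag; the distance thresholds are illustrative assumptions, and pinch detection (which needs two simultaneous traces) is omitted.

```python
import math

def classify_gesture(points, click_radius=5.0):
    """Classify a single-finger pointer trace given as (x, y) samples.

    Returns 'click' when the finger barely moves, 'circle' when the path
    returns near its start after covering real distance, and 'drag'
    otherwise. The thresholds here are illustrative assumptions.
    """
    x0, y0 = points[0]
    xn, yn = points[-1]
    net_displacement = math.hypot(xn - x0, yn - y0)
    path_length = sum(math.hypot(x2 - x1, y2 - y1)
                      for (x1, y1), (x2, y2) in zip(points, points[1:]))
    if path_length <= click_radius:
        return "click"
    if net_displacement <= click_radius and path_length > 4 * click_radius:
        return "circle"
    return "drag"
```

The resulting labels, together with the start and end coordinates, would form the "description of the action" passed to the deciding server.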
- the method 200 includes block 260 in which a server decides whether or not to grant the request based on block 250 .
- the server may make the decision based on an evaluation of whether the description of the action corresponds to the task well enough to satisfy a predetermined threshold.
- One or more of the actions in blocks 210 - 260 may be performed using a computer hardware device encoded with software instructions.
- FIG. 9 is a block diagram of a computer system 300 suitable for implementing various methods and devices described herein, for example, the various method blocks of the method 200 .
- user devices may comprise a network communications device (e.g., mobile cellular phone, laptop, personal computer, etc.) capable of communicating with a network
- a service provider device, such as one managed by a third party payment platform
- the service provider device may comprise a network communications device (e.g., mobile cellular phone, laptop, personal computer, etc.) capable of communicating with the network, without departing from the scope of the present disclosure.
- each of the devices may be implemented as the computer system 300 for communication with the network in a manner as follows.
- the computer system 300 such as a mobile communications device and/or a network server, includes a bus component 302 or other communication mechanisms for communicating information, which interconnects subsystems and components, such as processing component 304 (e.g., processor, micro-controller, digital signal processor (DSP), etc.), system memory component 306 (e.g., RAM), static storage component 308 (e.g., ROM), disk drive component 310 (e.g., magnetic or optical), network interface component 312 (e.g., modem or Ethernet card), display component 314 (e.g., cathode ray tube (CRT) or liquid crystal display (LCD)), input component 316 (e.g., keyboard), cursor control component 318 (e.g., mouse or trackball), and image capture component 320 (e.g., analog or digital camera).
- computer system 300 performs specific operations by processor 304 executing one or more sequences of one or more instructions contained in system memory component 306 .
- Such instructions may be read into system memory component 306 from another computer readable medium, such as static storage component 308 or disk drive component 310 .
- hard-wired circuitry may be used in place of (or in combination with) software instructions to implement the present disclosure.
- Logic may be encoded in a computer readable medium, which may refer to any medium that participates in providing instructions to processor 304 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media.
- the computer readable medium is non-transitory.
- non-volatile media includes optical or magnetic disks, such as disk drive component 310
- volatile media includes dynamic memory, such as system memory component 306 .
- data and information related to execution instructions may be transmitted to computer system 300 via transmission media, such as in the form of acoustic or light waves, including those generated during radio wave and infrared data communications.
- transmission media may include coaxial cables, copper wire, and fiber optics, including wires that comprise bus 302 .
- Computer readable media includes, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, carrier wave, or any other medium from which a computer is adapted to read.
- execution of instruction sequences to practice the present disclosure may be performed by computer system 300 .
- a plurality of computer systems 300 coupled by communication link 330 (e.g., a communications network, such as a LAN, WLAN, PSTN, and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks)
- Computer system 300 may transmit and receive messages, data, information and instructions, including one or more programs (i.e., application code) through communication link 330 and communication interface 312 .
- Received program code may be executed by processor 304 as received and/or stored in disk drive component 310 or some other non-volatile storage component for execution.
- various embodiments provided by the present disclosure may be implemented using hardware, software, or combinations of hardware and software.
- the various hardware components and/or software components set forth herein may be combined into composite components comprising software, hardware, and/or both without departing from the spirit of the present disclosure.
- the various hardware components and/or software components set forth herein may be separated into sub-components comprising software, hardware, or both without departing from the scope of the present disclosure.
- software components may be implemented as hardware components and vice-versa.
- Software in accordance with the present disclosure, such as computer program code and/or data, may be stored on one or more computer readable mediums. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein.
Abstract
The present disclosure involves a method. The method includes receiving a request from a user. The method includes constructing, by a computer processor, a media object that contains a plurality of media components. The method includes transmitting the media object to the user. The method includes prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following: selecting each of the media components in the subset according to a predefined sequence; and modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation. The method includes obtaining a description of an action performed by the user in response to the prompting. The method includes deciding whether or not to grant the request based on the obtaining.
Description
- 1. Technical Field
- The present disclosure generally relates to computer security and, more particularly, to a more secure and user-friendly CAPTCHA.
- 2. Related Art
- Online transactions are becoming more and more prevalent, with an ever-increasing number of online entities that may or may not have a physical real world counterpart. Furthermore, the services offered by these online entities have been improving as well. The popularity of online transactions is partially attributable to the ease and convenience of making a transaction online instead of at a physical location. As more and more transactions are done online, fraud prevention becomes critical. Among other methods, one method of detecting potential fraud is to check to see if a user is really a human being or a machine. This type of detection is known as CAPTCHA, which stands for “Completely Automated Public Turing Test to tell Computers and Humans Apart.”
- A conventional CAPTCHA may involve displaying obfuscated words or letters and asking a user to enter the displayed words or letters. Obfuscation of the words or letters is presumed to make the words or letters unrecognizable to a machine, while a human should be able to correctly identify them with relative ease. However, rapid advances in computer technology have made computers much more capable of solving CAPTCHAs, which means conventional CAPTCHAs may not offer as much reliability as they once did. In addition, traditional CAPTCHAs may be difficult for humans to identify correctly as well, particularly if the display is on a small screen such as a mobile telephone screen. Consequently, a human user could be erroneously classified as a machine, which defeats the purpose of the CAPTCHA.
- Therefore, while existing CAPTCHAs have been generally adequate for their intended purposes, they have not been entirely satisfactory in every aspect. It would be advantageous to offer an improved CAPTCHA that is both easy for humans to identify and hard for machines to solve.
- One of the broader forms of the present disclosure involves a method. The method includes: receiving a request from a user; constructing, by a computer processor, a media object that contains a plurality of media components; transmitting the media object to the user; prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following: selecting each of the media components in the subset according to a predefined sequence; and modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation; obtaining a description of an action performed by the user in response to the prompting; and deciding whether or not to grant the request based on the obtaining.
- Another one of the broader forms of the present disclosure involves an apparatus comprising a non-transitory, tangible computer readable storage medium storing a computer program. The computer program has instructions that when executed, perform: receiving a request from a user; constructing a media object that contains a plurality of media components; transmitting the media object to the user; prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following: selecting each of the media components in the subset according to a predefined sequence; and modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation; obtaining a description of an action performed by the user in response to the prompting; and deciding whether or not to grant the request based on the obtaining.
- Yet another one of the broader forms of the present disclosure involves a system. The system involves: means for receiving a request from a user; means for constructing a media object that contains a plurality of media components; means for transmitting the media object to the user; means for prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following: selecting each of the media components in the subset according to a predefined sequence; and modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation; means for obtaining a description of an action performed by the user after the user is prompted; and means for deciding whether or not to grant the request based on the description obtained.
FIG. 1 illustrates a simplified example of a user interface for an online shopping site at a product selection phase. -
FIG. 2 illustrates an example of a CAPTCHA according to various aspects of the present disclosure. -
FIGS. 3-7 illustrate additional examples of CAPTCHAs according to other aspects of the present disclosure. -
FIG. 8 illustrates a flowchart of verifying a user through a CAPTCHA according to various aspects of the present disclosure. -
FIG. 9 shows a block diagram of a computer system for implementing various methods and devices described according to various aspects of the present disclosure.
- It is to be understood that the following disclosure provides many different embodiments, or examples, for implementing different features of the present disclosure. Specific examples of components and arrangements are described below to simplify the present disclosure. These are, of course, merely examples and are not intended to be limiting. Various features may be arbitrarily drawn in different scales for simplicity and clarity.
- In online transactions, a server may receive various requests from users. CAPTCHAs are used to determine whether these users are humans or machines.
FIGS. 1-2 and the accompanying discussions below describe one example situation in which a CAPTCHA may be used according to various aspects of the present disclosure. Referring to FIG. 1, an example user interface 40 from an online shopping site is illustrated. The online shopping site is engaged in the selling of products (goods), where product or good is used herein to include physical goods, digital goods, services, charitable donations, etc. In an embodiment, the user interface 40 is in the form of a web page. The user interface 40 displays a plurality of objects 50 that each represent a different product. The objects 50 may each contain a button, an icon, a picture, or combinations thereof. In another embodiment, the online shopping site sells products at a physical point of sale, such as a store or kiosk, and a user may select physical products to be scanned at the point of sale.
- The products represented by the objects 50 may include physical and tangible goods, including (but not limited to) clothing, electronics, tools, toys, household appliances, books, movies, automotive components, sporting goods, groceries, etc. The products represented by the objects 50 may also include digital goods, which are goods that are stored, delivered, and/or used in an electronic format. As non-limiting examples, digital goods may include electronic books, digital music files, digital images, digital videos, and virtual items. Similar to physical and tangible goods, digital goods can be bought and sold between interested parties. The buyer of a digital good may receive the purchased digital good through an email attachment, a download, or other suitable mechanisms.
- As is illustrated in FIG. 1, the user interface 40 informs a prospective buyer what products are available from the merchant. To initiate the purchasing process, the prospective buyer may click on any one of the objects 50 to add it to the buyer's purchasing queue, which may be in the form of a virtual shopping cart. The prospective buyer may edit the purchasing queue at any time, such as adding or subtracting the quantity of a particular product in the queue or removing a product from the queue altogether. For the sake of simplicity, the details of the purchasing queue are not illustrated herein.
- When the user is ready to check out, a request is sent to a server as part of a verification process. The server may be a server for the online shopping site. The server may also be a server for a third party payment platform such as PAYPAL®. To verify the user's identity and to make sure the user is a human user and not a machine, the user is asked to pass a CAPTCHA test, an embodiment of which is illustrated in FIG. 2 as CAPTCHA 100. The CAPTCHA 100 is a picture-based CAPTCHA in an embodiment and may therefore also be referred to as a picTCHA. The CAPTCHA 100 contains an instructions object 110. The instructions object 110 displays word-based instructions (for example, instructions in English or another suitable language) that prompt the user to perform one or more particular tasks. As an example, the instructions object 110 illustrated herein prompts the user to perform the following two tasks: "drag the boat to the dog" and "drag the fish to the gift."
- The instructions object 110 may be obfuscated to make automated processing of the instructions difficult. In other words, the obfuscation makes the instructions object 110 more difficult for machines to recognize. The obfuscation in the illustrated embodiment is done by drawing random squiggly lines throughout the words of the instructions object 110. In other embodiments, the obfuscation may be done by distorting or warping the words of the instructions object 110, crowding the words together, or otherwise adding more background clutter and noise to the instructions object 110.
- The instructions object 110 may also contain instructions notifying the user that the language of the instructions object 110 may be changed to a different language. For example, the instructions object 110 may contain an image-based link 115 that can alter the displayed language of the instructions object 110. In the embodiment illustrated, the image-based link 115 is a Chinese flag, and the instructions adjacent to the Chinese flag notify the user that the instructions will be displayed in Chinese if the Chinese flag is clicked on. In other embodiments, the image-based link 115 may contain a drop-down menu that can display a plurality of nations' flags (or the names of the nations), and the selection of a particular nation's flag will display the instructions in the language of that nation. In some other alternative embodiments, the language of the instructions object 110 may be automatically adjusted based on contextual information retrieved from the user, such as an Internet Protocol (IP) address of the user's computer, the character set of the user's computer, or the previous history of the user. For example, if the IP address of the user's computer indicates that the computer is located in Japan, then the language of the instructions object 110 may be automatically switched to Japanese.
- The CAPTCHA 100 also contains a media object 120. The media object 120 includes a plurality of media components, examples of which are illustrated herein as media components 121-133. The media components 121-133 are images of common everyday items, for example things that exist in nature or things that are man-made. In more detail, the media component 121 is a dog; the media component 122 is a mouse; the media component 123 is a car; the media component 124 is a camel; the media component 125 is a gift; the media component 126 is the head of a pig wearing sunglasses; the media component 127 is a cow; the media component 128 is a fish; the media component 129 is a zebra; the media component 130 is a boat; the media component 131 is a cat; the media component 132 is a fly; and the media component 133 is a bug.
- Each of these media components 121-133 is easy for a human user to recognize and identify, but they may be more difficult for a machine to identify. Unlike characters or letters, the concept or entity represented by each media component herein may take any number of different forms. For example, the media component 126 is a cartoonized pig's head wearing sunglasses. A human user should have no difficulty recognizing that the media component 126 still represents a pig. However, a machine needs to scour a large database to find images tagged with "pig" that closely match the media component 126. This scouring task may take a long time, after which it may still yield an incorrect result.
- At least a subset of the media components 121-133 may be obfuscated to make them even harder for machines to identify. For example, the media components 125-126 partially overlap with one another. Thus, they may appear as a single entity to a machine, since there are no clear boundaries separating them. However, to a human user, it is clear that the media components 125-126 represent different items. Other examples of obfuscation techniques (not necessarily illustrated herein) may include:
- Making the images fuzzy or out of focus
- Reducing the contrast for the images
- Adding peculiar coloration to the images
- Rotating the images so that they are viewed from peculiar angles
- Partially hiding one image with another
- Transforming an image (e.g., making a cow squiggly, or ripping it up into multiple pieces and putting the pieces back together)
- Other suitable obfuscation techniques may apply, but they are not discussed herein for the sake of simplicity. It is also understood that the media components 121-133 need not necessarily be images or items with only visual characteristics. In some embodiments, the media components 121-133 may be video clips and may have audio or tactile characteristics as well.
- The media components 121-133 may also have varying levels of abstraction. As an example, an image of a cow may be abstracted as a collection of lines that delineates the head of a cow. In other words, the image of the cow is not taken from any actual pictures of real cows or even caricatures of cows. It can merely be a rough sketch of a part of a cow, for example a sketch resembling what a child would draw as a cow. Such abstraction usually does not deter a human user from recognizing that the image indeed represents a cow, but it makes the image even harder for a machine to identify correctly. The media components 121-133 may also have different sizes.
- The media components 121-133 may be randomly generated according to certain underlying parameters indicating what the media components may look like. For example, a computer server that performs the generation of the media components 121-133 may have access to a database that stores hundreds or thousands of images (or videos) for each of a plurality of different items, such as thousands of images of dogs, thousands of images of boats, thousands of images of automobiles, thousands of images of cows, etc. The server may receive a command to generate an image of a dog, a boat, an automobile, and a cow. In response, the server may randomly choose one of the images of a dog, a boat, an automobile, and a cow from the thousands of available images.
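The random generation step can be sketched as a choice from a per-category image store, as below; the database layout and filenames are illustrative assumptions.

```python
import random

# Stand-in for a database that stores many images per item category; a real
# deployment might hold thousands of files per category (filenames assumed).
IMAGE_DB = {
    "dog":  ["dog_0001.png", "dog_0002.png", "dog_0003.png"],
    "boat": ["boat_0001.png", "boat_0002.png"],
    "cow":  ["cow_0001.png", "cow_0002.png"],
}

def generate_media_components(categories, rng=None):
    """For each requested category, randomly pick one of its stored images."""
    rng = rng or random.Random()
    return {category: rng.choice(IMAGE_DB[category]) for category in categories}
```

Because each run draws different files, a machine cannot memorize a fixed answer set for a given category.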
- The user is prompted to manipulate a subset of the media components 121-133 according to the instructions given by the instructions object 110. For example, in the embodiment illustrated herein, the user is supposed to first drag (using a finger or mouse) the boat (media component 130) to the dog (media component 121), and then drag the fish (media component 128) to the gift (media component 125). Once again, a human user should easily recognize these images and will be able to correctly perform the tasks according to the instructions.
- The instructions object 110 of the embodiment illustrated herein prompts the user to change the location properties of a subset of the media components by dragging them. In other embodiments, the instructions object 110 may prompt the user to manipulate the media components 121-133 differently. For example, in an alternative embodiment, the user may be asked to change the size of a subset of the media components: the instructions object 110 may read “enlarge the car, and shrink the cat.” In another embodiment, the user may be asked to change the orientation of a subset of the media components: the instructions object 110 may read “rotate the cow clockwise by 90 degrees, and rotate the mouse counterclockwise by 180 degrees.” In yet another embodiment, the user may be asked to change the appearance of a subset of the media components: the instructions object 110 may read “color the dog with a blue color, and color the zebra with a pink color.” In that embodiment, a palette of colors may be displayed, and the user may be able to select, such as by clicking or tapping on a certain color in the palette to assign that color to a particular media component.
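For an orientation-modification task such as "rotate the cow clockwise by 90 degrees," the server-side check might compare angles modulo a full turn, as sketched below; the 10-degree tolerance is an illustrative assumption.

```python
def rotation_correct(requested_degrees, observed_degrees, tolerance=10.0):
    """Compare the requested and observed rotation angles modulo a full
    turn, accepting any result within `tolerance` degrees."""
    diff = (observed_degrees - requested_degrees) % 360.0
    return min(diff, 360.0 - diff) <= tolerance
```

Analogous checks could compare scale factors for "enlarge"/"shrink" tasks or color values for recoloring tasks.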
- In another embodiment, the user may be asked to identify a subset of the media components based on similar classifications: the instructions object 110 may read “click on all the mammals,” “click on all things that could be dangerous,” “click on all vehicles,” “click on the three images that are largest in the real world,” or “click on all images that are cartoon characters.” The instructions may be specific to a type of user device or more general. For example, if the system knows the user is on a touch screen device, the instructions may ask the user to “tap” on specific objects. A more general instruction may ask the user to “select” specific objects, which may include clicking, tapping, moving a pointer or mouse, etc. In some embodiments, the user may be asked to group images relative to each other: the instructions object 110 may read “drag the image that is eaten to the image that eats it.” In other embodiments, the user may be asked to match items within a suite of corporate products: the instructions object 110 may read “click on all characters that interact with Donald Duck.”
- In another embodiment, the user may be asked to select a subset of the media components 121-133 according to a predefined sequence: the instructions object 110 may read “click on the dog, then click on the fish, then click on the boat, then click on the cow, and finally click on the camel.” In yet another embodiment, the instructions object 110 itself may contain an image or an item that has visual and/or audio characteristics. For example, the instructions object 110 may contain an image of a dog and instruct the user to click on all the images of the

media object 120 that match that image. In some embodiments, the user may be asked to perform two or more of the tasks discussed above concurrently, for example to simultaneously drag the dog to the car (with one finger) and to drag the fish to the boat (with another finger). In some other embodiments, the user may be asked to perform two or more of the tasks discussed above according to a predefined sequence. For example, the user may be asked to first drag the dog to the car, then to click on all the mammals, then to rotate the fish clockwise by 90 degrees, and then to shrink the zebra. -
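A server-side check for the predefined-sequence task above could be a simple ordered comparison of component labels. The function name and label strings are illustrative assumptions:

```python
def verify_click_sequence(expected, clicked):
    """Grant only if the user clicked the components in exactly the
    prescribed order (e.g. dog, fish, boat, cow, camel)."""
    return clicked == expected

expected = ["dog", "fish", "boat", "cow", "camel"]
assert verify_click_sequence(expected, ["dog", "fish", "boat", "cow", "camel"])
assert not verify_click_sequence(expected, ["fish", "dog", "boat", "cow", "camel"])
```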
FIGS. 3-7 illustrate additional aspects of the CAPTCHA disclosed herein. According to one aspect of the present disclosure, logic (or a puzzle) is utilized to manipulate media components. Referring to FIG. 3, a media object 140A contains a plurality of media components 141-145, and a media object 140B contains a plurality of media components 146-150. The media components 141-145 are arranged in a sequence, and the media components 146-150 serve as candidate answers. An empty box 155 is placed at the end of the sequence. The box 155 may also be considered a media component. - Interposed between the media objects 140A and 140B is an
instructions object 160, which displays a message, “drag an image below into the box that best completes the above sequence.” In some embodiments, the instructions object 160 may be obfuscated. To a human user with basic logic reasoning skills, it is apparent that the sequence formed by the media components 141-145 is: dog/cat/dog/cat/dog. In other words, the logic behind the sequence involves cats and dogs alternating with each other. As such, since the last media component 145 in the sequence is a dog, the cat corresponding to media component 147 would best complete the sequence. On the other hand, a machine most likely would not be able to understand the logic behind the sequence, particularly (as in the illustrated embodiment) if the images of the cats are all different from one another, as are the images of the dogs. In that case, it would be difficult for the machine to identify any type of pattern, and thus it would be hard for the machine to solve the logic problem disclosed herein. - It is understood that the puzzle disclosed herein is merely an example, and that other types of puzzles or logic problems may be implemented in alternative embodiments. For example, in an embodiment, a sliding puzzle such as the “gem puzzle” or “game of fifteen” or “mystic square” may be implemented. In such a puzzle, the user may be prompted to navigate tiles within a confined area until the tiles are ordered according to a certain sequence (for example, in a numerically ascending sequence). For the sake of simplicity, these additional puzzles are not specifically illustrated herein.
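The alternating-sequence puzzle of FIG. 3 could be validated server-side using the underlying category labels, which the server knows even though the displayed images all differ. This sketch assumes a period-two alternation and illustrative names:

```python
def completes_alternating_sequence(sequence, candidate):
    """Return True if the candidate's category continues a two-category
    alternating pattern. The server compares category labels, not pixels,
    so visually distinct cat/dog images pose no problem for verification."""
    if len(sequence) < 2:
        return False
    return candidate == sequence[-2]  # alternation repeats with period 2

assert completes_alternating_sequence(["dog", "cat", "dog", "cat", "dog"], "cat")
assert not completes_alternating_sequence(["dog", "cat", "dog", "cat", "dog"], "dog")
```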
- According to another aspect of the present disclosure, the user may be shown a series of actions (or procedure), after which the user is prompted to repeat those actions. This idea is illustrated in
FIGS. 4 and 5. Referring to FIG. 4, an original screenshot presented to the user is a structure 165 that contains a plurality of openings 166-169. The openings 166-169 are located at the top, bottom, left, and right of the structure 165, respectively. Also shown in the screenshot is a plurality of media components 170-178 that are located within the structure 165. The media components 170-178 may have different shapes, colors, or sizes from one another. The media components 170-178 are balls (shown as circles) in the embodiment illustrated in FIG. 4, but they may have different shapes in alternative embodiments, for example triangles or squares. - The following series of actions may be demonstrated to the user: a finger (an image of a finger) touches one of the media components 170-178, for example the
media component 173, and subsequently drags the media component 173 out of one of the openings 166-169, for example the opening 166 at the top of the structure 165. The media component 173 may be a yellow ball in this embodiment, whereas the rest of the media components are balls of different colors. The screen is then reset. In an embodiment, the media components 170-178 return to the same previous locations and have the same sizes, shapes, and colors. In an alternative embodiment, the media components 170-178 may be placed in different locations than in the previous screen and may have different sizes, shapes, or colors. The openings may be moved to different locations as well (for example, the top opening 166 may be shifted to its right by a certain distance). These changes may be implemented to enhance the difficulty of automation by a machine. - The user is prompted to repeat the course of actions previously demonstrated. The prompting of the user may be performed through an instructions object (not illustrated herein) which may read, “repeat the previous series of actions.” A human user watching the previously demonstrated series of actions will understand that he is supposed to drag a yellow ball out of the top opening, even if the yellow ball is not necessarily at the same location as before, and the location of the top opening has been altered slightly.
- On the other hand, this task is much more difficult for the machine to perform. First, it may be quite difficult for a machine to detect the demonstrated series of actions. Second, even if the machine can correctly detect the demonstrated actions, it may not correctly decipher the instructions. And lastly, even if a machine can detect that the actions involve moving a round yellow object from location A to location B, and that the task is to “repeat” these actions, the machine may not know how to properly “repeat” this procedure if the yellow ball (and/or the top opening) is no longer at the same location. For example, a red ball may be located at the yellow ball's previous location, whereas the yellow ball is now located elsewhere. The machine may be forced to make a difficult choice as to whether “repeating” the procedure entails moving the red ball (because it is where the previous object used to be) out of the top opening, or whether it entails moving the yellow ball (now located elsewhere) out of the top opening. Stated differently, even an “intelligent” machine that can detect the demonstrated procedure and decipher the instructions may still not be able to perform the task correctly, whereas a human user can do this with ease.
-
FIG. 5 is another illustration of a concept similar to that discussed above in association with FIG. 4. Referring to FIG. 5, an original screenshot shown to the user includes a plurality of media components 180-183 stacked on top of one another. In the illustrated embodiment, at the bottom of the stack is the media component 180, which is a rectangle. The media component 181 stacked on top of the media component 180 is a square. The media component 182 stacked on top of the media component 181 is a trapezoid. The media component 183 stacked on top of the media component 182 is a circle. - An instructions object (not illustrated herein) may contain instructions that prompt the user to memorize the stacking scheme formed by the media components 180-183. When the user is ready, the media components 180-183 are dispersed, and the user is instructed to assemble the media components 180-183 according to the stacking scheme shown in
FIG. 5. Once again, this task should be relatively easy for a human user, but it may be difficult for a machine to solve. - It is understood that the user may accomplish the navigation of the media components discussed above using his finger(s) on a touch-screen device. Alternatively, the user may use a mouse or a keyboard. For example, one of the objects may be initially highlighted. Pressing the “Tab” key on a keyboard may move the highlight cursor to another object, and pressing the directional arrow keys on the keyboard may move that highlighted object in the direction corresponding to the directional arrow key.
- According to another aspect of the present disclosure, the user may be asked to manipulate one media component relative to (or with respect to) another media component. Referring to
FIG. 6, two media components 190 and 191 are illustrated. The media component 190 must be rotated by a particular angle (or within a particular range of angles) before it can fit inside the media component 191. In some embodiments, the user may need to resize one of the media components to accomplish this task. It is understood that the media components 190-191 illustrated herein are merely examples, and that alternative shapes, sizes, and rotation angles may be randomly generated in other embodiments to enhance the difficulty of automation. - According to yet another aspect of the present disclosure, instructions may be presented graphically (in addition to or instead of textually) to the user as well, in part to further enhance the difficulty of automation by machines. Referring to
FIG. 7, an instructions object includes a textual component 195A and a graphical component 195B. The textual component 195A reads, “Perform these steps to the box.” The graphical component 195B graphically illustrates a rotational arrow sign with a corresponding text “90°”, a downward directional arrow sign, and a right directional arrow sign. Also illustrated in FIG. 7 is a media component 198 that is a box in the illustrated embodiment. A human user may quickly understand that according to the graphical component 195B, he is supposed to rotate the box by 90 degrees, then move the box downward, and then move the box to the right. But once again, a machine may have a difficult time comprehending the instructions (particularly since the instructions are graphical in nature) and therefore will likely fail the CAPTCHA. - A user may perform the tasks through a plurality of suitable mechanisms. In one embodiment, the user performs the tasks on a touch-screen according to the displayed instructions. In another embodiment, the user may perform the tasks using a joystick. In yet another embodiment, the user may perform the tasks using a mouse or another suitable pointing device. In another embodiment, the user may perform the tasks using an eyeball-tracker. In other embodiments, the user may perform the tasks through a voice command system.
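A server could verify that the user performed the graphically instructed steps (rotate, then move down, then move right) by comparing the observed gesture sequence against the instructed one, with tolerances for imprecise finger motion. The step encoding and tolerance values below are illustrative assumptions:

```python
def verify_steps(instructed, performed, angle_tol=5.0, move_tol=5.0):
    """Compare instructed steps with those the user performed, in order.
    Steps are ('rotate', degrees) or ('move', dx, dy); small deviations
    are tolerated so imprecise gestures still pass."""
    if len(instructed) != len(performed):
        return False
    for want, got in zip(instructed, performed):
        if want[0] != got[0]:
            return False
        if want[0] == "rotate":
            if abs(want[1] - got[1]) > angle_tol:
                return False
        else:  # "move"
            if abs(want[1] - got[1]) > move_tol or abs(want[2] - got[2]) > move_tol:
                return False
    return True

# rotate 90 degrees, move down, move right (screen y grows downward)
steps = [("rotate", 90), ("move", 0, 40), ("move", 40, 0)]
assert verify_steps(steps, [("rotate", 88), ("move", 2, 41), ("move", 38, -1)])
assert not verify_steps(steps, [("rotate", 45), ("move", 0, 40), ("move", 40, 0)])
```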
- A description of the user's actions is obtained by the server. For example, through a touch-screen display, movement of the user's fingers on the screen can be detected. The movement may include clicking, dragging, pinching, and/or circling. In other embodiments, the user may manipulate objects using an accelerometer, for example by physically moving or rotating a handheld device. Based on the detected user actions (e.g., finger movement on a screen), the server can identify the objects the user attempted to manipulate. Given the description of the user's actions, the server can determine whether the user's actions correspond with the instructions well enough to satisfy a predetermined threshold, and accordingly whether or not to grant the user's request.
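Identifying which media component the user attempted to manipulate amounts to hit-testing the detected touch coordinate against each component's screen region. A minimal sketch, assuming axis-aligned bounding boxes and illustrative names:

```python
def component_at(point, components):
    """Map a touch/click coordinate to the media component whose bounding
    box contains it, or None. Boxes are (x, y, width, height)."""
    px, py = point
    for name, (x, y, w, h) in components.items():
        if x <= px <= x + w and y <= py <= y + h:
            return name
    return None

layout = {"dog": (0, 0, 50, 50), "boat": (100, 0, 50, 50)}
assert component_at((110, 25), layout) == "boat"
assert component_at((75, 25), layout) is None
```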
- The predetermined threshold may be implemented differently in different embodiments. For example, in one embodiment, the threshold may be “did the outer boundary of the boat touch the outer boundary of the dog at the end of the dragging motion?” In other embodiments, the predetermined threshold may be set up so that the two media components need not actually touch each other; as long as they are dragged to within a small distance (e.g., 1 millimeter) of each other, the threshold is satisfied. In other words, the verification of the user can be exact (the task is solved perfectly) or fuzzy (the task is solved sufficiently well, but not perfectly).
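The fuzzy variant of the threshold can be sketched as a distance check between the dragged component's final position and the target. The pixel-based gap value and function name are assumptions for illustration:

```python
import math

def drag_satisfies_threshold(dragged_pos, target_pos, max_gap_px=10.0):
    """Fuzzy verification: the dragged component need not touch the target
    exactly, only end up within max_gap_px of it."""
    return math.dist(dragged_pos, target_pos) <= max_gap_px

assert drag_satisfies_threshold((100.0, 100.0), (104.0, 103.0))       # gap = 5
assert not drag_satisfies_threshold((100.0, 100.0), (150.0, 100.0))   # gap = 50
```

Setting `max_gap_px` to zero recovers the exact (must-touch) variant.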
- The threshold may also be dynamically adjusted based on the situation in which a request is made (e.g., high dollar value transaction or low dollar value transaction?) or on past performance of the account (e.g., did the user make identical or similar requests repeatedly in the past?), and/or computer data and IP address range (e.g., based on the user's location corresponding to the IP address, does the user's request appear suspicious?).
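The dynamic adjustment described above might be sketched as a function of the risk signals named in the text. The specific multipliers and parameter names are illustrative assumptions, not values from the disclosure:

```python
def dynamic_threshold(base_px, high_value_txn, suspicious_ip, repeat_user):
    """Tighten (shrink) the allowed gap for risky requests and loosen it
    for accounts with a clean history of similar requests."""
    threshold = base_px
    if high_value_txn:
        threshold *= 0.5   # demand a more precise solution
    if suspicious_ip:
        threshold *= 0.5   # location/IP range looks suspicious
    if repeat_user:
        threshold *= 1.5   # known-good history earns some leniency
    return threshold

assert dynamic_threshold(10.0, True, False, False) == 5.0
assert dynamic_threshold(10.0, False, False, True) == 15.0
```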
- It is understood that the CAPTCHA disclosed herein may also be adapted to filter out (or conversely, be confined to) particular types of users. As an example, the server may generate media components that children are familiar with, such as characters from popular children's shows. The instructions may be given in an audio manner. As such, children who cannot read or write may still be able to pass the CAPTCHA test and be verified as legitimate human users. On the other hand, if the CAPTCHA is supposed to only pass users older than a particular age, then the user is asked to identify things that are only recognized by users of a certain age, for example things that can be found in an office. Most children who have not been to an office will not be able to identify the objects in an office and thus will likely fail the CAPTCHA test. As another example, the user may be asked to identify things that cost less than $20 each. Children who have no experience paying bills will likely fail this test as well.
- In recent years, fraudsters have also relied on outsourcing to solve CAPTCHAs. For example, human labor can be hired in low wage countries to perform CAPTCHA solving tasks. The CAPTCHA disclosed herein can also be adapted to defeat such outsourcing (or at least reduce the likelihood of its success) by making the tasks culturally or geographically relevant. For example, if the CAPTCHA is aimed at verifying users of a particular country like the United States, the CAPTCHA may display images of famous athletes of different sports (e.g., Michael Jordan for basketball or Joe Montana for football) as well as representations of these sports (e.g., a basketball hoop or a football field) and ask the user to match the athletes to their sports. Even human users from a foreign country may have a difficult time performing this task correctly, since they may not be familiar with these athletes who are famous in the United States.
- As another example, the CAPTCHA may play a tune from a well-known nursery rhyme in a given country or region and ask the user to pick a media component (for example an image) that relates to the nursery rhyme. For instance, the tune from “London Bridge is falling down” may be played, and an image of a bridge may be displayed as one of the many media components. If the user clicks on the image of the bridge, he may be verified. Outsourced human labor may also encounter difficulty trying to solve these tasks because they likely have not been exposed to these nursery rhymes from other countries.
- As yet another example, among other things, the CAPTCHA may display a landmark or an item that is famous to users of a particular region, and ask the user to identify these landmarks or items. Such tasks will once again be difficult for outsourced human labor to solve, provided that they are not from that particular region.
- Using another example, the CAPTCHA may also be fine-tuned to (or zeroed-in on) a specific human user. The CAPTCHA may be generated according to that specific human user's history or preferences. For example, the CAPTCHA may ask the user to “click on all images representing products you have recently bought on this site” or “click on all images that you have previously seen on this site.” In this manner, only a targeted human user will likely pass the verification test.
- In each of the examples discussed above, the solving of the tasks requires relevant context, which the desired types of human users have but machines or other types of human users do not.
- The embodiments of the CAPTCHA disclosed herein offer advantages over conventional CAPTCHAs, it being understood that other embodiments may offer different advantages, and that no particular advantage is required for all embodiments. One advantage is that the CAPTCHA disclosed herein is much harder for machines to solve. Conventional CAPTCHAs often rely on obfuscated words alone to thwart a machine attack. However, as Optical Character Recognition (OCR) technologies continue to advance, machines have become more versatile and more capable of identifying fairly obfuscated words. In comparison, the CAPTCHA disclosed herein utilizes media content such as images that each represent an underlying item or concept. Each item or concept may have thousands of different manifestations in images, audio, or video. Thus, it is much harder for a machine to correctly associate the media content with the item or concept that it represents. And as discussed above, the images or other types of media content can be obfuscated (e.g., overlapped, fuzzed, partially hidden, etc.) to make their identification even more difficult for a machine.
- Another advantage is that by having a plurality of different media components, the CAPTCHA herein is much more difficult to solve by a machine from a mathematical standpoint. For example, if there are two instructions and fifteen media components, then there are 15*14*13*12=32760 different permutations of tasks (assuming no identical repeat tasks). As such, there is only a 1/32760 chance for a machine to randomly solve the CAPTCHA correctly. As the number of instructions and/or the number of media components increase, the number of permutations of tasks increases as well, thereby reducing the likelihood that a machine can randomly solve the CAPTCHA correctly.
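The permutation count above corresponds to ordered selections of four distinct components from fifteen (two drag instructions, each consuming a source and a target). A quick check using the standard library:

```python
from math import perm

# Two drag instructions consume four distinct components in order,
# matching the 15*14*13*12 figure in the text.
instructions, components = 2, 15
chances = perm(components, 2 * instructions)
assert chances == 32760
print(f"random-guess success probability: 1/{chances}")
```

Increasing either factor grows `chances` rapidly, which is the mathematical hardening the text describes.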
- Another advantage of the CAPTCHA disclosed herein is that it is user-friendly, particularly if the user is using a relatively small display screen, for example a mobile handset device. With conventional types of CAPTCHAs, the obfuscated words displayed on a small screen may make them hard for the human user to recognize. Even if the user can correctly recognize the words, it may still be inconvenient for the user to input them using a handset device with a small keyboard or no keyboard. This may lead to a frustrating verification experience for the user. In comparison, the media-based CAPTCHA disclosed herein allows the users to easily identify the media components even on a small screen, because the media components represent common things that the user is familiar with. The CAPTCHA disclosed herein also allows the user to easily maneuver the media components using fingers (e.g., by dragging, clicking, pinching, or circling) or another suitable input mechanism according to the instructions given, thereby obviating an awkward inputting experience.
- The CAPTCHA disclosed herein also offers the advantage of reducing the probability of success of solving CAPTCHAs through outsourced human labor. As discussed above, the CAPTCHAs herein can be designed and implemented to be culturally and/or geographically relevant, which makes them hard to solve even for human users from a different region. In some cases, the CAPTCHA disclosed herein is also designed for a specific user based on his contextual information, such as his purchasing history, thereby enhancing the security level of the CAPTCHA even further.
-
FIG. 8 is a flowchart illustrating a method 200 for utilizing a CAPTCHA to verify a user according to various aspects of the present disclosure. The method 200 includes block 210 in which a request is received from a user. The user may be a human or a machine. The method 200 includes block 220 in which a media object is constructed by a computer processor. The media object has at least one of the following characteristics: a visual characteristic, an audio characteristic, and a tactile characteristic. The media object contains a plurality of media components. In an embodiment, at least some of the media components may be obfuscated. In an embodiment, at least some of the media components may be randomly generated. In an embodiment, at least some of the media components are relevant to the user's culture or the user's geographical location. In an embodiment, each media component is a representation of a man-made device or an item that exists in nature. - The
method 200 includes block 230 in which the media object is transmitted to the user. The transmission of the object may include displaying the media object on a mobile device such as a mobile handset device. The method 200 includes block 240 in which the user is prompted to perform a task involving at least a subset of the media components. The task includes at least one of the following: clicking on each of the media components in the subset according to a predefined sequence; and modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation. The block 240 may be carried out using obfuscated instructions. The method 200 includes block 250 in which a description of an action performed by the user is obtained in response to block 240. The description of the user's actions may be obtained by detecting at least one of the following: a clicking motion, a dragging motion, a pinching motion, and a circling motion. - The
method 200 includes block 260 in which a server decides whether or not to grant the request based on block 250. The server may make the decision based on an evaluation of whether the description of the action corresponds to the task well enough to satisfy a predetermined threshold. One or more of the actions in blocks 210-260 may be performed using a computer hardware device encoded with software instructions. -
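The flow of blocks 220-260 can be sketched end to end as a single server-side routine. The callable parameters and their names are assumptions for illustration; each stand-in below marks the block it corresponds to:

```python
def verify_request(build_challenge, transmit, obtain_actions, satisfies):
    """Sketch of blocks 220-260: construct the media object, transmit it
    (with its task prompt), obtain a description of the user's actions,
    and decide whether to grant the request."""
    challenge = build_challenge()            # block 220
    transmit(challenge)                      # blocks 230-240 (display + prompt)
    actions = obtain_actions()               # block 250
    return satisfies(challenge, actions)     # block 260

granted = verify_request(
    build_challenge=lambda: {"task": ("drag", "boat", "dog")},
    transmit=lambda c: None,                 # stand-in for sending to the device
    obtain_actions=lambda: [("drag", "boat", "dog")],
    satisfies=lambda c, a: c["task"] in a,
)
assert granted
```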
FIG. 9 is a block diagram of a computer system 300 suitable for implementing various methods and devices described herein, for example, the various method blocks of the method 200. In various implementations, user devices may comprise a network communications device (e.g., mobile cellular phone, laptop, personal computer, etc.) capable of communicating with a network, and a service provider device (such as managed by a third party payment platform) may comprise a network computing device (e.g., a network server). In other implementations, it should be appreciated that the service provider device may comprise a network communications device (e.g., mobile cellular phone, laptop, personal computer, etc.) capable of communicating with the network, without departing from the scope of the present disclosure. Accordingly, it should be appreciated that each of the devices may be implemented as the computer system 300 for communication with the network in a manner as follows. - In accordance with various embodiments of the present disclosure, the
computer system 300, such as a mobile communications device and/or a network server, includes a bus component 302 or other communication mechanisms for communicating information, which interconnects subsystems and components, such as processing component 304 (e.g., processor, micro-controller, digital signal processor (DSP), etc.), system memory component 306 (e.g., RAM), static storage component 308 (e.g., ROM), disk drive component 310 (e.g., magnetic or optical), network interface component 312 (e.g., modem or Ethernet card), display component 314 (e.g., cathode ray tube (CRT) or liquid crystal display (LCD)), input component 316 (e.g., keyboard), cursor control component 318 (e.g., mouse or trackball), and image capture component 320 (e.g., analog or digital camera). In one implementation, disk drive component 310 may comprise a database having one or more disk drive components. - In accordance with embodiments of the present disclosure,
computer system 300 performs specific operations by processor 304 executing one or more sequences of one or more instructions contained in system memory component 306. Such instructions may be read into system memory component 306 from another computer readable medium, such as static storage component 308 or disk drive component 310. In other embodiments, hard-wired circuitry may be used in place of (or in combination with) software instructions to implement the present disclosure. - Logic may be encoded in a computer readable medium, which may refer to any medium that participates in providing instructions to
processor 304 for execution. Such a medium may take many forms, including but not limited to, non-volatile media and volatile media. In one embodiment, the computer readable medium is non-transitory. In various implementations, non-volatile media includes optical or magnetic disks, such as disk drive component 310, and volatile media includes dynamic memory, such as system memory component 306. In one aspect, data and information related to execution instructions may be transmitted to computer system 300 via a transmission media, such as in the form of acoustic or light waves, including those generated during radio wave and infrared data communications. In various implementations, transmission media may include coaxial cables, copper wire, and fiber optics, including wires that comprise bus 302. - Some common forms of computer readable media include, for example, floppy disk, flexible disk, hard disk, magnetic tape, any other magnetic medium, CD-ROM, any other optical medium, punch cards, paper tape, any other physical medium with patterns of holes, RAM, PROM, EPROM, FLASH-EPROM, any other memory chip or cartridge, carrier wave, or any other medium from which a computer is adapted to read.
- In various embodiments of the present disclosure, execution of instruction sequences to practice the present disclosure may be performed by
computer system 300. In various other embodiments of the present disclosure, a plurality of computer systems 300 coupled by communication link 330 (e.g., a communications network, such as a LAN, WLAN, PSTN, and/or various other wired or wireless networks, including telecommunications, mobile, and cellular phone networks) may perform instruction sequences to practice the present disclosure in coordination with one another. -
Computer system 300 may transmit and receive messages, data, information and instructions, including one or more programs (i.e., application code) through communication link 330 and communication interface 312. Received program code may be executed by processor 304 as received and/or stored in disk drive component 310 or some other non-volatile storage component for execution. - Where applicable, various embodiments provided by the present disclosure may be implemented using hardware, software, or combinations of hardware and software. Also, where applicable, the various hardware components and/or software components set forth herein may be combined into composite components comprising software, hardware, and/or both without departing from the spirit of the present disclosure. Where applicable, the various hardware components and/or software components set forth herein may be separated into sub-components comprising software, hardware, or both without departing from the scope of the present disclosure. In addition, where applicable, it is contemplated that software components may be implemented as hardware components and vice-versa.
- Software, in accordance with the present disclosure, such as computer program code and/or data, may be stored on one or more computer readable mediums. It is also contemplated that software identified herein may be implemented using one or more general purpose or specific purpose computers and/or computer systems, networked and/or otherwise. Where applicable, the ordering of various steps described herein may be changed, combined into composite steps, and/or separated into sub-steps to provide features described herein.
- It should be appreciated that like reference numerals are used to identify like elements illustrated in one or more of the figures, wherein these labeled figures are for purposes of illustrating embodiments of the present disclosure and not for purposes of limiting the same.
- The foregoing disclosure is not intended to limit the present disclosure to the precise forms or particular fields of use disclosed. As such, it is contemplated that various alternate embodiments and/or modifications to the present disclosure, whether explicitly described or implied herein, are possible in light of the disclosure. Having thus described embodiments of the present disclosure, persons of ordinary skill in the art will recognize that changes may be made in form and detail without departing from the scope of the present disclosure. Thus, the present disclosure is limited only by the claims.
Claims (29)
1. A method, comprising:
receiving a request from a user;
constructing, by a computer processor, a media object that contains a plurality of media components;
transmitting the media object to the user;
prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following:
selecting each of the media components in the subset according to a predefined sequence; and
modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation;
obtaining a description of an action performed by the user in response to the prompting; and
deciding whether or not to grant the request based on the obtaining.
2. The method of claim 1, wherein at least one of the receiving, the transmitting, the prompting, the obtaining, and the deciding is carried out using a computer hardware device encoded with software instructions.
3. The method of claim 1, wherein the deciding is performed based on an evaluation of whether the description of the action is within a predetermined threshold.
4. The method of claim 1, wherein the obtaining the description comprises at least one of the following: detecting a clicking motion, detecting a dragging motion, detecting a pinching motion, detecting a tapping motion, and detecting a circling motion.
5. The method of claim 1, wherein the media object has at least one of the following characteristics: a visual characteristic, an audio characteristic, and a tactile characteristic.
6. The method of claim 1, wherein the constructing is carried out in a manner such that at least some of the media components are obfuscated.
7. The method of claim 1, wherein the constructing is carried out in a manner such that at least some of the media components are randomly generated.
8. The method of claim 1, wherein the constructing is carried out in a manner such that at least some of the media components are relevant to the user's culture or the user's geographical location.
9. The method of claim 1, wherein each media component is a representation of a man-made device or an item that exists in nature.
10. The method of claim 1, wherein the prompting the user is carried out using one or more obfuscated instructions.
11. The method of claim 1, wherein the prompting includes displaying graphical instructions.
12. The method of claim 1, wherein the transmitting comprises displaying the media object on a mobile device.
13. The method of claim 1, wherein the user is one of: a human and a machine.
14. The method of claim 1, wherein the task includes one of: solving a puzzle and repeating a demonstrated procedure.
15. An apparatus comprising a non-transitory, tangible computer readable storage medium storing a computer program, wherein the computer program contains instructions that when executed, perform:
receiving a request from a user;
constructing a media object that contains a plurality of media components;
transmitting the media object to the user;
prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following:
selecting each of the media components in the subset according to a predefined sequence; and
modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation;
obtaining a description of an action performed by the user in response to the prompting; and
deciding whether or not to grant the request based on the obtaining.
16. The apparatus of claim 15 , wherein the instructions for the deciding comprise instructions for making an evaluation of whether the description of the action is within a predetermined threshold.
17. The apparatus of claim 15 , wherein the instructions for the obtaining the description comprise instructions for performing at least one of the following: detecting a clicking motion, detecting a dragging motion, detecting a tapping motion, and detecting a circling motion.
18. The apparatus of claim 15 , wherein the instructions for constructing the media object comprise instructions for constructing a media object having at least one of the following characteristics: a visual characteristic, an audio characteristic, and a tactile characteristic.
19. The apparatus of claim 15 , wherein the instructions for constructing the media object comprise instructions for constructing a media object that is at least partially obfuscated.
20. The apparatus of claim 15 , wherein the instructions for constructing the media object comprise instructions for constructing a media object that is at least partially randomly generated.
21. The apparatus of claim 15 , wherein the instructions for constructing the media object comprise instructions for constructing a media object that is at least partially relevant to the user's culture or the user's geographical location.
22. A system, comprising:
means for receiving a request from a user;
means for constructing a media object that contains a plurality of media components;
means for transmitting the media object to the user;
means for prompting the user to perform a task involving at least a subset of the media components, the task including at least one of the following:
selecting each of the media components in the subset according to a predefined sequence; and
modifying at least one of the media components in the subset with respect to at least one of its following properties: location, size, appearance, and orientation;
means for obtaining a description of an action performed by the user after the user is prompted; and
means for deciding whether or not to grant the request based on the description obtained.
23. The system of claim 22 , wherein the means for deciding comprises means for evaluating whether the description of the action is within a predetermined threshold.
24. The system of claim 22 , wherein the means for obtaining the description comprises means for detecting at least one of the following motions: clicking, dragging, pinching, tapping, and circling.
25. The system of claim 22 , wherein the means for constructing the media object comprises means for constructing a media object having at least one of the following characteristics: a visual characteristic, an audio characteristic, and a tactile characteristic.
26. The system of claim 22 , wherein the means for constructing comprises means for randomly generating and obfuscating at least some of the media components.
27. The system of claim 22 , wherein the means for constructing comprises means for generating media components that are relevant to the user's culture or the user's geographical location.
28. The system of claim 22 , wherein the means for prompting the user comprises means for displaying obfuscated instructions to the user.
29. The system of claim 22 , wherein the means for transmitting the media object comprises means for displaying the media object on a handset device.
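The claims above deliberately prescribe no implementation. As a purely illustrative sketch of one variant, the sequence-selection task of claim 1 combined with the threshold evaluation of claim 3 might look as follows; every name here (`MediaComponent`, `construct_media_object`, `decide`) and the pixel-distance threshold are hypothetical choices, not part of the disclosure:

```python
import math
import random
from dataclasses import dataclass


@dataclass
class MediaComponent:
    """One selectable element inside the composite media object."""
    name: str
    x: float  # on-screen position, in pixels
    y: float


def construct_media_object(names, width=300.0, height=200.0, seed=None):
    """Scatter the named components at random positions (a toy stand-in
    for the random generation of claim 7)."""
    rng = random.Random(seed)
    return [MediaComponent(n, rng.uniform(0, width), rng.uniform(0, height))
            for n in names]


def decide(components, expected_order, clicks, threshold=20.0):
    """Grant only if each click lands within `threshold` pixels of the
    component expected at that step (one reading of claim 3's
    'within a predetermined threshold')."""
    if len(clicks) != len(expected_order):
        return False
    by_name = {c.name: c for c in components}
    for name, (cx, cy) in zip(expected_order, clicks):
        target = by_name[name]
        if math.hypot(cx - target.x, cy - target.y) > threshold:
            return False
    return True
```

A server following this sketch would transmit the component positions as the media object, prompt "select the tree, then the sun, then the boat," and grant the request only when `decide` returns `True` for the reported click coordinates.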
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/174,394 US20130007875A1 (en) | 2011-06-30 | 2011-06-30 | Interactive CAPTCHA |
US15/095,648 US9710643B2 (en) | 2011-06-30 | 2016-04-11 | Interactive CAPTCHA |
US15/653,086 US10114942B2 (en) | 2011-06-30 | 2017-07-18 | Interactive CAPTCHA |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/174,394 US20130007875A1 (en) | 2011-06-30 | 2011-06-30 | Interactive CAPTCHA |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/095,648 Continuation US9710643B2 (en) | 2011-06-30 | 2016-04-11 | Interactive CAPTCHA |
Publications (1)
Publication Number | Publication Date |
---|---|
US20130007875A1 true US20130007875A1 (en) | 2013-01-03 |
Family
ID=47392130
Family Applications (3)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/174,394 Abandoned US20130007875A1 (en) | 2011-06-30 | 2011-06-30 | Interactive CAPTCHA |
US15/095,648 Active US9710643B2 (en) | 2011-06-30 | 2016-04-11 | Interactive CAPTCHA |
US15/653,086 Active US10114942B2 (en) | 2011-06-30 | 2017-07-18 | Interactive CAPTCHA |
Family Applications After (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/095,648 Active US9710643B2 (en) | 2011-06-30 | 2016-04-11 | Interactive CAPTCHA |
US15/653,086 Active US10114942B2 (en) | 2011-06-30 | 2017-07-18 | Interactive CAPTCHA |
Country Status (1)
Country | Link |
---|---|
US (3) | US20130007875A1 (en) |
Cited By (31)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120047262A1 (en) * | 2009-04-27 | 2012-02-23 | Koninklijke Kpn N.V. | Managing Undesired Service Requests in a Network |
CN103078745A (en) * | 2013-02-04 | 2013-05-01 | 马骏 | Logon authentication code based on picture |
US20130160095A1 (en) * | 2011-12-14 | 2013-06-20 | Nokia Corporation | Method and apparatus for presenting a challenge response input mechanism |
US8789139B2 (en) * | 2012-12-20 | 2014-07-22 | Hewlett-Packard Development Company, L.P. | Automated test to tell computers and humans apart |
CN103971045A (en) * | 2013-01-25 | 2014-08-06 | 苏州精易会信息技术有限公司 | Click type verification code implementation method |
US20140230045A1 (en) * | 2013-02-14 | 2014-08-14 | Lg Cns Co., Ltd. | Graphical object-based user authentication for computerized devices |
US8904493B1 (en) * | 2012-08-29 | 2014-12-02 | Google Inc. | Image-based challenge-response testing |
CN104424426A (en) * | 2013-08-21 | 2015-03-18 | 北大方正集团有限公司 | Verification method and verification device |
US20150172276A1 (en) * | 2013-12-17 | 2015-06-18 | Mastercard International Incorporated | Systems and Methods for User Authentication |
TWI489312B (en) * | 2013-05-30 | 2015-06-21 | Univ Nat Taiwan Science Tech | Authentication method and authentication system |
US20150324162A1 (en) * | 2014-05-09 | 2015-11-12 | Samsung Electronics Co., Ltd. | Method and device for controlling multiple displays |
US20160004856A1 (en) * | 2014-07-04 | 2016-01-07 | Otto WANG | Server, user apparatus and terminal device |
US20160050211A1 (en) * | 2014-08-18 | 2016-02-18 | Dropbox, Inc. | Access management using electronic images |
JP2016513327A (en) * | 2013-03-01 | 2016-05-12 | ゴーゴー・エルエルシー | Judgment of human stimulation in computing devices |
US20160253746A1 (en) * | 2015-02-27 | 2016-09-01 | 3D Product Imaging Inc. | Augmented reality e-commerce |
CN106355072A (en) * | 2016-08-19 | 2017-01-25 | 沈建国 | Implementation method and device for three-dimensional model verification code |
CN106713241A (en) * | 2015-11-16 | 2017-05-24 | 腾讯科技(深圳)有限公司 | Identity verification method, device and system |
US9722985B2 (en) * | 2014-05-01 | 2017-08-01 | Qualcomm Incorporated | Sensory output for image association |
US9742751B2 (en) | 2012-11-05 | 2017-08-22 | Paypal, Inc. | Systems and methods for automatically identifying and removing weak stimuli used in stimulus-based authentication |
CN107317805A (en) * | 2017-06-21 | 2017-11-03 | 北京洋浦伟业科技发展有限公司 | Verification code checking method and device |
US20190026448A1 (en) * | 2017-07-18 | 2019-01-24 | International Business Machines Corporation | Cognitive behavioral security controls |
US20190163351A1 (en) * | 2016-05-13 | 2019-05-30 | Beijing Jingdong Century Trading Co., Ltd. | System and method for processing screenshot-type note of streaming document |
CN109951431A (en) * | 2017-12-21 | 2019-06-28 | 北京京东尚科信息技术有限公司 | Verification method, system, electronic equipment and computer-readable medium |
CN110266497A (en) * | 2013-12-04 | 2019-09-20 | 腾讯科技(深圳)有限公司 | Auth method, device, server, user terminal and storage medium |
USD903712S1 (en) * | 2015-08-26 | 2020-12-01 | Truist Bank | Portion of a display screen with icon |
USD948569S1 (en) * | 2020-05-15 | 2022-04-12 | Barel Ip, Inc | Computing device display screen or portion thereof with an icon |
USD951296S1 (en) * | 2020-05-15 | 2022-05-10 | Barel Ip, Inc | Computing device display screen or portion thereof with an animated icon |
USD951297S1 (en) * | 2020-05-15 | 2022-05-10 | Barel Ip, Inc. | Computing device display screen or portion thereof with an icon |
US11347835B2 (en) * | 2017-02-20 | 2022-05-31 | Alibaba Group Holding Limited | Verification method and device |
US11501309B2 (en) | 2021-01-29 | 2022-11-15 | Shopify Inc. | Systems and methods for selectively preventing origination of transaction requests |
US11798047B2 (en) * | 2020-02-27 | 2023-10-24 | Shopify Inc. | Systems and methods to regulate sales of a product in an online store |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130007875A1 (en) * | 2011-06-30 | 2013-01-03 | Ebay, Inc. | Interactive CAPTCHA |
US9990040B2 (en) * | 2015-09-25 | 2018-06-05 | Immersion Corporation | Haptic CAPTCHA |
US10346606B2 (en) | 2017-08-16 | 2019-07-09 | International Business Machines Corporation | Generation of a captcha on a handheld touch screen device |
US10877560B2 (en) * | 2017-12-22 | 2020-12-29 | Mastercard International Incorporated | Haptic feedback for authentication and security in computer systems |
US11204648B2 (en) | 2018-06-12 | 2021-12-21 | Mastercard International Incorporated | Handshake to establish agreement between two parties in virtual reality |
US10896252B2 (en) | 2018-07-03 | 2021-01-19 | International Business Machines Corporation | Composite challenge task generation and deployment |
US10795982B2 (en) | 2018-08-23 | 2020-10-06 | International Business Machines Corporation | CAPTCHA generation based on environment-specific vocabulary |
WO2020121302A1 (en) | 2018-12-10 | 2020-06-18 | Conflu3Nce, Ltd. | System and method for user recognition based on cognitive interactions |
Citations (22)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5559961A (en) * | 1994-04-04 | 1996-09-24 | Lucent Technologies Inc. | Graphical password |
US6209104B1 (en) * | 1996-12-10 | 2001-03-27 | Reza Jalili | Secure data entry and visual authentication system and method |
US20090113294A1 (en) * | 2007-10-30 | 2009-04-30 | Yahoo! Inc. | Progressive captcha |
US20090276839A1 (en) * | 2008-05-02 | 2009-11-05 | Fortknock Protection Llc | Identity collection, verification and security access control system |
US20090328163A1 (en) * | 2008-06-28 | 2009-12-31 | Yahoo! Inc. | System and method using streaming captcha for online verification |
US20100043062A1 (en) * | 2007-09-17 | 2010-02-18 | Samuel Wayne Alexander | Methods and Systems for Management of Image-Based Password Accounts |
US20100169958A1 (en) * | 2006-10-13 | 2010-07-01 | University of Idaho | Method for generating and using composite scene passcodes |
US20100180336A1 (en) * | 2009-01-13 | 2010-07-15 | Nolan Jones | System and Method for Authenticating a User Using a Graphical Password |
US20100306055A1 (en) * | 2009-05-26 | 2010-12-02 | Knowledge Probe, Inc. | Compelled user interaction with advertisement with dynamically generated challenge |
US20110016520A1 (en) * | 2009-07-15 | 2011-01-20 | Ira Cohen | Authentication system and methods |
US20110029781A1 (en) * | 2009-07-31 | 2011-02-03 | International Business Machines Corporation | System, method, and apparatus for graduated difficulty of human response tests |
US7992202B2 (en) * | 2007-12-28 | 2011-08-02 | Sungkyunkwan University Foundation For Corporate Collaboration | Apparatus and method for inputting graphical password using wheel interface in embedded system |
US8245277B2 (en) * | 2008-10-15 | 2012-08-14 | Towson University | Universally usable human-interaction proof |
US20120304284A1 (en) * | 2011-05-24 | 2012-11-29 | Microsoft Corporation | Picture gesture authentication |
US20130019278A1 (en) * | 2011-07-15 | 2013-01-17 | Industrial Technology Research Institute | Captcha image authentication method and system |
US8392975B1 (en) * | 2008-05-29 | 2013-03-05 | Google Inc. | Method and system for image-based user authentication |
US20130145441A1 (en) * | 2011-06-03 | 2013-06-06 | Dhawal Mujumdar | Captcha authentication processes and systems using visual object identification |
US8671058B1 (en) * | 2009-08-07 | 2014-03-11 | Gary Isaacs | Methods and systems for generating completely automated public tests to tell computers and humans apart (CAPTCHA) |
US8713703B2 (en) * | 2011-07-31 | 2014-04-29 | International Business Machines Corporation | Advanced CAPTCHA using images in sequence |
US20140282979A1 (en) * | 2013-03-15 | 2014-09-18 | Charles A. Andon | User authentication systems for remote computers, internet applications and online services |
US20140325220A1 (en) * | 2013-03-17 | 2014-10-30 | David Tunnell | "Unpassword": Risk Aware End-to-End Multi-Factor Authentication Via Dynamic Pairing |
US8918851B1 (en) * | 2013-07-26 | 2014-12-23 | Michael Iannamico | Juxtapositional image based authentication system and apparatus |
Family Cites Families (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US8892673B1 (en) * | 2003-08-08 | 2014-11-18 | Radix Holdings, Llc | Hybrid challenge-response |
US9014666B2 (en) * | 2006-12-15 | 2015-04-21 | Avaya Inc. | Authentication based on geo-location history |
US20110166916A1 (en) * | 2008-06-12 | 2011-07-07 | Ads Captcha Ltd. | Time-Resolved & User-Spatially-Activated Feedback Entrance and Method Thereof |
CN102207809A (en) * | 2010-03-30 | 2011-10-05 | 国基电子(上海)有限公司 | Handheld device and unlocking method thereof |
WO2012010743A1 (en) * | 2010-07-23 | 2012-01-26 | Nokia Corporation | Method and apparatus for authorizing a user or a user device based on location information |
US8401522B2 (en) * | 2011-02-21 | 2013-03-19 | Carmela R. Crawford | Systems, methods and apparatus for authenticating access to enterprise resources |
US8346217B2 (en) * | 2011-02-21 | 2013-01-01 | Knowledge Solutions, LLC | Systems, methods and apparatus for controlling access to mobile devices |
US20120222100A1 (en) * | 2011-02-24 | 2012-08-30 | International Business Machines Corporation | Advanced captcha using integrated images |
US20130007875A1 (en) * | 2011-06-30 | 2013-01-03 | Ebay, Inc. | Interactive CAPTCHA |
US20130036461A1 (en) * | 2011-08-01 | 2013-02-07 | Apple Inc. | Image-based authentication |
US9397841B2 (en) * | 2013-06-26 | 2016-07-19 | Excalibur Ip, Llc | Motion-based human verification system and method |
US9319419B2 (en) * | 2013-09-26 | 2016-04-19 | Wave Systems Corp. | Device identification scoring |
US9871784B2 (en) * | 2015-01-22 | 2018-01-16 | Tata Consultancy Services Limited | System and method for image based location specific password construction and authentication |
- 2011-06-30: US application 13/174,394 (published as US20130007875A1) — not active, Abandoned
- 2016-04-11: US application 15/095,648 (granted as US9710643B2) — Active
- 2017-07-18: US application 15/653,086 (granted as US10114942B2) — Active
Patent Citations (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US5559961A (en) * | 1994-04-04 | 1996-09-24 | Lucent Technologies Inc. | Graphical password |
US6209104B1 (en) * | 1996-12-10 | 2001-03-27 | Reza Jalili | Secure data entry and visual authentication system and method |
US20100169958A1 (en) * | 2006-10-13 | 2010-07-01 | University of Idaho | Method for generating and using composite scene passcodes |
US20100043062A1 (en) * | 2007-09-17 | 2010-02-18 | Samuel Wayne Alexander | Methods and Systems for Management of Image-Based Password Accounts |
US20090113294A1 (en) * | 2007-10-30 | 2009-04-30 | Yahoo! Inc. | Progressive captcha |
US7992202B2 (en) * | 2007-12-28 | 2011-08-02 | Sungkyunkwan University Foundation For Corporate Collaboration | Apparatus and method for inputting graphical password using wheel interface in embedded system |
US20090276839A1 (en) * | 2008-05-02 | 2009-11-05 | Fortknock Protection Llc | Identity collection, verification and security access control system |
US8392975B1 (en) * | 2008-05-29 | 2013-03-05 | Google Inc. | Method and system for image-based user authentication |
US20090328163A1 (en) * | 2008-06-28 | 2009-12-31 | Yahoo! Inc. | System and method using streaming captcha for online verification |
US8245277B2 (en) * | 2008-10-15 | 2012-08-14 | Towson University | Universally usable human-interaction proof |
US20100180336A1 (en) * | 2009-01-13 | 2010-07-15 | Nolan Jones | System and Method for Authenticating a User Using a Graphical Password |
US8347103B2 (en) * | 2009-01-13 | 2013-01-01 | Nic, Inc. | System and method for authenticating a user using a graphical password |
US20100306055A1 (en) * | 2009-05-26 | 2010-12-02 | Knowledge Probe, Inc. | Compelled user interaction with advertisement with dynamically generated challenge |
US20110016520A1 (en) * | 2009-07-15 | 2011-01-20 | Ira Cohen | Authentication system and methods |
US20110029781A1 (en) * | 2009-07-31 | 2011-02-03 | International Business Machines Corporation | System, method, and apparatus for graduated difficulty of human response tests |
US8671058B1 (en) * | 2009-08-07 | 2014-03-11 | Gary Isaacs | Methods and systems for generating completely automated public tests to tell computers and humans apart (CAPTCHA) |
US20120304284A1 (en) * | 2011-05-24 | 2012-11-29 | Microsoft Corporation | Picture gesture authentication |
US20130145441A1 (en) * | 2011-06-03 | 2013-06-06 | Dhawal Mujumdar | Captcha authentication processes and systems using visual object identification |
US20130019278A1 (en) * | 2011-07-15 | 2013-01-17 | Industrial Technology Research Institute | Captcha image authentication method and system |
US8607331B2 (en) * | 2011-07-15 | 2013-12-10 | Industrial Technology Research Institute | Captcha image authentication method and system |
US8713703B2 (en) * | 2011-07-31 | 2014-04-29 | International Business Machines Corporation | Advanced CAPTCHA using images in sequence |
US20140282979A1 (en) * | 2013-03-15 | 2014-09-18 | Charles A. Andon | User authentication systems for remote computers, internet applications and online services |
US20140325220A1 (en) * | 2013-03-17 | 2014-10-30 | David Tunnell | "Unpassword": Risk Aware End-to-End Multi-Factor Authentication Via Dynamic Pairing |
US8918851B1 (en) * | 2013-07-26 | 2014-12-23 | Michael Iannamico | Juxtapositional image based authentication system and apparatus |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120047262A1 (en) * | 2009-04-27 | 2012-02-23 | Koninklijke Kpn N.V. | Managing Undesired Service Requests in a Network |
US9603022B2 (en) * | 2009-04-27 | 2017-03-21 | Koninklijke Kpn N.V. | Managing undesired service requests in a network |
US11234128B2 (en) | 2009-04-27 | 2022-01-25 | Koninklijke Kpn N.V. | Managing undesired service requests in a network |
US20130160095A1 (en) * | 2011-12-14 | 2013-06-20 | Nokia Corporation | Method and apparatus for presenting a challenge response input mechanism |
US8904493B1 (en) * | 2012-08-29 | 2014-12-02 | Google Inc. | Image-based challenge-response testing |
US9742751B2 (en) | 2012-11-05 | 2017-08-22 | Paypal, Inc. | Systems and methods for automatically identifying and removing weak stimuli used in stimulus-based authentication |
US8789139B2 (en) * | 2012-12-20 | 2014-07-22 | Hewlett-Packard Development Company, L.P. | Automated test to tell computers and humans apart |
CN103971045A (en) * | 2013-01-25 | 2014-08-06 | 苏州精易会信息技术有限公司 | Click type verification code implementation method |
CN103078745A (en) * | 2013-02-04 | 2013-05-01 | 马骏 | Logon authentication code based on picture |
US20140230045A1 (en) * | 2013-02-14 | 2014-08-14 | Lg Cns Co., Ltd. | Graphical object-based user authentication for computerized devices |
JP2016513327A (en) * | 2013-03-01 | 2016-05-12 | ゴーゴー・エルエルシー | Determining human stimuli at computing devices |
TWI489312B (en) * | 2013-05-30 | 2015-06-21 | Univ Nat Taiwan Science Tech | Authentication method and authentication system |
CN104424426A (en) * | 2013-08-21 | 2015-03-18 | 北大方正集团有限公司 | Verification method and verification device |
CN110266497A (en) * | 2013-12-04 | 2019-09-20 | 腾讯科技(深圳)有限公司 | Auth method, device, server, user terminal and storage medium |
US20150172276A1 (en) * | 2013-12-17 | 2015-06-18 | Mastercard International Incorporated | Systems and Methods for User Authentication |
US10164968B2 (en) * | 2013-12-17 | 2018-12-25 | Mastercard International Incorporated | Systems and methods for user authentication |
US9722985B2 (en) * | 2014-05-01 | 2017-08-01 | Qualcomm Incorporated | Sensory output for image association |
US20150324162A1 (en) * | 2014-05-09 | 2015-11-12 | Samsung Electronics Co., Ltd. | Method and device for controlling multiple displays |
US9886228B2 (en) * | 2014-05-09 | 2018-02-06 | Samsung Electronics Co., Ltd. | Method and device for controlling multiple displays using a plurality of symbol sets |
US20160004856A1 (en) * | 2014-07-04 | 2016-01-07 | Otto WANG | Server, user apparatus and terminal device |
US9633193B2 (en) * | 2014-07-04 | 2017-04-25 | Otto WANG | Server, user apparatus and terminal device |
US10181023B2 (en) * | 2014-07-04 | 2019-01-15 | Otto WANG | Verification method, method to open a web page and method to open an electronic file |
US20160050211A1 (en) * | 2014-08-18 | 2016-02-18 | Dropbox, Inc. | Access management using electronic images |
US10270780B2 (en) * | 2014-08-18 | 2019-04-23 | Dropbox, Inc. | Access management using electronic images |
US20160253746A1 (en) * | 2015-02-27 | 2016-09-01 | 3D Product Imaging Inc. | Augmented reality e-commerce |
US10497053B2 (en) * | 2015-02-27 | 2019-12-03 | 3D Product Imaging Inc. | Augmented reality E-commerce |
USD926817S1 (en) | 2015-08-26 | 2021-08-03 | Truist Bank | Portion of a display screen with icon |
USD903712S1 (en) * | 2015-08-26 | 2020-12-01 | Truist Bank | Portion of a display screen with icon |
CN106713241A (en) * | 2015-11-16 | 2017-05-24 | 腾讯科技(深圳)有限公司 | Identity verification method, device and system |
US10817154B2 (en) * | 2016-05-13 | 2020-10-27 | Beijing Jingdong Shangke Information Technology Co., Ltd. | System and method for processing screenshot-type note of streaming document |
US20190163351A1 (en) * | 2016-05-13 | 2019-05-30 | Beijing Jingdong Century Trading Co., Ltd. | System and method for processing screenshot-type note of streaming document |
CN106355072A (en) * | 2016-08-19 | 2017-01-25 | 沈建国 | Implementation method and device for three-dimensional model verification code |
US11347835B2 (en) * | 2017-02-20 | 2022-05-31 | Alibaba Group Holding Limited | Verification method and device |
US11860989B2 (en) | 2017-02-20 | 2024-01-02 | Alibaba Group Holding Limited | Verification method and device |
TWI787211B (en) * | 2017-02-20 | 2022-12-21 | 香港商阿里巴巴集團服務有限公司 | Verification method and device |
CN107317805A (en) * | 2017-06-21 | 2017-11-03 | 北京洋浦伟业科技发展有限公司 | Verification code checking method and device |
US20190026448A1 (en) * | 2017-07-18 | 2019-01-24 | International Business Machines Corporation | Cognitive behavioral security controls |
US10740445B2 (en) * | 2017-07-18 | 2020-08-11 | International Business Machines Corporation | Cognitive behavioral security controls |
CN109951431A (en) * | 2017-12-21 | 2019-06-28 | 北京京东尚科信息技术有限公司 | Verification method, system, electronic equipment and computer-readable medium |
US11798047B2 (en) * | 2020-02-27 | 2023-10-24 | Shopify Inc. | Systems and methods to regulate sales of a product in an online store |
USD951296S1 (en) * | 2020-05-15 | 2022-05-10 | Barel Ip, Inc | Computing device display screen or portion thereof with an animated icon |
USD951297S1 (en) * | 2020-05-15 | 2022-05-10 | Barel Ip, Inc. | Computing device display screen or portion thereof with an icon |
USD948569S1 (en) * | 2020-05-15 | 2022-04-12 | Barel Ip, Inc | Computing device display screen or portion thereof with an icon |
US11501309B2 (en) | 2021-01-29 | 2022-11-15 | Shopify Inc. | Systems and methods for selectively preventing origination of transaction requests |
US11847585B2 (en) | 2021-01-29 | 2023-12-19 | Shopify Inc. | Systems and methods for selectively preventing origination of transaction requests |
Also Published As
Publication number | Publication date |
---|---|
US10114942B2 (en) | 2018-10-30 |
US20170316200A1 (en) | 2017-11-02 |
US20160224783A1 (en) | 2016-08-04 |
US9710643B2 (en) | 2017-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10114942B2 (en) | Interactive CAPTCHA | |
US11687631B2 (en) | Method for generating a human likeness score | |
US11669884B2 (en) | System and method for providing data to a merchant device from a user device over a wireless link | |
US20190236259A1 (en) | Method for 3d graphical authentication on electronic devices | |
US9466081B2 (en) | System and method of providing a buy option response to a generalized search | |
US8645220B2 (en) | Method and system for creating an augmented reality experience in connection with a stored value token | |
US9985943B1 (en) | Automated agent detection using multiple factors | |
US20140258029A1 (en) | Embedded multimedia interaction platform | |
US10438225B1 (en) | Game-based automated agent detection | |
US11604859B2 (en) | Method and system for incorporating marketing in user authentication | |
US20140047527A1 (en) | System and Method for Detecting and Preventing Automated Interaction Based on Detected Actions Performed by User to Solve a Proffered Puzzle | |
CN107682368B (en) | Verification method, client, server and system based on interactive operation | |
US20210089639A1 (en) | Method and system for 3d graphical authentication on electronic devices | |
US20160140553A1 (en) | Authentication and transactions in a three-dimensional image enhancing display device | |
CN103903305A (en) | 3D Bot Detection | |
CN113190310B (en) | Verification code design method based on random position object semantic recognition | |
US20230297661A1 (en) | Computer challenge systems based on shape combinations | |
Mohamed et al. | On the security and usability of dynamic cognitive game CAPTCHAs | |
Liu et al. | A fluid human interactive proof in virtual environment | |
WO2015044590A1 (en) | Method for authenticating a user provided with a first device by a second device |
Legal Events
- AS (Assignment): Owner: EBAY INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; assignors: JAKOBSSON, BJORN MARKUS; PALMER, JIM ROY; MALDONADO, GUSTAVO; signing dates from 20110630 to 20110722; reel/frame: 026634/0436
- AS (Assignment): Owner: PAYPAL, INC., CALIFORNIA. Free format text: ASSIGNMENT OF ASSIGNORS INTEREST; assignor: EBAY INC.; reel/frame: 036169/0774; effective date: 20150717
- STCB (Information on status: application discontinuation): Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION