CN106569599B - Method and device for automatically seeking help - Google Patents
Method and device for automatically seeking help
- Publication number
- CN106569599B (application CN201610936700.0A)
- Authority
- CN
- China
- Prior art keywords
- information
- user
- mouth shape
- help
- seeking
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/011—Arrangements for interaction with the human body, e.g. for user immersion in virtual reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/72418—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality for supporting emergency services
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M11/00—Telephonic communication systems specially adapted for combination with other electrical systems
- H04M11/04—Telephonic communication systems specially adapted for combination with other electrical systems with alarm systems, e.g. fire, police or burglar alarm systems
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Signal Processing (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- Emergency Management (AREA)
- Computer Networks & Wireless Communication (AREA)
- Telephonic Communication Services (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
The invention provides a method and a device for automatically seeking help, wherein the method comprises the following steps: obtaining mouth shape information from a user; determining, according to the mouth shape information, help-seeking information corresponding to the user; and sending the help-seeking information to at least one predetermined number. According to the scheme of the invention, help seeking can be initiated automatically based on the user's mouth shape information. Because mouthing words is relatively inconspicuous, the help-seeking process is highly concealed; in particular, when the user is in danger with hostile individuals nearby, help can be sought automatically and covertly, so that the user's safety is protected. The scheme is suitable for all people and can also be used by specific groups (such as users with limited mobility, deaf-mute users and the like) for various kinds of help seeking in daily life.
Description
Technical Field
The invention relates to the technical field of computers, in particular to a method and a device for automatically seeking help.
Background
In the prior art, when a user encounters a sudden dangerous situation, he or she usually has to make a call or send a message manually to raise an alarm, which takes time and requires the manual operation to be carried out without interruption.
In view of the above problem, prior-art solutions trigger an alarm by means of predetermined gestures or voice detection. However, such alarm solutions are poorly concealed; in particular, when the user is in a man-made dangerous environment, the user's personal safety is likely to be endangered if the alarm behaviour is discovered.
Disclosure of Invention
The invention aims to provide a method and a device for automatically seeking help.
According to one aspect of the present invention, a method for automatically seeking help is provided, wherein the method comprises:
obtaining mouth shape information from a user;
according to the mouth shape information, help seeking information corresponding to the user is determined;
and sending the help-seeking information to at least one preset number.
According to another aspect of the present invention, there is also provided an apparatus for automatically seeking help, wherein the apparatus includes:
means for obtaining mouth shape information from a user;
means for determining help-seeking information corresponding to the user according to the mouth shape information;
means for sending the help-seeking information to at least one predetermined number.
Compared with the prior art, the invention has the following advantages: help seeking can be initiated automatically based on the user's mouth shape information. Because mouthing words is relatively inconspicuous, the help-seeking process is highly concealed; in particular, when the user is in danger with hostile individuals nearby, help can be sought automatically and covertly, so that the user's safety is protected. The scheme is suitable for all people and can also be used by specific groups (such as users with limited mobility, deaf-mute users and the like) for various kinds of help seeking in daily life.
Drawings
Other features, objects and advantages of the invention will become more apparent upon reading of the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is a schematic flow diagram of a method for automatic help seeking according to one embodiment of the present invention;
FIG. 2 is a schematic flow chart of a method for automatic help seeking according to another embodiment of the present invention;
FIG. 3 is a schematic diagram of an apparatus for automatic help-seeking according to an embodiment of the present invention;
fig. 4 is a schematic structural diagram of an apparatus for automatically seeking help according to another embodiment of the present invention.
The same or similar reference numbers in the drawings identify the same or similar elements.
Detailed Description
Before discussing exemplary embodiments in more detail, it should be noted that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart may describe the operations as a sequential process, many of the operations can be performed in parallel, concurrently, or simultaneously. In addition, the order of the operations may be re-arranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figure. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The term "user equipment" in this context refers to an intelligent electronic device that can perform predetermined processes such as numerical calculation and/or logic calculation by executing predetermined programs or instructions, and may include a processor and a memory, wherein the predetermined processes are performed by the processor executing program instructions prestored in the memory, or the predetermined processes are performed by hardware such as ASIC, FPGA, DSP, or a combination thereof. The user equipment comprises any equipment which can be carried and used by a user, such as a tablet computer, a smart phone, a PDA and the like; preferably, the user device is a wearable device.
It should be noted that the above user equipment is only an example; other existing or future user equipment, if applicable to the present invention, is also intended to be included within the scope of protection of the present invention and is incorporated herein by reference.
The methodologies discussed hereinafter, some of which are illustrated by flow diagrams, may be implemented by hardware, software, firmware, middleware, microcode, hardware description languages, or any combination thereof. When implemented in software, firmware, middleware or microcode, the program code or code segments to perform the necessary tasks may be stored in a machine or computer readable medium such as a storage medium. The processor(s) may perform the necessary tasks.
Specific structural and functional details disclosed herein are merely representative and are provided for purposes of describing example embodiments of the present invention. The present invention may, however, be embodied in many alternate forms and should not be construed as limited to only the embodiments set forth herein.
It will be understood that, although the terms first, second, etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another. For example, a first element may be termed a second element, and, similarly, a second element may be termed a first element, without departing from the scope of example embodiments. As used herein, the term "and/or" includes any and all combinations of one or more of the associated listed items.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of example embodiments. As used herein, the singular forms "a", "an" and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms "comprises" and/or "comprising," when used herein, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be noted that, in some alternative implementations, the functions/acts noted may occur out of the order noted in the figures. For example, two steps shown in succession may, in fact, be executed substantially concurrently, or may sometimes be executed in the reverse order, depending upon the functionality/acts involved.
The present invention is described in further detail below with reference to the attached drawing figures.
Fig. 1 is a flowchart illustrating a method for automatically seeking help according to an embodiment of the present invention. The method according to the present embodiment includes step S1, step S2, and step S3.
In said step S1, the user equipment obtains the mouth shape information from the user.
Wherein the mouth shape information comprises any information relating to a mouth shape made by a user; preferably, the mouth shape information includes, but is not limited to: picture information corresponding to each mouth shape of the user, mouth shape characteristic information (such as a lip line, a lip height and a lip width) corresponding to each mouth shape, and the like. Preferably, the mouth shape information is used to indicate a series of mouth shapes made by the user continuously or intermittently over a period of time.
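For illustration only (the patent does not prescribe any particular data layout), the mouth shape information described above could be held as a simple per-shape record; every field name below is hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class MouthShape:
    """One detected mouth shape; field names are illustrative, not from the patent."""
    timestamp: float                        # capture time in seconds
    image: Optional[bytes] = None           # picture information for this mouth shape
    lip_line: Optional[List[float]] = None  # sampled lip contour coordinates
    lip_height: Optional[float] = None      # vertical lip opening
    lip_width: Optional[float] = None       # horizontal lip extent

# "Mouth shape information" over a period of time is then simply a sequence of records:
MouthShapeInfo = List[MouthShape]
```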
The user device may obtain the mouth shape information of the user in various ways, such as by using a camera or an infrared sensor in the user device to detect the mouth shape information of the user.
Wherein the user equipment can detect the mouth shape information of the user in real time or periodically.
Preferably, the user equipment obtains the mouth shape information from the user when it is detected that a predetermined trigger condition is met.
Wherein the predetermined trigger condition comprises any predetermined condition for triggering the operation of detecting the mouth shape; preferably, the predetermined trigger conditions include, but are not limited to:
1) the user performs a predetermined action.
For example, a predetermined action for triggering the mouth shape detection operation is stored in the user equipment in advance (preferably, in view of a possible man-made dangerous environment, an action with a small amplitude that is hard to notice can be set as the predetermined action, such as raising the index finger and then raising the thumb). When the user equipment detects that the user performs the predetermined action, it detects the user's mouth shapes and obtains the mouth shape information from the user.
2) The user emits a predetermined sound signal.
For example, a predetermined sound signal for triggering the mouth shape detection operation is stored in the user equipment in advance (preferably, in view of a possible man-made dangerous environment, a highly inconspicuous sound signal can be set as the predetermined sound signal, such as breathing at a particular frequency). When the user equipment detects that the user emits the predetermined sound signal, it detects the user's mouth shapes and obtains the mouth shape information from the user.
It should be noted that the above-mentioned predetermined triggering condition is only an example and not a limitation of the present invention, and those skilled in the art should understand that any condition for triggering the operation of detecting the mouth shape should be included in the scope of the predetermined triggering condition described in the present invention.
It should be noted that the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for obtaining the mouth shape information from the user should be included in the scope of the present invention.
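A minimal sketch of the trigger logic described above (conditions 1 and 2), assuming hypothetical helper callables `detect_gesture()`, `detect_sound()` and `capture_mouth_shapes()` supplied by the device's sensing layer; the patent does not name any concrete API.

```python
import time

# Hypothetical predetermined triggers; real values would be configured by the user.
PREDETERMINED_GESTURE = "index_then_thumb"
PREDETERMINED_SOUND = "special_breath_pattern"

def wait_for_trigger(detect_gesture, detect_sound, poll_interval=0.2):
    """Poll the sensing layer until a predetermined trigger condition is met.

    detect_gesture() / detect_sound() are assumed to return the most recently
    recognized gesture / sound label, or None when nothing was recognized.
    """
    while True:
        if detect_gesture() == PREDETERMINED_GESTURE:
            return "gesture"
        if detect_sound() == PREDETERMINED_SOUND:
            return "sound"
        time.sleep(poll_interval)

def obtain_mouth_shape_info(capture_mouth_shapes, detect_gesture, detect_sound):
    """Start capturing mouth shapes (step S1) only after a trigger has fired."""
    wait_for_trigger(detect_gesture, detect_sound)
    return capture_mouth_shapes()  # e.g. mouth shapes detected via the camera
```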
In step S2, the user device determines help information corresponding to the user according to the mouth shape information.
Wherein the help information includes any information used to request assistance for the user. Preferably, the help information includes, but is not limited to: help content information, help level information, help type information, and the like.
Specifically, the implementation manner of determining, by the user equipment, help information corresponding to the user according to the mouth shape information includes, but is not limited to:
1) and obtaining effective text information corresponding to the mouth shape information according to the mouth shape information, and determining help seeking information corresponding to the user according to the effective text information. This implementation will be described in detail in the following embodiments.
2) When the mouth shape information comprises a second preset mouth shape, obtaining preset help-seeking information corresponding to the second preset mouth shape, and taking the preset help-seeking information as the help-seeking information corresponding to the user.
The user can preset in the user equipment at least one second predetermined mouth shape and the predetermined help-seeking information corresponding to each second predetermined mouth shape. The predetermined help-seeking information includes any predetermined information for seeking help, such as, but not limited to, help content information, help level information and help type information. It should be noted that, preferably, the user may further set at least one predetermined number corresponding to each second predetermined mouth shape. For example, the user presets the following two second predetermined mouth shapes in the user equipment: M1 and M2; the help-seeking level corresponding to M1 is the first level and its corresponding predetermined number is the police emergency number "110", while the help-seeking level corresponding to M2 is the second level and its corresponding predetermined number is a family member's telephone number.
Specifically, the user equipment matches the mouth shape information against the at least one second predetermined mouth shape stored in advance in the user equipment; when the matching succeeds, it obtains the predetermined help-seeking information corresponding to the matched second predetermined mouth shape and uses that predetermined help-seeking information as the help-seeking information corresponding to the user.
Preferably, when a plurality of second predetermined mouth shapes are matched, predetermined help information having the highest level of help may be used as the help information corresponding to the user.
This implementation enables fast and covert help seeking, and also helps reduce the power consumption of the user equipment.
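As an illustration of this second implementation, the matching flow might look like the sketch below; the similarity function between two mouth shape sequences is assumed to exist (any template-matching or feature-distance method could fill that role), and when several second predetermined mouth shapes match, the entry with the highest help level wins, as noted above.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional, Sequence

@dataclass
class PredefinedEntry:
    """A user-configured second predetermined mouth shape and its help settings."""
    mouth_shape: Sequence            # stored template (e.g. a sequence of mouth shapes)
    help_info: str                   # predetermined help-seeking information
    help_level: int                  # higher value = more urgent
    numbers: List[str] = field(default_factory=list)  # predetermined numbers

def match_predetermined(observed: Sequence,
                        entries: List[PredefinedEntry],
                        similarity: Callable[[Sequence, Sequence], float],
                        threshold: float = 0.8) -> Optional[PredefinedEntry]:
    """Return the matched entry with the highest help level, or None if no match."""
    matched = [e for e in entries
               if similarity(observed, e.mouth_shape) >= threshold]
    if not matched:
        return None
    return max(matched, key=lambda e: e.help_level)
```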
It should be noted that, the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for determining help-seeking information corresponding to a user according to the mouth shape information should be included in the scope of the present invention.
In step S3, the user equipment transmits the help information to at least one predetermined number.
Wherein the user equipment may obtain the at least one predetermined number in a plurality of ways. For example, if the user has preset a plurality of predetermined numbers, the user equipment directly obtains those predetermined numbers; for another example, the user equipment stores in advance at least one predetermined number corresponding to each second predetermined mouth shape, the user equipment determines the help-seeking information in step S2 based on the matched second predetermined mouth shape, and in step S3 the user equipment directly obtains the at least one predetermined number corresponding to the matched second predetermined mouth shape.
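A sketch of the sending step under these assumptions; `send_sms()` stands in for whatever messaging channel the user equipment actually provides and is not part of the patent.

```python
from typing import Iterable, List

def resolve_numbers(default_numbers: List[str], matched_entry=None) -> List[str]:
    """Prefer the numbers bound to a matched second predetermined mouth shape, if any."""
    if matched_entry is not None and getattr(matched_entry, "numbers", None):
        return list(matched_entry.numbers)
    return default_numbers

def send_help(help_info: str, numbers: Iterable[str], send_sms) -> None:
    """Send the help-seeking information to every predetermined number (step S3)."""
    for number in numbers:
        send_sms(number, help_info)  # send_sms(number, text) is a hypothetical callable
```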
As a preferable solution, the method of the present embodiment further includes step S4 and step S5.
In step S4, the user device obtains environment information corresponding to the current location of the user.
The environment information includes any information related to the environment where the current location of the user is located, such as the temperature and humidity of the current location, a live-action picture or video at the current location, a person detection result (e.g., the number of people in a certain range of the user) near the current location, and the like.
Specifically, the user equipment may obtain the environmental information in various manners, such as detecting the temperature and humidity of the current location through a sensor, shooting live-action pictures around the current location in real time through a camera, detecting the number of people near the user through an infrared sensor or the camera, and the like.
It should be noted that the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for obtaining the environment information corresponding to the current location of the user should be included in the scope of the present invention.
In step S5, the user equipment sends the context information to the at least one predetermined number.
It should be noted that the user equipment may send the environment information and the help information to the at least one predetermined number together, or send the environment information and the help information to the at least one predetermined number separately. Preferably, the user device may periodically transmit the environment information after transmitting the help information so that other persons can know the change of the environment around the user.
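A sketch of steps S4 and S5 under the same assumptions as above: the sensor callables and `send_sms()` are hypothetical stand-ins for the device's own sensing and messaging facilities, and the loop mirrors the preferred periodic resending of the environment information after the help information has been sent.

```python
import time
from typing import Callable, Dict, Iterable

def collect_environment_info(read_temperature: Callable[[], float],
                             read_humidity: Callable[[], float],
                             count_people_nearby: Callable[[], int]) -> Dict:
    """Gather environment information for the user's current location (step S4)."""
    return {
        "temperature": read_temperature(),
        "humidity": read_humidity(),
        "people_nearby": count_people_nearby(),
        "timestamp": time.time(),
    }

def send_environment_periodically(numbers: Iterable[str], send_sms,
                                  sensors: Dict[str, Callable],
                                  period_s: float = 60.0,
                                  repeats: int = 10) -> None:
    """Periodically resend environment information after the help message (step S5)."""
    for _ in range(repeats):
        info = collect_environment_info(sensors["temperature"],
                                        sensors["humidity"],
                                        sensors["people"])
        for number in numbers:
            send_sms(number, str(info))
        time.sleep(period_s)
```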
According to the scheme of this embodiment, help seeking can be initiated automatically based on the user's mouth shape information. Because mouthing words is relatively inconspicuous, the help-seeking process is highly concealed; in particular, when the user is in danger with hostile individuals nearby, help can be sought automatically and covertly, so that the user's safety is protected. The scheme is suitable for all people and can also be used by specific groups (such as users with limited mobility, deaf-mute users and the like) for various kinds of help seeking in daily life. In addition, the environment information corresponding to the user's current location can be provided, so that others can judge the user's situation more accurately from the environment information and thus make more accurate rescue decisions and preparations in time.
Fig. 2 is a flowchart illustrating a method for automatically seeking help according to another embodiment of the present invention. The method according to the present embodiment includes step S1, step S2, and step S3; wherein the step S2 further includes step S21 and step S22. The steps S1 and S3 have been described in detail with reference to the embodiment shown in fig. 1, and are not described herein again.
In step S21, the user equipment obtains valid text information corresponding to the mouth shape information according to the mouth shape information of the user.
Wherein the valid text information is text information indicating the help content that the user actually wants to express. It should be noted that the mouth shapes corresponding to the valid text information may be all or only part of the mouth shape information. For example, if the mouth shape information comprises six mouth shapes made consecutively by the user and only the 1st, 3rd and 5th mouth shapes express the help content the user actually wants to convey, then the text corresponding to the 1st, 3rd and 5th mouth shapes is the valid text information corresponding to the mouth shape information.
Specifically, the implementation manner of the user equipment obtaining the effective text information corresponding to the mouth shape information according to the mouth shape information of the user includes, but is not limited to:
1) and executing text conversion operation on the mouth shape information, and taking the conversion result as effective text information corresponding to the mouth shape information.
For example, the mouth shape information includes six mouth shapes continuously made by the user, and the user equipment performs text conversion operations on the six mouth shapes in sequence based on the lip language model to obtain a conversion result "sales XX cell", and the conversion result is taken as valid text information corresponding to the mouth shape information.
2) Extracting a plurality of effective mouth shapes meeting a preset position condition from the mouth shape information; and executing text conversion operation on the plurality of mouth shapes, and taking the conversion result as effective text information corresponding to the mouth shape information.
Wherein the predetermined position condition comprises any condition for indicating the positions of the valid mouth shapes. Preferably, the predetermined position condition includes, but is not limited to:
a) After the position of the first predetermined mouth shape.
The user can preset a first predetermined mouth shape in the user equipment; the mouth shapes after the first predetermined mouth shape in the mouth shape information are the user's valid mouth shapes, and the mouth shapes before it are invalid. If the first predetermined mouth shape is not detected in the mouth shape information, the mouth shape information is regarded as invalid.
b) Every two adjacent valid mouth shapes are separated by the same position interval.
For example, if the predetermined position interval is 2, one valid mouth shape is extracted every 2 positions, and the mouth shapes at the other positions are all invalid.
It should be noted that a) and b) above may be combined; for example, the predetermined position condition may be that, after the position of the first predetermined mouth shape, every two adjacent valid mouth shapes are separated by a position interval of 2.
It should be noted that the above predetermined position conditions are only examples and not limitations of the present invention, and those skilled in the art should understand that any condition for indicating the positions of the valid mouth shapes (for example, the position interval between every two adjacent valid mouth shapes increasing or decreasing successively) should be included within the scope of the predetermined position condition described in the present invention.
As an example, suppose the mouth shape information comprises ten mouth shapes made consecutively by the user and the predetermined position condition is "every two adjacent valid mouth shapes are separated by one position" (that is, the valid mouth shapes are at the odd positions). The user equipment extracts the 1st, 3rd, 5th, 7th and 9th mouth shapes from the ten mouth shapes, performs a text conversion operation on the extracted mouth shapes to obtain the conversion result "kidnapped on Zhongshan Road", and takes the conversion result as the valid text information corresponding to the mouth shape information.
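The two position conditions above, a) start after the first predetermined mouth shape and b) keep one mouth shape every fixed interval, can be combined as in the sketch below; `matches()` is an assumed comparison helper between a detected mouth shape and the stored first predetermined mouth shape.

```python
from typing import Callable, List, Optional, Sequence

def extract_valid_mouth_shapes(shapes: Sequence,
                               interval: int = 2,
                               first_predetermined: Optional[object] = None,
                               matches: Optional[Callable] = None) -> List:
    """Keep only the mouth shapes that satisfy the predetermined position condition.

    - If a first predetermined mouth shape is given, valid shapes start right
      after its first occurrence; if it never occurs, everything is invalid.
    - From that starting point, one shape is kept every `interval` positions.
    """
    start = 0
    if first_predetermined is not None and matches is not None:
        for i, shape in enumerate(shapes):
            if matches(shape, first_predetermined):
                start = i + 1
                break
        else:
            return []  # first predetermined mouth shape absent: all shapes invalid
    return list(shapes[start::interval])

# The example above: ten shapes, valid shapes at the odd positions (interval of 2).
# extract_valid_mouth_shapes(list(range(1, 11)), interval=2) -> [1, 3, 5, 7, 9]
```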
It should be noted that the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for obtaining valid text information corresponding to the mouth shape information according to the mouth shape information of the user should be included in the scope of the present invention.
In step S22, the user device determines help information corresponding to the user according to the valid text information.
Specifically, the implementation manner of determining, by the user device, help information corresponding to the user according to the valid text information includes but is not limited to:
1) Directly use the valid text information as the help information.
2) Further screen the valid text information to determine the help information.
For example, information related to key factors such as place, person and time is screened out from the valid text information, and the screened-out information is used as the help information.
3) When the semantics of the valid text information are incoherent, adjust the word order in the valid text information to obtain new, semantically coherent valid text information, and use the new valid text information as the help-seeking information corresponding to the user.
Specifically, the user equipment performs semantic analysis on the valid text information; when it determines that the semantics are incoherent, it adjusts the word order in the valid text information so that the adjusted valid text information is semantically coherent, and it uses the new valid text information as the help-seeking information corresponding to the user.
It should be noted that implementations 2) and 3) of step S22 above may be combined. For example, when the semantics of the valid text information are incoherent, the word order in the valid text information is adjusted to obtain new, semantically coherent valid text information; information related to key factors such as place, person and time is then screened out from the new valid text information, and the screened-out information is used as the help-seeking information.
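A sketch of this combined implementation; the coherence scorer and the key-factor vocabulary are assumptions (any language model, grammar checker or keyword list could fill these roles) and are not specified by the patent. Brute-force reordering is only practical for the short phrases involved here.

```python
from itertools import permutations
from typing import Callable, List, Set

def reorder_if_incoherent(tokens: List[str],
                          coherence: Callable[[List[str]], float],
                          threshold: float = 0.5,
                          max_tokens: int = 7) -> List[str]:
    """If the token order scores below the coherence threshold, try all permutations
    and keep the best-scoring order (factorial cost, so only for short inputs)."""
    if coherence(tokens) >= threshold or len(tokens) > max_tokens:
        return tokens
    candidates = [list(p) for p in permutations(tokens)]
    return max(candidates, key=coherence)

def screen_key_factors(tokens: List[str], key_terms: Set[str]) -> List[str]:
    """Keep only tokens related to key factors such as place, person and time."""
    return [t for t in tokens if t in key_terms]

def build_help_info(tokens: List[str],
                    coherence: Callable[[List[str]], float],
                    key_terms: Set[str]) -> str:
    """Combine implementations 2) and 3): reorder for coherence, then screen."""
    ordered = reorder_if_incoherent(tokens, coherence)
    screened = screen_key_factors(ordered, key_terms)
    return " ".join(screened if screened else ordered)
```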
It should be noted that, the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for determining help-seeking information corresponding to the user according to the valid text information should be included in the scope of the present invention.
According to the scheme of this embodiment, valid text information corresponding to the mouth shape information can be obtained from the user's mouth shape information so as to determine the user's help-seeking information. The user can thus express important information (such as the dangerous situation he or she is facing) simply by mouthing it, which makes it easier for the recipients of the help-seeking information to learn the user's actual situation and carry out the corresponding rescue work. In addition, the valid text information can be obtained by extracting a plurality of valid mouth shapes that satisfy a predetermined position condition, so that the user can interleave invalid mouth shapes with valid ones to confuse onlookers and avoid being discovered, thereby enhancing the concealment of the help-seeking process. Furthermore, the word order of the valid text information can be adjusted through semantic analysis to obtain semantically coherent help-seeking information, so that the user can deliberately scramble the order of the valid mouth shapes corresponding to the information he or she actually wants to express, which further enhances the concealment of the help-seeking process and is particularly suitable for man-made dangerous environments.
Fig. 3 is a schematic structural diagram of an apparatus for automatically seeking help according to an embodiment of the present invention. The apparatus for automatically seeking help (hereinafter simply referred to as "help seeking apparatus") includes first obtaining means 1, first determining means 2, and first transmitting means 3.
The first obtaining means 1 of the user equipment obtains the mouth shape information from the user.
Wherein the mouth shape information comprises any information relating to a mouth shape made by a user; preferably, the mouth shape information includes, but is not limited to: picture information corresponding to each mouth shape of the user, mouth shape characteristic information (such as a lip line, a lip height and a lip width) corresponding to each mouth shape, and the like. Preferably, the mouth shape information is used to indicate a series of mouth shapes made by the user continuously or intermittently over a period of time.
The first obtaining device 1 may obtain the mouth shape information of the user in various ways, such as detecting the mouth shape information of the user through a camera or an infrared sensor in the user equipment.
Wherein, the first obtaining device 1 can detect the mouth shape information of the user in real time or periodically.
Preferably, the first obtaining means 1 further comprises fourth obtaining means (not shown). The fourth obtaining device is used for obtaining the mouth shape information from the user when the preset triggering condition is detected to be met.
Wherein the predetermined trigger condition comprises any predetermined condition for triggering the operation of detecting the mouth shape; preferably, the predetermined trigger conditions include, but are not limited to:
1) the user performs a predetermined action.
For example, a predetermined action for triggering the mouth shape detection operation is stored in the user equipment in advance (preferably, in view of a possible man-made dangerous environment, an action with a small amplitude that is hard to notice can be set as the predetermined action, such as raising the index finger and then raising the thumb). When it is detected that the user performs the predetermined action, the fourth obtaining device detects the user's mouth shapes and obtains the mouth shape information from the user.
2) The user emits a predetermined sound signal.
For example, a predetermined sound signal for triggering the mouth shape detection operation is stored in the user equipment in advance (preferably, in view of a possible man-made dangerous environment, a highly inconspicuous sound signal can be set as the predetermined sound signal). When it is detected that the user emits the predetermined sound signal, the fourth obtaining device detects the user's mouth shapes and obtains the mouth shape information from the user.
It should be noted that the above-mentioned predetermined triggering condition is only an example and not a limitation of the present invention, and those skilled in the art should understand that any condition for triggering the operation of detecting the mouth shape should be included in the scope of the predetermined triggering condition described in the present invention.
It should be noted that the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for obtaining the mouth shape information from the user should be included in the scope of the present invention.
The first determination device 2 determines help information corresponding to the user according to the mouth shape information.
Wherein the help information includes any information used to request assistance for the user. Preferably, the help information includes, but is not limited to: help content information, help level information, help type information, and the like.
Specifically, the implementation manner of the first determining device 2 determining the help information corresponding to the user according to the mouth shape information includes, but is not limited to:
1) and obtaining effective text information corresponding to the mouth shape information according to the mouth shape information, and determining help seeking information corresponding to the user according to the effective text information. This implementation will be described in detail in the following embodiments.
2) The first determining means 2 further comprise third obtaining means (not shown). And the third obtaining device is used for obtaining preset help-seeking information corresponding to a second preset mouth shape when the mouth shape information is judged to comprise the second preset mouth shape, and the preset help-seeking information is used as the help-seeking information corresponding to the user.
The user can preset in the user equipment at least one second predetermined mouth shape and the predetermined help-seeking information corresponding to each second predetermined mouth shape. The predetermined help-seeking information includes any predetermined information for seeking help, such as, but not limited to, help content information, help level information and help type information. It should be noted that, preferably, the user may further set at least one predetermined number corresponding to each second predetermined mouth shape. For example, the user presets the following two second predetermined mouth shapes in the user equipment: M1 and M2; the help-seeking level corresponding to M1 is the first level and its corresponding predetermined number is the police emergency number "110", while the help-seeking level corresponding to M2 is the second level and its corresponding predetermined number is a family member's telephone number.
Specifically, the third obtaining device matches the mouth shape information with at least one second predetermined mouth shape stored in the user equipment in advance, and when the matching is successful, obtains predetermined help-seeking information corresponding to the matched second predetermined mouth shape, and takes the predetermined help-seeking information as help-seeking information corresponding to the user.
Preferably, when a plurality of second predetermined mouth shapes are matched, predetermined help information having the highest level of help may be used as the help information corresponding to the user.
This implementation enables fast and covert help seeking, and also helps reduce the power consumption of the user equipment.
It should be noted that, the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for determining help-seeking information corresponding to a user according to the mouth shape information should be included in the scope of the present invention.
The first transmitting means 3 transmits the help-seeking information to at least one predetermined number.
Wherein the first sending means 3 may obtain the at least one predetermined number in a number of ways. For example, the user sets a plurality of predetermined numbers in advance, and the first transmitting apparatus 3 directly obtains the plurality of predetermined numbers; for another example, at least one predetermined number corresponding to each second predetermined mouth shape is stored in the user equipment in advance, the third obtaining device determines the help-seeking information based on the matched second predetermined mouth shape, and the first sending device 3 directly obtains the at least one predetermined number corresponding to the matched second predetermined mouth shape.
As a preferable scheme, the help device of the embodiment further includes a fifth obtaining device (not shown) and a second sending device (not shown).
Fifth obtaining means is for obtaining environment information corresponding to the current location of the user.
The environment information includes any information related to the environment where the current location of the user is located, such as the temperature and humidity of the current location, a live-action picture or video at the current location, a person detection result (e.g., the number of people in a certain range of the user) near the current location, and the like.
Specifically, the fifth obtaining device may obtain the environmental information in a plurality of manners, such as detecting the temperature and humidity of the current position through a sensor, shooting live-action pictures around the current position in real time through a camera, detecting the number of people near the user through an infrared sensor or the camera, and the like.
It should be noted that the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for obtaining the environment information corresponding to the current location of the user should be included in the scope of the present invention.
The second sending means is for sending the context information to the at least one predetermined number.
It should be noted that the environment information and the help information may be sent to the at least one predetermined number together, or they may be sent to the at least one predetermined number separately. Preferably, after the first sending device 3 sends the help information, the second sending device may periodically send the environment information so that others can keep track of changes in the environment around the user.
According to the scheme of this embodiment, help seeking can be initiated automatically based on the user's mouth shape information. Because mouthing words is relatively inconspicuous, the help-seeking process is highly concealed; in particular, when the user is in danger with hostile individuals nearby, help can be sought automatically and covertly, so that the user's safety is protected. The scheme is suitable for all people and can also be used by specific groups (such as users with limited mobility, deaf-mute users and the like) for various kinds of help seeking in daily life. In addition, the environment information corresponding to the user's current location can be provided, so that others can judge the user's situation more accurately from the environment information and thus make more accurate rescue decisions and preparations in time.
Fig. 4 is a schematic structural diagram of an apparatus for automatically seeking help according to another embodiment of the present invention. The help means according to the present embodiment includes first obtaining means 1, first determining means 2, and first transmitting means 3, wherein the first determining means 2 further includes second obtaining means 21 and second determining means 22. The first obtaining device 1 and the first sending device 3 have been described in detail in the embodiment shown in fig. 3, and are not described herein again.
The second obtaining means 21 is used for obtaining effective text information corresponding to the mouth shape information according to the mouth shape information of the user.
Wherein the valid text information is text information indicating the help content that the user actually wants to express. It should be noted that the mouth shapes corresponding to the valid text information may be all or only part of the mouth shape information. For example, if the mouth shape information comprises six mouth shapes made consecutively by the user and only the 1st, 3rd and 5th mouth shapes express the help content the user actually wants to convey, then the text corresponding to the 1st, 3rd and 5th mouth shapes is the valid text information corresponding to the mouth shape information.
Specifically, the implementation manner of obtaining the valid text information corresponding to the mouth shape information by the second obtaining device 21 according to the mouth shape information of the user includes but is not limited to:
1) the second obtaining means 21 further comprise first converting means (not shown). The first conversion device is used for executing text conversion operation on the mouth shape information and taking the conversion result as effective text information corresponding to the mouth shape information.
For example, the mouth shape information includes six mouth shapes continuously made by the user, and the first conversion device performs text conversion operations on the six mouth shapes in order based on the lip language model to obtain a conversion result "sales XX cell", and takes the conversion result as valid text information corresponding to the mouth shape information.
2) The second obtaining means 21 further comprises extracting means (not shown) and second converting means (not shown). The extracting device is used for extracting a plurality of effective mouth shapes meeting the preset position condition from the mouth shape information; the second conversion means is used for executing text conversion operation on the plurality of mouth shapes and taking the conversion result as effective text information corresponding to the mouth shape information.
Wherein the predetermined position condition comprises any condition for indicating the positions of the valid mouth shapes. Preferably, the predetermined position condition includes, but is not limited to:
a) After the position of the first predetermined mouth shape.
The user can preset a first predetermined mouth shape in the user equipment; the mouth shapes after the first predetermined mouth shape in the mouth shape information are the user's valid mouth shapes, and the mouth shapes before it are invalid. If the first predetermined mouth shape is not detected in the mouth shape information, the mouth shape information is regarded as invalid.
b) Every two adjacent valid mouth shapes are separated by the same position interval.
For example, if the predetermined position interval is 2, one valid mouth shape is extracted every 2 positions, and the mouth shapes at the other positions are all invalid.
It should be noted that a) and b) above may be combined; for example, the predetermined position condition may be that, after the position of the first predetermined mouth shape, every two adjacent valid mouth shapes are separated by a position interval of 2.
It should be noted that the above predetermined position conditions are only examples and not limitations of the present invention, and those skilled in the art should understand that any condition for indicating the positions of the valid mouth shapes (for example, the position interval between every two adjacent valid mouth shapes increasing or decreasing successively) should be included within the scope of the predetermined position condition described in the present invention.
As an example, suppose the mouth shape information comprises ten mouth shapes made consecutively by the user and the predetermined position condition is "every two adjacent valid mouth shapes are separated by one position" (that is, the valid mouth shapes are at the odd positions). The extracting means extracts the 1st, 3rd, 5th, 7th and 9th mouth shapes from the ten mouth shapes, and the second conversion means performs a text conversion operation on the extracted mouth shapes, obtains the conversion result "kidnapped on Zhongshan Road", and takes the conversion result as the valid text information corresponding to the mouth shape information.
It should be noted that the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for obtaining valid text information corresponding to the mouth shape information according to the mouth shape information of the user should be included in the scope of the present invention.
The second determining device 22 determines help information corresponding to the user according to the effective text information.
Specifically, the implementation manner of determining the help information corresponding to the user by the second determining device 22 according to the valid text information includes, but is not limited to:
1) Directly use the valid text information as the help information.
2) Further screen the valid text information to determine the help information.
For example, information related to key factors such as place, person and time is screened out from the valid text information, and the screened-out information is used as the help information.
3) The second determining means 22 further comprises adjusting means (not shown). The adjusting means is configured to, when the semantics of the valid text information are incoherent, adjust the word order in the valid text information to obtain new, semantically coherent valid text information, and to use the new valid text information as the help-seeking information corresponding to the user.
Specifically, the adjusting means performs semantic analysis on the valid text information; when it determines that the semantics are incoherent, it adjusts the word order in the valid text information so that the adjusted valid text information is semantically coherent, and it uses the new valid text information as the help-seeking information corresponding to the user.
It should be noted that the above implementations 2) and 3) of the second determining device 22 may be combined. For example, when the semantics of the valid text information are incoherent, the word order in the valid text information is adjusted to obtain new, semantically coherent valid text information; information related to key factors such as place, person and time is then screened out from the new valid text information, and the screened-out information is used as the help-seeking information.
It should be noted that, the above examples are only for better illustrating the technical solutions of the present invention, and not for limiting the present invention, and those skilled in the art should understand that any implementation manner for determining help-seeking information corresponding to the user according to the valid text information should be included in the scope of the present invention.
According to the scheme of this embodiment, valid text information corresponding to the mouth shape information can be obtained from the user's mouth shape information so as to determine the user's help-seeking information. The user can thus express important information (such as the dangerous situation he or she is facing) simply by mouthing it, which makes it easier for the recipients of the help-seeking information to learn the user's actual situation and carry out the corresponding rescue work. In addition, the valid text information can be obtained by extracting a plurality of valid mouth shapes that satisfy a predetermined position condition, so that the user can interleave invalid mouth shapes with valid ones to confuse onlookers and avoid being discovered, thereby enhancing the concealment of the help-seeking process. Furthermore, the word order of the valid text information can be adjusted through semantic analysis to obtain semantically coherent help-seeking information, so that the user can deliberately scramble the order of the valid mouth shapes corresponding to the information he or she actually wants to express, which further enhances the concealment of the help-seeking process and is particularly suitable for man-made dangerous environments.
It will be evident to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied in other specific forms without departing from the spirit or essential attributes thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is obvious that the word "comprising" does not exclude other elements or steps, and the singular does not exclude the plural. A plurality of units or means recited in the system claims may also be implemented by one unit or means in software or hardware. The terms first, second, etc. are used to denote names, but not any particular order.
Claims (14)
1. A method for automatic help seeking, wherein the method comprises:
obtaining mouth shape information from a user;
according to the mouth shape information, help seeking information corresponding to the user is determined;
sending the help-seeking information to at least one preset number;
wherein the step of determining help information corresponding to the user according to the mouth shape information comprises:
extracting a plurality of effective mouth shapes meeting a preset position condition from the mouth shape information, wherein the preset position condition is used for indicating the positions of the effective mouth shapes;
performing a text conversion operation on the plurality of effective mouth shapes, and taking a conversion result as effective text information corresponding to the mouth shape information;
according to the effective text information, help seeking information corresponding to the user is determined;
wherein the step of determining help information corresponding to the user according to the effective text information comprises:
when the semantics of the effective text information are not consistent, adjusting the text sequence in the effective text information to obtain new effective text information with consistent semantics, screening out information related to key factors from the new effective text information, and taking the screened out information as help seeking information corresponding to the user.
2. The method of claim 1, wherein the step of obtaining valid text information corresponding to the mouth shape information according to the mouth shape information comprises:
and executing text conversion operation on the mouth shape information, and taking the conversion result as effective text information corresponding to the mouth shape information.
3. The method of claim 1, wherein the predetermined location condition comprises at least one of:
-after the first predetermined mouth shape;
-every two adjacent effective mouth shapes have the same position spacing.
4. The method of claim 1, wherein the step of determining help information corresponding to the user from the mouth shape information comprises:
when the mouth shape information comprises a second preset mouth shape, obtaining preset help-seeking information corresponding to the second preset mouth shape, and taking the preset help-seeking information as the help-seeking information corresponding to the user.
5. The method of claim 1, wherein the step of obtaining the mouth shape information from the user comprises:
when it is detected that a predetermined trigger condition is satisfied, mouth shape information from the user is obtained.
6. The method of claim 5, wherein the predetermined trigger condition comprises at least one of:
-the user performs a predetermined action;
-the user emitting a predetermined sound signal.
7. The method of any of claims 1 to 6, wherein the method further comprises:
obtaining environment information corresponding to a current location of the user;
and sending the environment information to the at least one preset number.
8. An apparatus for automatic help, wherein the apparatus comprises:
means for obtaining mouth shape information from a user;
means for determining help information corresponding to the user based on the mouth shape information;
means for sending said help information to at least one predetermined number;
wherein the means for determining help information corresponding to the user based on the mouth shape information comprises:
means for extracting a plurality of valid mouth shapes satisfying a predetermined position condition from the mouth shape information, wherein the predetermined position condition is used for indicating the positions of the valid mouth shapes;
means for performing a text conversion operation on the plurality of valid mouth shapes and regarding a conversion result as valid text information corresponding to the mouth shape information;
means for determining help information corresponding to the user based on the valid text information;
wherein the means for determining help information corresponding to the user based on the valid text information comprises:
and the device is used for adjusting the text sequence in the effective text information when the semantics of the effective text information are not consistent, obtaining new effective text information with consistent semantics, screening out information related to key factors from the new effective text information, and taking the screened information as help seeking information corresponding to the user.
9. The apparatus of claim 8, wherein the means for obtaining the effective text information corresponding to the mouth shape information according to the mouth shape information comprises:
means for performing a text conversion operation on the mouth shape information and taking the conversion result as the effective text information corresponding to the mouth shape information.
10. The apparatus of claim 8, wherein the predetermined position condition comprises at least one of:
- being located after a first predetermined mouth shape;
- every two adjacent effective mouth shapes having the same position spacing.
11. The apparatus of claim 8, wherein the means for determining the help-seeking information corresponding to the user based on the mouth shape information comprises:
means for obtaining, when the mouth shape information comprises a second predetermined mouth shape, the preset help-seeking information corresponding to the second predetermined mouth shape, and for taking the preset help-seeking information as the help-seeking information corresponding to the user.
12. The apparatus of claim 8, wherein the means for obtaining the mouth shape information from the user comprises:
means for obtaining the mouth shape information from the user when it is detected that a predetermined trigger condition is satisfied.
13. The apparatus of claim 12, wherein the predetermined trigger condition comprises at least one of:
- the user performing a predetermined action;
- the user emitting a predetermined sound signal.
14. The apparatus of any one of claims 8 to 13, wherein the apparatus further comprises:
means for obtaining environment information corresponding to the current location of the user;
means for sending the environment information to the at least one predetermined number.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610936700.0A | 2016-10-24 | 2016-10-24 | Method and device for automatically seeking help |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106569599A CN106569599A (en) | 2017-04-19 |
CN106569599B (en) | 2020-05-01 |
Family
ID=60414446
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610936700.0A (Active) | Method and device for automatically seeking help | 2016-10-24 | 2016-10-24 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106569599B (en) |
Citations (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN202352332U (en) * | 2011-11-30 | 2012-07-25 | 李扬德 | A Portable Lip Recognizer |
CN102970438A (en) * | 2012-11-29 | 2013-03-13 | 广东欧珀移动通信有限公司 | A mobile phone automatic alarm method and automatic alarm device |
CN103268769A (en) * | 2013-02-06 | 2013-08-28 | 方科峰 | Audio-visual system application method based on voice keyboard |
CN104238732A (en) * | 2013-06-24 | 2014-12-24 | 由田新技股份有限公司 | Device, method and computer readable recording medium for detecting facial movements to generate signals |
CN105227709A (en) * | 2015-09-23 | 2016-01-06 | 努比亚技术有限公司 | The apparatus and method of crying for help are carried out by mobile terminal |
CN105807925A (en) * | 2016-03-07 | 2016-07-27 | 浙江理工大学 | Flexible electronic skin based lip language identification system and method |
CN105989328A (en) * | 2014-12-11 | 2016-10-05 | 由田新技股份有限公司 | Method and device for detecting use of handheld device by person |
CN105989329A (en) * | 2014-12-11 | 2016-10-05 | 由田新技股份有限公司 | Method and device for detecting use of handheld device by person |
Also Published As
Publication number | Publication date |
---|---|
CN106569599A (en) | 2017-04-19 |
Similar Documents
Publication | Title |
---|---|
US9769435B2 (en) | Monitoring systems and methods | |
US20160071399A1 (en) | Personal security system | |
US20160094979A1 (en) | Social reminders | |
US20150092055A1 (en) | Pool monitor systems and methods | |
JP7162412B2 (en) | detection recognition system | |
US10096234B1 (en) | Smart band for autonumously recognizing crisis situation and automatically requesting rescue on the basis of sound and motion patterns | |
JPWO2020152851A1 (en) | Digital search security systems, methods and programs | |
US20200074839A1 (en) | Situational awareness platform, methods, and devices | |
WO2017000347A1 (en) | Method and apparatus for protecting mobile terminal | |
KR102094510B1 (en) | Method, Appratus and System of Inserting Watermark Data | |
CN105336091A (en) | Method of carrying out alarm aiming at portable positioning equipment, apparatus and system | |
KR101609914B1 (en) | the emergency situation sensing device responding to physical and mental shock and the emergency situation sensing method using the same | |
CN107464406A (en) | Alarm method, system and corresponding wearable device based on wearable device | |
CN109155098A (en) | Method and apparatus for controlling urgency communication | |
US10163317B2 (en) | Server, system, method and recording medium for searching for missing children using mobile crowdsourcing | |
Yang et al. | Efficient in-pocket detection with mobile phones | |
CN104023314A (en) | Personal protection method or application program for sending distress call information and related place information and related communication terminal | |
CN111932825B (en) | Target object monitoring method and device, computer equipment and storage medium | |
CN103401932B (en) | Based reminding method is carried based on the mobile phone of robot | |
CN104581636A (en) | Intelligent rescue method and system for mobile terminal | |
CN106569599B (en) | Method and device for automatically seeking help | |
CN114038147B (en) | Fire rescue communication method, electronic equipment and storage medium | |
KR20110123968A (en) | Personal Safety Device, System and Method Using Ambient Situation Awareness | |
CN110287774A (en) | Object method for detecting, system and storage medium based on WIFI | |
MX2020007058A (en) | System and method for monitoring life signs of a person. |
Legal Events
Code | Title |
---|---|
PB01 | Publication |
SE01 | Entry into force of request for substantive examination |
GR01 | Patent grant |