
CN104915646A - Method and terminal for conference management - Google Patents

Method and terminal for conference management

Info

Publication number
CN104915646A
CN104915646A (application CN201510293453.2A)
Authority
CN
China
Prior art keywords
terminal
target participant
preset state
target
participant
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201510293453.2A
Other languages
Chinese (zh)
Other versions
CN104915646B (en)
Inventor
曾元清
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangdong Oppo Mobile Telecommunications Corp Ltd
Original Assignee
Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangdong Oppo Mobile Telecommunications Corp Ltd filed Critical Guangdong Oppo Mobile Telecommunications Corp Ltd
Priority to CN201510293453.2A
Publication of CN104915646A
Application granted
Publication of CN104915646B
Expired - Fee Related
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The invention discloses a conference management method and a terminal. The method comprises the steps of: using a camera at the conference site to obtain character feature data of the participants; determining the identities of the participants according to the character feature data; detecting whether a target participant currently in a preset state exists among the participants; and, if so, sending prompt information to the terminal corresponding to the identity of the target participant, where the prompt information is used to prompt the target participant to leave the preset state. The invention also discloses a corresponding terminal. Because the method detects, after the participants are determined, whether a target participant in the preset state exists and, if so, sends prompt information to the terminal corresponding to the identity of that target participant, the conference-site maintenance process is simplified and human-resource cost is saved.

Description

Conference management method and terminal
Technical Field
Embodiments of the invention relate to the technical field of video monitoring equipment, and in particular to a conference management method and a terminal.
Background
In some large conferences the number of participants is large, and the order of the conference venue is generally maintained by dedicated staff. For example, staff in the venue supervise the participants, and if a participant falls asleep during the meeting, a staff member reminds him or her. This reminding method usually requires the staff member to walk over to the sleeping participant and wake him or her up; if the sleeping participant sits in the middle of the venue, walking over is even more troublesome. This kind of maintenance is therefore cumbersome and wastes human-resource cost, and how to maintain venue order effectively is a problem that urgently needs to be solved.
Disclosure of Invention
The invention provides a conference management method and a terminal, aiming to simplify the on-site maintenance process of a conference and save human-resource cost.
A first aspect of an embodiment of the present invention provides a method for conference management, including:
utilizing a camera of a conference site to obtain character feature data of participants, and determining the identities of the participants according to the character feature data;
detecting whether a target participant currently in a preset state exists in the participants;
and if a target participant currently in the preset state exists among the participants, sending prompt information to the terminal corresponding to the identity of the target participant, wherein the prompt information is used for prompting the target participant to leave the preset state.
With reference to the first aspect, in a first possible implementation of the first aspect, the sending the prompt message to the terminal corresponding to the identity of the target participant includes:
matching the character feature information of the target participant with character feature information in a database to obtain terminal identification information corresponding to the target participant;
and sending prompt information to the terminal corresponding to the identity of the target participant according to the terminal identification information.
With reference to the first aspect, in a second possible implementation of the first aspect, the preset state is: the eye closing time exceeds a first preset time;
or,
the head lowering time exceeds a second preset time;
or,
the time away from the seat exceeds a third preset time.
With reference to the first aspect, in a third possible implementation of the first aspect, the character feature data includes at least one of the following information: face data, clothing data, and body shape data.
With reference to the first aspect, or the first possible implementation of the first aspect, or the second possible implementation of the first aspect, or the third possible implementation of the first aspect, in a fourth possible implementation of the first aspect, after sending the prompt information to the terminal of the target participant, the method further includes:
detecting whether the target participant is separated from the preset state;
and if the target participant is detected not to be separated from the preset state within the preset time, sending the prompt information to the terminal of the target participant again.
With reference to the third possible implementation of the first aspect, in a fifth possible implementation of the first aspect, the method further includes:
and if it is detected that the target participant is separated from the preset state, the reminding is cancelled.
A second aspect of an embodiment of the present invention provides a terminal, including:
a determining unit, a detecting unit, and a sending unit, wherein the determining unit is used for acquiring character feature data of participants by using a camera of a conference site and determining the identities of the participants according to the character feature data;
the detecting unit is used for detecting whether a target participant currently in a preset state exists among the participants determined by the determining unit;
and the sending unit is used for sending prompt information to a terminal corresponding to the identity of the target participant if the detecting unit detects that a target participant currently in the preset state exists among the participants, wherein the prompt information is used for prompting the target participant to leave the preset state.
With reference to the second aspect, in a first possible implementation of the second aspect, the sending unit includes:
the matching unit is used for matching the character feature information of the target participant with the character feature information in the database to obtain the terminal identification information corresponding to the target participant;
and the sending subunit is used for sending prompt information to the terminal corresponding to the identity of the target participant according to the terminal identification information matched by the matching unit.
With reference to the second aspect, in a second possible implementation manner of the second aspect, the preset state is: the eye closing time exceeds a first preset time;
or,
the head lowering time exceeds a second preset time;
or,
the time away from the seat exceeds a third preset time.
With reference to the second aspect, in a third possible implementation of the second aspect, the character feature data includes at least one of the following information: face data, clothing data, and body shape data.
With reference to the second aspect or the first possible implementation of the second aspect, or the second possible implementation of the second aspect or the third possible implementation of the second aspect, in a fourth possible implementation of the second aspect, the detecting unit is further specifically configured to:
detecting whether the target participant is separated from the preset state;
the sending unit is further specifically configured to:
and if the detection unit detects that the target participant does not depart from the preset state within the preset time, sending prompt information to the terminal corresponding to the identity of the target participant who does not depart from the preset state again.
The embodiment of the invention has the following beneficial effects:
the embodiment of the invention uses a camera at the conference site to obtain character feature data of the participants, and determines the identities of the participants according to the character feature data; detects whether a target participant currently in a preset state exists among the participants; and, if a target participant currently in the preset state exists among the participants, sends prompt information to the terminal corresponding to the identity of the target participant, wherein the prompt information is used for prompting the target participant to leave the preset state. Because whether a target participant in the preset state exists is detected after the participants are determined, and prompt information is sent to the terminal corresponding to the identity of the target participant if so, the conference-site maintenance process is simplified and human-resource cost is saved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed for describing the embodiments are briefly introduced below. It is apparent that the drawings in the following description show only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art based on these drawings without creative effort.
Fig. 1 is a schematic flowchart of a conference management method according to a first embodiment of the present invention;
fig. 2 is a schematic flowchart of a conference management method according to a second embodiment of the present invention;
fig. 3 is a schematic flowchart of a conference management method according to a third embodiment of the present invention;
fig. 4a is a schematic structural diagram of a terminal according to a first embodiment of the present invention;
fig. 4b is another schematic structural diagram of the terminal according to the first embodiment of the present invention;
fig. 5 is a schematic structural diagram of a terminal according to a second embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
In a specific implementation, in the embodiment of the present invention, the terminal may include, but is not limited to: a notebook computer, a cell phone, a tablet computer, a smart wearable device, and the like. The system of the terminal refers to the operating system of the terminal, and may include, but is not limited to: the Android system, the Symbian system, the Windows system, the iOS system (the mobile operating system developed by Apple Inc.), and the like. It should be noted that an Android terminal refers to a terminal with the Android system, a Symbian terminal refers to a terminal with the Symbian system, and so on. The above terminals are only examples rather than an exhaustive list; the present invention includes but is not limited to the above terminals.
Conference sites mentioned in the embodiments of the present invention may include, but are not limited to: the site of a typical small venue, the site of a large venue, or the site of a video conference. Environments similar to conference sites, such as classroom lectures or church prayers, to which the present technology is applied, also fall within the scope of the embodiments of the present invention.
The method and the terminal for conference management according to the embodiments of the present invention are described in detail below with reference to the specific embodiments shown in fig. 1 to fig. 5, specifically as follows:
referring to fig. 1, fig. 1 is a flowchart illustrating a conference management method according to a first embodiment of the present invention. The conference management method of the embodiment of the invention comprises the following steps:
s101, acquiring character feature data of participants by using a camera of a conference site, and determining the identities of the participants according to the character feature data;
s102, detecting whether a target participant in a preset state exists in the participants or not;
s103, if the target participant in the preset state is detected to exist in the participants, sending prompt information to a terminal corresponding to the identity of the target participant, wherein the prompt information is used for prompting the target participant to leave the preset state.
In step S101, the terminal may obtain the character feature data of the participants by using the cameras at the conference site, where the terminal and the cameras at the conference site may form a system used for monitoring the conference site in all directions. The cameras at the conference site may include, but are not limited to: a camera of the terminal itself, a camera controlled through wireless technology (such as a camera controlled through Wi-Fi, Bluetooth, or similar technologies), or a USB camera. Still further, the cameras at the conference site may also include: a low-light camera, a visible-light camera, an infrared camera, or the like.
Further, the character feature data of a participant may include, but is not limited to: face data, clothing data, and body shape data. Face data may include, but is not limited to: eye features, nose features, face-shape features, facial marks (e.g., a distinct mole at a certain location of the face), or hair-style features. Clothing data may include, but is not limited to: jacket color or style, whether a hat is worn, whether a watch is worn, trousers color or style, or shoe color or style. Body shape data may include, but is not limited to: build characteristics, upper-to-lower-body proportion characteristics, or palm size.
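For illustration only, the character feature data described above could be organized as one record per participant; a minimal sketch in Python follows, and every field name in it is a hypothetical choice rather than something defined by the patent.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class CharacterFeatureData:
    """Hypothetical container for the character feature data of one participant."""
    face_data: Optional[List[float]] = None   # e.g. a face-embedding vector
    clothing_data: Optional[dict] = None      # e.g. {"jacket_color": "blue", "has_hat": False}
    body_shape_data: Optional[dict] = None    # e.g. {"build": "slim", "upper_lower_ratio": 0.45}

@dataclass
class Participant:
    """A participant once the identity has been determined from the feature data."""
    participant_id: int                       # number assigned to facilitate tracking
    identity: str                             # e.g. name or employee ID found in the database
    features: CharacterFeatureData = field(default_factory=CharacterFeatureData)
    seat_position: Optional[str] = None       # venue position noted after the conference starts
```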
Further, the terminal may determine the identities of the participants according to the character feature data. In addition, the terminal may also determine the gender of each participant and may number the participants according to their identities so as to facilitate tracking them; for example, after the terminal numbers the participants, the movement of the corresponding number can be displayed on the display of the terminal while a participant moves around the conference site. After the conference starts, the terminal may further determine the position of each participant in the conference site.
In step S102, after the terminal determines the participants, the terminal may detect whether a target participant is currently in a preset state, where the preset state may include, but is not limited to: the eye-closing time exceeding a first preset time, the head-lowering time exceeding a second preset time, or the time away from the seat exceeding a third preset time. Further, the preset state may also be a conversation state, for example, participant X talking with participant Y, where participant X and participant Y may be any two of the participants at the conference site. Optionally, the first preset time may be, but is not limited to: 5 seconds, 10 seconds, 10.5 seconds, 30 seconds, and so on; the first preset time may be a default of the terminal system or may be set by the user according to actual needs, and the values and setting methods of the second preset time and the third preset time are similar to those of the first preset time and are not repeated here.
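As a minimal sketch of how the three timed conditions above could be evaluated, the Python below keeps, for each participant, the moment each monitored condition began and reports a preset state once the corresponding threshold is exceeded. The threshold values and the condition names are illustrative assumptions, not values fixed by the patent.

```python
import time

# Illustrative values for the first, second and third preset times (in seconds).
FIRST_PRESET_TIME = 10.0    # eyes closed
SECOND_PRESET_TIME = 30.0   # head lowered
THIRD_PRESET_TIME = 60.0    # away from seat

THRESHOLDS = {
    "eyes_closed": FIRST_PRESET_TIME,
    "head_lowered": SECOND_PRESET_TIME,
    "away_from_seat": THIRD_PRESET_TIME,
}

class PresetStateDetector:
    """Tracks how long each monitored condition has persisted for each participant."""

    def __init__(self):
        self._since = {}  # (participant_id, condition) -> timestamp when the condition began

    def update(self, participant_id, condition, active, now=None):
        """Feed one observation; return True once the condition exceeds its preset time."""
        now = time.time() if now is None else now
        key = (participant_id, condition)
        if not active:
            self._since.pop(key, None)   # condition no longer holds, so reset its timer
            return False
        started = self._since.setdefault(key, now)
        return (now - started) >= THRESHOLDS.get(condition, float("inf"))
```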
In step S103, if the terminal detects that a target participant currently in the preset state exists among the participants, the terminal sends prompt information to the terminal corresponding to the identity of the target participant, where the prompt information is used to prompt the target participant to leave the preset state. For example, when the terminal detects a target participant currently in the preset state, the terminal sends prompt information to the terminal corresponding to the identity of the target participant, where the terminal of the target participant may include, but is not limited to: the target participant's mobile phone, a tablet computer at the target participant's position in the conference site, or a communication device of the conference site worn by the target participant (such as an electronic ring). The prompt information may take forms including, but not limited to: a short message, a ring tone, a flashing light, or vibration. The purpose of the prompt information is to prompt the target participant in the preset state to leave the preset state; leaving the preset state means, for example, that the participant who was in the preset state begins to listen attentively or take notes. In the embodiment of the present invention, what is meant is that the target participant leaves the preset state altogether, not that the target participant switches from one preset state to another preset state.
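The prompt information could be dispatched over whichever of the channels named above the target participant's terminal supports. The sketch below only shows one possible shape of such a request; the channel names, the default text, and the send_prompt function are assumptions made for illustration, and the actual delivery mechanism (SMS gateway, push service, paired wearable, and so on) is left abstract.

```python
from enum import Enum

class PromptChannel(Enum):
    """Reminder forms mentioned in the description."""
    SHORT_MESSAGE = "short_message"
    RING_TONE = "ring_tone"
    FLASHING_LIGHT = "flashing_light"
    VIBRATION = "vibration"

def send_prompt(terminal_id: str, channel: PromptChannel,
                text: str = "Please return your attention to the conference.") -> dict:
    """Build the prompt information to be delivered to the target participant's terminal."""
    return {"terminal_id": terminal_id, "channel": channel.value, "text": text}

# Example: vibrate the electronic ring worn by the target participant.
message = send_prompt("terminal-042", PromptChannel.VIBRATION)
```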
In the embodiment of the present invention, a camera at the conference site is used to obtain character feature data of the participants, and the identities of the participants are determined according to the character feature data; whether a target participant currently in a preset state exists among the participants is detected; and if a target participant currently in the preset state exists among the participants, prompt information is sent to the terminal corresponding to the identity of the target participant, where the prompt information is used to prompt the target participant to leave the preset state. Because whether a target participant in the preset state exists is detected after the participants are determined, and prompt information is sent to the terminal corresponding to the identity of the target participant if so, the conference-site maintenance process is simplified and human-resource cost is saved.
Referring to fig. 2, fig. 2 is a flowchart illustrating a conference management method according to a second embodiment of the present invention. The conference management method of the embodiment of the invention comprises the following steps:
s201, acquiring character feature data of the participants by using a camera of a conference site, and determining the identities of the participants according to the character feature data.
S202, detecting whether target participants in a preset state exist in the participants or not.
For the detailed description of steps S201 to S202, refer to steps S101 to S102 of the embodiment of the present invention described in fig. 1.
And S203, if the target participant in the preset state is detected to exist in the participants, matching the character feature information of the target participant with the character feature information in the database to obtain the terminal identification information corresponding to the target participant.
And S204, sending prompt information to the terminal corresponding to the identity of the target participant according to the terminal identification information.
In a specific implementation, if the terminal detects that a target participant currently in the preset state exists among the participants, the character feature information of the target participant is matched with the character feature information in the database to obtain the terminal identification information corresponding to the target participant, and prompt information is then sent to the terminal corresponding to the identity of the target participant according to the terminal identification information. The terminal identification information may include, but is not limited to: the mobile phone number, the International Mobile Equipment Identity (IMEI) of the mobile equipment, or identification information of accounts running on the terminal device (such as QQ, e-mail, microblog, or WeChat).
Further, the terminal may enter the identity information of each participant before the participant joins the conference, such as: name, job title, venue position, mobile phone number, mailbox address, QQ number, and the like. The character feature information in the database may be collected when the participant arrives at the conference site: the camera at the conference site scans the participant to acquire the character feature information, and the terminal stores the character feature information of the participant in the database at the position corresponding to the identity information of that participant.
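A minimal sketch of the matching step follows, assuming the database holds one feature vector per registered participant together with the terminal identification recorded before the conference; the similarity measure, the threshold, and the record layout are illustrative assumptions rather than details specified by the patent.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def match_terminal_identification(target_features, database, threshold=0.8):
    """Return the terminal identification (phone number, IMEI, account ID, ...) of the
    best-matching registered participant, or None when no record is similar enough.

    `database` is assumed to be a list of records such as:
    {"identity": "Zhang San", "features": [...], "terminal_id": "+86-138xxxxxxxx"}
    """
    best_record, best_score = None, threshold
    for record in database:
        score = cosine_similarity(target_features, record["features"])
        if score >= best_score:
            best_record, best_score = record, score
    return best_record["terminal_id"] if best_record else None
```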
In the embodiment of the present invention, a camera at the conference site is used to obtain character feature data of the participants, and the identities of the participants are determined according to the character feature data; whether a target participant currently in a preset state exists among the participants is detected; if a target participant currently in the preset state is detected among the participants, the character feature information of the target participant is matched with the character feature information in the database to obtain the terminal identification information corresponding to the target participant; and prompt information is sent to the terminal corresponding to the identity of the target participant according to the terminal identification information. By detecting, after the participants are determined, whether a target participant in the preset state exists and sending prompt information to the terminal corresponding to the identity of the target participant if so, the embodiment simplifies the conference-site maintenance process and saves human-resource cost.
Referring to fig. 3, fig. 3 is a flowchart illustrating a conference management method according to a third embodiment of the present invention. The conference management method of the embodiment of the invention comprises the following steps:
s301, acquiring character feature data of the participants by using a camera of a conference site, and determining the identities of the participants according to the character feature data;
s302, detecting whether a target participant in a preset state exists in the participants or not;
and S303, if the target participant in the preset state is detected to exist in the participants, sending prompt information to the terminal corresponding to the identity of the target participant, wherein the prompt information is used for prompting the target participant to leave the preset state.
For the detailed description of steps S301 to S303, refer to steps S101 to S103 of the embodiment of the present invention described in fig. 1.
S304, detecting whether the target participant is separated from the preset state;
s305, if the target participant is detected not to be separated from the preset state within the preset time, sending the prompt information to the terminal corresponding to the identity of the target participant not separated from the preset state again.
In a specific implementation, the terminal may detect whether the target participant is separated from the preset state, and if it is detected that the target participant is not separated from the preset state within the preset time, the terminal sends the prompt information again to the terminal corresponding to the identity of that target participant. For example, when the terminal detects that the eye-closing time of the target participant Zhang San exceeds the first preset time, the terminal sends the prompt information to the terminal corresponding to Zhang San; the terminal may then continue to detect whether Zhang San is separated from the preset state, and if not, the terminal sends the prompt information to Zhang San again.
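The re-notification behaviour could be expressed as a simple loop that re-checks the target participant after a waiting period and re-sends the prompt until the preset state is left. The check callable, the waiting time, and the retry limit below are assumptions used only to illustrate the idea.

```python
import time

def remind_until_left(terminal_id, still_in_preset_state, send_prompt,
                      wait_seconds=30.0, max_retries=3):
    """Send a prompt, then re-send it whenever the target participant is still found
    in the preset state after `wait_seconds`.

    `still_in_preset_state` is a callable returning True while the participant has
    not yet left the preset state; `send_prompt` delivers one prompt to the terminal.
    """
    send_prompt(terminal_id)
    for _ in range(max_retries):
        time.sleep(wait_seconds)
        if not still_in_preset_state():
            return True    # the participant has left the preset state; stop reminding
        send_prompt(terminal_id)
    return False           # still in the preset state after all retries
```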
In the embodiment of the present invention, a camera at the conference site is used to obtain character feature data of the participants, and the identities of the participants are determined according to the character feature data; whether a target participant currently in a preset state exists among the participants is detected; if a target participant currently in the preset state exists among the participants, prompt information is sent to the terminal corresponding to the identity of the target participant, where the prompt information is used to prompt the target participant to leave the preset state; whether the target participant is separated from the preset state is then detected; and if it is detected that the target participant is not separated from the preset state within the preset time, the prompt information is sent again to the terminal corresponding to the identity of the target participant. By detecting, after the participants are determined, whether a target participant currently in the preset state exists, sending prompt information to the terminal corresponding to the identity of the target participant if so, and sending the prompt information again when the target participant does not leave the preset state within the preset time, the embodiment simplifies the conference-site maintenance process and saves human-resource cost.
Referring to fig. 4a, fig. 4a is a schematic structural diagram of a terminal according to a first embodiment of the present invention. The terminal described in the embodiment of the present invention includes a determining unit 401, a detecting unit 402, and a sending unit 403, which are specifically as follows:
the determining unit 401 is configured to acquire character feature data of the participants by using a camera at the conference site, and determine the identities of the participants according to the character feature data;
a detecting unit 402, configured to detect whether a target participant currently in a preset state exists among the participants determined by the determining unit 401;
a sending unit 403, configured to send, if the detecting unit 402 detects that there is a target participant currently in a preset state among the participants, prompt information to a terminal corresponding to the identity of the target participant, where the prompt information is used to prompt the target participant to leave the preset state.
In a specific implementation, the determining unit 401 may obtain the character feature data of the participants by using the cameras at the conference site, where the terminal may form a system with the cameras at the conference site, and the system is used for monitoring the conference site in all directions. The cameras at the conference site may include, but are not limited to: a camera of the terminal itself, a camera controlled through wireless technology (such as a camera controlled through Wi-Fi, Bluetooth, or similar technologies), or a USB camera. Still further, the cameras at the conference site may also include: a low-light camera, a visible-light camera, an infrared camera, or the like.
Further, the character feature data of a participant may include, but is not limited to: face data, clothing data, and body shape data. Face data may include, but is not limited to: eye features, nose features, face-shape features, facial marks (e.g., a distinct mole at a certain location of the face), or hair-style features. Clothing data may include, but is not limited to: jacket color or style, whether a hat is worn, whether a watch is worn, trousers color or style, or shoe color or style. Body shape data may include, but is not limited to: build characteristics, upper-to-lower-body proportion characteristics, or palm size.
Further, the determining unit 401 may determine the identities of the participants according to the character feature data. In addition, the determining unit 401 may also determine the gender of each participant and may number the participants according to their identities so as to facilitate tracking them; for example, after the terminal numbers the participants, the movement of the corresponding number can be displayed on the display of the terminal while a participant moves around the conference site. After the conference starts, the terminal may further determine the position of each participant in the conference site.
Further, after the determining unit 401 determines the participants, the detecting unit 402 may detect whether a target participant is currently in a preset state, where the preset state may include, but is not limited to: the eye-closing time exceeding a first preset time, the head-lowering time exceeding a second preset time, or the time away from the seat exceeding a third preset time. Further, the preset state may also be a conversation state, for example, participant X talking with participant Y, where participant X and participant Y may be any two of the participants at the conference site. Optionally, the first preset time may be, but is not limited to: 5 seconds, 10 seconds, 10.5 seconds, 30 seconds, and so on; the first preset time may be a default of the terminal system or may be set by the user according to actual needs, and the values and setting methods of the second preset time and the third preset time are similar to those of the first preset time and are not repeated here.
Further, if the detecting unit 402 detects that a target participant currently in the preset state exists among the participants, the sending unit 403 sends prompt information to the terminal corresponding to the identity of the target participant, where the prompt information is used to prompt the target participant to leave the preset state. For example, when the terminal detects a target participant currently in the preset state, the terminal sends prompt information to the terminal corresponding to the identity of the target participant, where the terminal of the target participant may include, but is not limited to: the target participant's mobile phone, a tablet computer at the target participant's position in the conference site, or a communication device of the conference site worn by the target participant (such as an electronic ring). The prompt information may take forms including, but not limited to: a short message, a ring tone, a flashing light, or vibration. The purpose of the prompt information is to prompt the target participant in the preset state to leave the preset state; leaving the preset state means, for example, that the participant who was in the preset state begins to listen attentively or take notes. In the embodiment of the present invention, what is meant is that the target participant leaves the preset state altogether, not that the target participant switches from one preset state to another preset state.
As a possible implementation, the detecting unit 402 is further specifically configured to: detect whether the target participant is separated from the preset state; and if the detecting unit 402 detects that the target participant is not separated from the preset state within the preset time, the sending unit 403 sends the prompt information again to the terminal corresponding to the identity of the target participant who is not separated from the preset state.
In a specific implementation, the detecting unit 402 may detect whether the target participant is separated from the preset state; if the detecting unit 402 detects that the target participant is not separated from the preset state within the preset time, the sending unit 403 sends the prompt information again to the terminal corresponding to the identity of that target participant. For example, when the detecting unit 402 detects that the eye-closing time of the target participant Zhang San exceeds the first preset time, the sending unit 403 sends the prompt information to the terminal corresponding to Zhang San; the detecting unit 402 may then continue to detect whether Zhang San is separated from the preset state, and if not, the sending unit 403 sends the prompt information to Zhang San again.
As a possible implementation, as shown in fig. 4b, fig. 4b is another schematic structural diagram of the terminal in fig. 4a. The determining unit 401 and the detecting unit 402 in fig. 4b have the same functions and structures as the determining unit 401 and the detecting unit 402 in fig. 4a, except that the sending unit in fig. 4a may include a matching unit 4011 and a sending subunit 4012 as shown in fig. 4b, specifically as follows:
the matching unit 4011 is configured to match the person feature information of the target participant with the person feature information in the database, so as to obtain terminal identification information corresponding to the target participant.
And the sending sub-unit 4012 is configured to send prompt information to the terminal corresponding to the identity of the target participant according to the terminal identification information matched by the matching unit.
In a specific implementation, if the detecting unit 402 detects that a target participant currently in the preset state exists among the participants, the matching unit 4011 matches the character feature information of the target participant with the character feature information in the database to obtain the terminal identification information corresponding to the target participant, and the sending subunit 4012 then sends prompt information to the terminal corresponding to the identity of the target participant according to the terminal identification information. The terminal identification information may include, but is not limited to: the mobile phone number, the International Mobile Equipment Identity (IMEI) of the mobile equipment, or identification information of accounts running on the terminal device (such as QQ, e-mail, microblog, or WeChat).
Further, the terminal may further include a storage unit (not shown) that records the identity information of each participant before the participant joins the conference, such as: name, job title, venue position, mobile phone number, mailbox address, QQ number, and the like. The character feature information in the database may be collected when the participant arrives at the conference site: the camera at the conference site scans the participant to acquire the character feature information, and the terminal stores the character feature information of the participant in the database at the position corresponding to the identity information of that participant.
The terminal of the embodiment of the present invention obtains character feature data of the participants by using a camera at the conference site and determines the identities of the participants according to the character feature data; detects whether a target participant currently in a preset state exists among the participants; and, if a target participant currently in the preset state exists among the participants, sends prompt information to the terminal corresponding to the identity of the target participant, where the prompt information is used to prompt the target participant to leave the preset state. Because the terminal detects, after the participants are determined, whether a target participant in the preset state exists and sends prompt information to the terminal corresponding to the identity of the target participant if so, the conference-site maintenance process is simplified and human-resource cost is saved.
Referring to fig. 5, fig. 5 is a schematic structural diagram of a second embodiment of a terminal according to an embodiment of the present invention. The terminal described in this embodiment includes: at least one input device 1000; at least one output device 2000; at least one processor 3000, e.g., a CPU; and a memory 4000, the input device 1000, the output device 2000, the processor 3000, and the memory 4000 being connected by a bus 5000.
The input device 1000 may be a touch panel, a touch screen, or the like, and the output device 2000 may be a liquid crystal display or the like.
The memory 4000 may be a high-speed RAM memory or a non-volatile memory (e.g., a magnetic disk memory). The memory 4000 is used for storing a set of program codes, and the input device 1000, the output device 2000 and the processor 3000 are used for calling the program codes stored in the memory 4000 to execute the following operations:
the input device 1000 described above, configured to:
utilizing a camera of a conference site to obtain character feature data of participants, and determining the identities of the participants according to the character feature data;
the processor 3000 is further specifically configured to:
detecting whether a target participant currently in a preset state exists in the participants;
the processor 3000 is further specifically configured to:
and if the target participating personnel in the preset state exists in the participating personnel, sending prompt information to the terminal corresponding to the identity of the target participating personnel, wherein the prompt information is used for prompting the target participating personnel to leave the preset state.
In some possible embodiments, the processor 3000 is further specifically configured to:
matching the character feature information of the target participant with character feature information in a database to obtain terminal identification information corresponding to the target participant;
and sending prompt information to the terminal corresponding to the identity of the target participant according to the terminal identification information.
As a possible implementation, the preset state may include, but is not limited to: the eye-closing time exceeding a first preset time, the head-lowering time exceeding a second preset time, or the time away from the seat exceeding a third preset time.
As a possible implementation, the character feature data may include, but is not limited to: face data, clothing data, and body shape data.
In some possible embodiments, after sending the prompt to the terminal corresponding to the identity of the target participant, the processor 3000 is further specifically configured to:
detecting whether the target participant is separated from the preset state;
and if the target participant is detected not to be separated from the preset state within the preset time, sending the prompt information to the terminal corresponding to the identity of the target participant not separated from the preset state again.
An embodiment of the present invention further provides a computer storage medium, where the computer storage medium may store a program, and when executed, the program performs some or all of the steps of any one of the conference management methods described in the above method embodiments.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It should be noted that, for simplicity of description, the above-mentioned method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present invention is not limited by the order of acts, as some steps may occur in other orders or concurrently in accordance with the invention. Further, those skilled in the art should also appreciate that the embodiments described in the specification are preferred embodiments and that the acts and modules referred to are not necessarily required by the invention.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus may be implemented in other manners. For example, the above-described embodiments of the apparatus are merely illustrative, and for example, the above-described division of the units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection of some interfaces, devices or units, and may be an electric or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated unit may be stored in a computer-readable storage medium if it is implemented in the form of a software functional unit and sold or used as a separate product. Based on such understanding, the technical solution of the present invention may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like, and may specifically be a processor in the computer device) to execute all or part of the steps of the above-described method according to the embodiments of the present invention. The storage medium may include: various media capable of storing program codes, such as a usb disk, a removable hard disk, a magnetic disk, an optical disk, a Read-Only Memory (ROM), or a Random Access Memory (RAM).
The above-mentioned embodiments are only used for illustrating the technical solutions of the present invention, and not for limiting the same; although the present invention has been described in detail with reference to the foregoing embodiments, it will be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; and such modifications or substitutions do not depart from the spirit and scope of the corresponding technical solutions of the embodiments of the present invention.

Claims (10)

1. A method of conference management, comprising:
utilizing a camera of a conference site to obtain character feature data of participants, and determining the identities of the participants according to the character feature data;
detecting whether a target participant currently in a preset state exists in the participants;
and if a target participant currently in the preset state exists among the participants, sending prompt information to the terminal corresponding to the identity of the target participant, wherein the prompt information is used for prompting the target participant to leave the preset state.
2. The method of claim 1, wherein the sending the prompt to the terminal corresponding to the identity of the target participant comprises:
matching the character feature information of the target participant with character feature information in a database to obtain terminal identification information corresponding to the target participant;
and sending prompt information to the terminal corresponding to the identity of the target participant according to the terminal identification information.
3. The method of claim 1, wherein the preset state is: the eye closing time exceeds a first preset time;
or,
the head lowering time exceeds a second preset time;
or,
the time away from the seat exceeds a third preset time.
4. The method of claim 1, wherein the character feature data includes at least one of the following information: face data, clothing data, and body shape data.
5. The method according to any one of claims 1 to 4, wherein after said sending the prompt message to the terminal corresponding to the identity of the target participant, the method further comprises:
detecting whether the target participant is separated from the preset state;
and if the target participant is detected not to be separated from the preset state within the preset time, sending the prompt information to the terminal corresponding to the identity of the target participant not separated from the preset state again.
6. A terminal, comprising:
a determining unit, a detecting unit, and a sending unit, wherein the determining unit is used for acquiring character feature data of participants by using a camera of a conference site and determining the identities of the participants according to the character feature data;
the detecting unit is used for detecting whether a target participant currently in a preset state exists among the participants determined by the determining unit;
and the sending unit is used for sending prompt information to a terminal corresponding to the identity of the target participant if the detecting unit detects that a target participant currently in the preset state exists among the participants, wherein the prompt information is used for prompting the target participant to leave the preset state.
7. The terminal of claim 6, wherein the transmitting unit comprises:
the matching unit is used for matching the character feature information of the target participant with the character feature information in the database to obtain the terminal identification information corresponding to the target participant;
and the sending subunit is used for sending prompt information to the terminal corresponding to the identity of the target participant according to the terminal identification information matched by the matching unit.
8. The terminal of claim 6, wherein the preset state is: the eye closing time exceeds a first preset time;
or,
the head lowering time exceeds a second preset time;
or,
the time away from the seat exceeds a third preset time.
9. The terminal of claim 6, wherein the character feature data includes at least one of the following information: face data, clothing data, and body shape data.
10. The terminal according to any one of claims 6 to 9, wherein the detecting unit is further specifically configured to:
detecting whether the target participant is separated from the preset state;
the sending unit is further specifically configured to:
and if the detection unit detects that the target participant does not depart from the preset state within the preset time, sending prompt information to the terminal corresponding to the identity of the target participant who does not depart from the preset state again.
CN201510293453.2A 2015-05-30 2015-05-30 A kind of method and terminal of conference management Expired - Fee Related CN104915646B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510293453.2A CN104915646B (en) 2015-05-30 2015-05-30 A kind of method and terminal of conference management

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201510293453.2A CN104915646B (en) 2015-05-30 2015-05-30 A kind of method and terminal of conference management

Publications (2)

Publication Number Publication Date
CN104915646A 2015-09-16
CN104915646B CN104915646B (en) 2018-09-04

Family

ID=54084698

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510293453.2A Expired - Fee Related CN104915646B (en) 2015-05-30 2015-05-30 A kind of method and terminal of conference management

Country Status (1)

Country Link
CN (1) CN104915646B (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103501410A (en) * 2013-10-08 2014-01-08 百度在线网络技术(北京)有限公司 Reminding method and device of shooting as well as generation method and device of detection mode
CN104284017A (en) * 2014-09-04 2015-01-14 广东欧珀移动通信有限公司 Information prompting method and device

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2017193987A1 (en) * 2016-05-13 2017-11-16 中兴通讯股份有限公司 Method and device for indicating information of participant in video conference
CN110312098A (en) * 2018-03-20 2019-10-08 麦奇数位股份有限公司 Immediately monitoring method for interactive online teaching
CN109034008A (en) * 2018-07-06 2018-12-18 合肥安华信息科技有限公司 A kind of show ground management system based on feature identification
CN111126181A (en) * 2019-12-06 2020-05-08 深圳市中电数通智慧安全科技股份有限公司 Refueling safety supervision method and device and terminal equipment

Also Published As

Publication number Publication date
CN104915646B (en) 2018-09-04

Similar Documents

Publication Publication Date Title
EP3116199B1 (en) Wearable-device-based information delivery method and related device
CN110266879B (en) Playing interface display method, device, terminal and storage medium
CN104850995B (en) Operation execution method and device
US10038790B2 (en) Communication service operating method and electronic device supporting the same
US20160028741A1 (en) Methods and devices for verification using verification code
WO2019062620A1 (en) Attendance check method and apparatus, and attendance check device
US11159260B2 (en) Method, device, system, and storage medium for live broadcast detection and data processing
CN104539776A (en) Alarm prompting method and device
US20180053177A1 (en) Resource transfer method, apparatus and storage medium
CN107613087B (en) Control method and mobile terminal
KR102607647B1 (en) Electronic apparatus and tethering connection method thereof
US20170031640A1 (en) Method, device and system for starting target function
CN104951072A (en) Application control method and terminal equipment
CN104915646B (en) A kind of method and terminal of conference management
WO2017219497A1 (en) Message generation method and apparatus
US10083346B2 (en) Method and apparatus for providing contact card
KR20190016671A (en) Communication device, server and communication method thereof
CN105516282B (en) A kind of method and wearable device of data synchronization processing
CN105159676B (en) The loading method of progress bar, device and system
US10728498B2 (en) Communication device, server and communication method thereof
WO2023051508A1 (en) Graphic code display method and apparatus
US11146672B2 (en) Method, device and storage medium for outputting communication message
CN103905837A (en) Image processing method and device and terminal
US20160088149A1 (en) Method for providing information and an electronic device thereof
CN112996138B (en) Communication establishing method, device and storage medium

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
CP01 Change in the name or title of a patent holder
CP01 Change in the name or title of a patent holder

Address after: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Patentee after: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

Address before: No. 18 Haibin Road, Wusha, Chang'an Town, Dongguan, Guangdong 523860

Patentee before: GUANGDONG OPPO MOBILE TELECOMMUNICATIONS Corp.,Ltd.

CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180904