
CN117275459B - Information acquisition equipment and information acquisition method based on big data service - Google Patents

Information acquisition equipment and information acquisition method based on big data service Download PDF

Info

Publication number
CN117275459B
Authority
CN
China
Prior art keywords
language
communication
robot
information
value
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202311193749.8A
Other languages
Chinese (zh)
Other versions
CN117275459A (en)
Inventor
曹少天
石桂林
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Youxunjia Electronic Technology Co ltd
Original Assignee
Shenzhen Youxunjia Electronic Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Youxunjia Electronic Technology Co ltd filed Critical Shenzhen Youxunjia Electronic Technology Co ltd
Priority to CN202311193749.8A priority Critical patent/CN117275459B/en
Publication of CN117275459A publication Critical patent/CN117275459A/en
Application granted granted Critical
Publication of CN117275459B publication Critical patent/CN117275459B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Classifications

    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/005 Language recognition
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/20 Speech recognition techniques specially adapted for robustness in adverse environments, e.g. in noise, of stress induced speech
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use
    • G10L25/51 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination
    • G10L25/63 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use for comparison or discrimination for estimating an emotional state
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/08 Speech classification or search
    • G10L2015/086 Recognition of spelled words
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/223 Execution procedure of a spoken command
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L15/00 Speech recognition
    • G10L15/22 Procedures used during a speech recognition process, e.g. man-machine dialogue
    • G10L2015/226 Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics
    • G10L2015/227 Procedures used during a speech recognition process, e.g. man-machine dialogue using non-speech characteristics of the speaker; Human-factor methodology

Landscapes

  • Engineering & Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • Acoustics & Sound (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • Computational Linguistics (AREA)
  • Multimedia (AREA)
  • Child & Adolescent Psychology (AREA)
  • General Health & Medical Sciences (AREA)
  • Hospice & Palliative Care (AREA)
  • Psychiatry (AREA)
  • Signal Processing (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses an information acquisition device and an information acquisition method based on big data service, and relates to the technical field of big data service. The technical scheme comprises a data acquisition module, which comprises a first data acquisition unit and a second data acquisition unit. The first data acquisition unit is used for acquiring basic information when a communication person carries out language communication with the robot; the basic information comprises language information, voice quality information and language emotion information. The second data acquisition unit is used for acquiring other language information in the environment in which the communication person carries out language communication with the robot. The voice quality information of the communication person during language communication with the robot is identified, and the language content of the communication person is recognized after the voice quality of the communication person is tracked, so that the robot provides relevant services according to the language content of the communication person.

Description

Information acquisition equipment and information acquisition method based on big data service
Technical Field
The invention relates to the technical field of big data service, in particular to information acquisition equipment and an information acquisition method based on big data service.
Background
A virtual robot is an intelligent conversation system based on natural language processing. It is an intelligent robot that integrates multiple artificial intelligence technologies, can understand and answer the questions of a communication person, and answers the consultation questions of the communication person based on natural language and neural network technologies. When a communication person talks to the robot by voice, other noise may be present in the environment around the communication person. However, conventional information acquisition equipment based on big data service cannot recognize the voice quality information of the communication person during language communication with the robot, that is, it cannot capture and track the voice content of the consultation questions of the communication person after voice quality recognition, so the robot cannot provide relevant services to the communication person well.
Disclosure of Invention
Aiming at the defects existing in the prior art, the invention aims to provide information acquisition equipment and an information acquisition method based on big data service.
In order to achieve the above purpose, the present invention provides the following technical solutions:
An information acquisition device based on big data service comprises:
The data acquisition module comprises a first data acquisition unit and a second data acquisition unit.
The first data acquisition unit is used for acquiring basic information when an exchange person carries out language exchange with the robot; the basic information comprises language information, tone quality information and language emotion information.
The second data acquisition unit is used for acquiring other language information of the environment where the robot is in language communication; the language information comprises language tone quality information, language tone quality type information and language volume information.
The classification module is used for processing and analyzing the voice quality information of the communication person during language communication with the robot and classifying it as the effective communication language; the voice quality information of other languages in the environment in which the communication person communicates with the robot is classified as invalid communication languages.
The tracking acquisition module is used for tracking and collecting speech with the same voice quality as the effective communication language so as to acquire the language content of the communication person in real time, and the robot provides relevant services for the communication person according to the language content of the communication person (a minimal sketch of one way such separation and tracking could be realised follows the module descriptions below).
The pushing module is used for processing and analyzing language emotion information when the communication person is in language communication with the robot to obtain a state value, processing and analyzing other language tone quality type information and language volume information of the environment where the communication person is in the language communication with the robot to obtain an environment value, and processing and analyzing the state value and the environment value in a combined mode to obtain a pushing value, wherein the robot promotes and selects services to the communication person according to the pushing value.
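The patent text does not detail how the classification and tracking modules separate the speech of the communication person from the other speech in the environment. The following is a minimal sketch of one plausible realisation using voiceprint (speaker-embedding) similarity; the extract_voiceprint feature extractor and the similarity threshold are assumptions made only for illustration, not the patented method.

    import numpy as np

    def extract_voiceprint(audio_frame: np.ndarray) -> np.ndarray:
        # Toy stand-in for a real speaker embedding (MFCCs, x-vectors, ...):
        # the magnitude spectrum of the frame averaged into 8 coarse bands.
        spectrum = np.abs(np.fft.rfft(audio_frame))
        bands = np.array_split(spectrum, 8)
        return np.array([band.mean() for band in bands])

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

    def classify_and_track(frames, reference_voiceprint, threshold=0.9):
        """Split incoming audio frames into the effective communication language
        (same voice quality as the communication person) and invalid
        communication languages (other speakers in the environment)."""
        effective, invalid = [], []
        for frame in frames:
            vp = extract_voiceprint(frame)
            if cosine_similarity(vp, reference_voiceprint) >= threshold:
                effective.append(frame)   # tracked and passed on to speech recognition
            else:
                invalid.append(frame)     # environmental speech, not recognized
        return effective, invalid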
Preferably, the robot provides relevant services for the communication person according to the language content of the communication person, specifically:
identifying language information when the communication person is carrying out language communication with the robot.
The robot selects the same language according to the language content of the communication person to provide relevant services for the communication person.
When the communication person asks a question by voice, the robot receives the voice consultation question of the communication person and matches an answer related to the consultation question in the database. The communication person may ask the robot questions in different languages: if the language in which the communication person asks the robot is Chinese, the robot answers the question of the communication person in Chinese, and if the language in which the communication person asks the robot is English, the robot answers the question of the communication person in English.
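As a rough illustration of this same-language answering behaviour (not the patented matching logic itself), the sketch below keys a small answer database by detected language; the detect_language helper, the topics and the answer texts are all assumed for the example.

    def detect_language(text: str) -> str:
        """Crude stand-in for language identification: any CJK character is
        taken to mean Chinese, otherwise English. Real systems would use a
        trained language identifier."""
        return "zh" if any("\u4e00" <= ch <= "\u9fff" for ch in text) else "en"

    # Hypothetical answer database, keyed by language and then by question topic.
    ANSWERS = {
        "zh": {"营业时间": "我们每天 9:00 到 18:00 营业。"},
        "en": {"opening hours": "We are open from 9:00 to 18:00 every day."},
    }

    def answer(question: str) -> str:
        lang = detect_language(question)
        for topic, reply in ANSWERS[lang].items():
            if topic in question:
                return reply  # answer in the same language as the question
        fallback = {"zh": "抱歉，我没有找到相关答案。",
                    "en": "Sorry, I could not find a matching answer."}
        return fallback[lang]

    print(answer("What are your opening hours?"))   # answered in English
    print(answer("请问你们的营业时间是什么？"))       # answered in Chinese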
Preferably, the language emotion information of the communication person during language communication with the robot is valued and marked to obtain the state value ZTZ.
It should be noted that the mood state of the communication person can be judged by analyzing the language of the communication person. For example, if polite words such as "hello" and "please" appear in the sentences the communication person speaks to the robot, the mood state of the communication person is good; if negative interjections such as "ai" or "aiya" appear in the sentences, the mood state of the communication person is poor.
If words such as "hello" and "please" appear in the sentence the communication person speaks to the robot, the state value ZTZ takes the value 1; if interjections such as "ai" or "aiya" appear in the sentence, the state value ZTZ takes the value 5.
The state value ZTZ is compared to a preset state threshold P.
If the state value ZTZ is less than or equal to the preset state threshold value P, the state of the communication person when the communication person is communicating with the robot in language is better.
If the status value ZTZ > the preset status threshold value P, it indicates that the status of the communication partner is worse when the communication partner is communicating with the robot.
Here, the preset state threshold P is set to 3. When the state value ZTZ is 5, since the state value ZTZ is greater than the preset state threshold P, the state of the communication person during language communication with the robot is worse; when the state value ZTZ is 1, since the state value ZTZ is smaller than the preset state threshold P, the state of the communication person during language communication with the robot is better.
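Read literally, the state value is a keyword-triggered score compared against the threshold P; the sketch below encodes that rule. The keyword lists and the neutral default of 3 for sentences containing neither kind of word are assumptions, since the text only gives the two example cases.

    POLITE_WORDS = ("hello", "please", "你好", "您好", "请问")   # polite wording -> ZTZ = 1
    ANNOYED_WORDS = ("aiya", "哎", "哎呀")                       # annoyed wording -> ZTZ = 5
    STATE_THRESHOLD_P = 3                                        # preset state threshold P

    def state_value(sentence: str) -> int:
        """Mark the language emotion information of one sentence as ZTZ."""
        if any(word in sentence for word in ANNOYED_WORDS):
            return 5
        if any(word in sentence for word in POLITE_WORDS):
            return 1
        return 3  # assumed neutral default, not specified in the text

    def mood_is_good(ztz: int) -> bool:
        # ZTZ <= P means the communication person's state is good, ZTZ > P means poor.
        return ztz <= STATE_THRESHOLD_P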
Preferably, the other language voice quality type information of the environment in which the communication person carries out language communication with the robot is marked and obtained as the other language voice quality type value YYZ.
It should be noted that, because the voice characteristics of each speaker are different, if two other people are speaking around the communication person while the communication person communicates with the robot, there are two other language voice quality types in the environment, namely the other language voice quality type value YYZ is 2; if five other people are speaking around the communication person, there are five other language voice quality types in the environment, namely the other language voice quality type value YYZ is 5.
The other language volume information of the environment in which the communication person carries out language communication with the robot is marked and obtained as the other language volume value YLZ.
Preferably, the environment value HJZ is calculated through the processing function HJZ = a1×YYZ + a2×YLZ; wherein a1 and a2 are influence factors and are greater than zero.
Preferably, the environmental value HJZ is compared with a preset environmental threshold Q.
If the environmental value HJZ is smaller than or equal to the preset environmental threshold value Q, the environmental noise is small when the robot is in language communication.
If the environment value HJZ is larger than the preset environment threshold value Q, the environment noise is large when the robot is in language communication.
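The environment value is an explicit weighted sum, so it maps directly onto code; the defaults below (a1 = a2 = 1, Q = 60) are taken from the worked example in the detailed description.

    ENV_THRESHOLD_Q = 60  # preset environment threshold Q

    def environment_value(yyz: int, ylz: float, a1: float = 1.0, a2: float = 1.0) -> float:
        """HJZ = a1*YYZ + a2*YLZ, where YYZ is the number of distinct other
        speakers heard in the environment and YLZ is the ambient volume in dB."""
        return a1 * yyz + a2 * ylz

    def environment_is_noisy(hjz: float) -> bool:
        return hjz > ENV_THRESHOLD_Q

    # Worked values from the description: 2 other speakers at 50 dB gives HJZ = 52 (quiet),
    # 5 other speakers at 65 dB gives HJZ = 70 (noisy).
    assert environment_value(2, 50) == 52 and not environment_is_noisy(52)
    assert environment_value(5, 65) == 70 and environment_is_noisy(70)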
Preferably, the push value TGZ is calculated through a joint function of the state value ZTZ and the environment value HJZ; wherein b1 and b2 are influence factors and are greater than zero.
Preferably, the push value TGZ is compared with a preset push threshold a.
If the push value TGZ is less than or equal to a preset push threshold A, the robot is not required to popularize and select service to the communication person.
If the push value TGZ is greater than the preset push threshold A, the robot is required to promote the selection service to the communication person.
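The joint function that combines the state value and the environment value into the push value is not reproduced in this text, so the sketch below only fixes the decision rule around it and takes the combining function as a parameter; the example lambda at the end is purely an assumed placeholder, and the defaults b1 = 1, b2 = 100 and A = 4 come from the worked example in the detailed description.

    from typing import Callable

    PUSH_THRESHOLD_A = 4  # preset push threshold A

    def push_value(ztz: int, hjz: float,
                   joint: Callable[[int, float, float, float], float],
                   b1: float = 1.0, b2: float = 100.0) -> float:
        """TGZ = joint(ZTZ, HJZ, b1, b2); the joint function itself is supplied
        by the caller because the patent's formula is not reproduced here."""
        return joint(ztz, hjz, b1, b2)

    def should_push_services(tgz: float) -> bool:
        # TGZ <= A: no push needed; TGZ > A: promote music/movie selection services.
        return tgz > PUSH_THRESHOLD_A

    # Assumed placeholder combining rule (higher for bad mood and quiet surroundings):
    example_joint = lambda ztz, hjz, b1, b2: b1 * ztz + b2 / max(hjz, 1.0)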
An information acquisition method based on big data service comprises the following steps.
Collecting basic information when an exchange person carries out language exchange with a robot; the basic information comprises language information, tone quality information and language emotion information.
Collecting other language information of the environment where the robot is in language communication; the language information comprises language tone quality information, language tone quality type information and language volume information.
Processing and analyzing tone quality information when the communication person is in language communication with the robot, and classifying the tone quality information into effective communication languages; the voice quality information of other languages in the environment where the robot is in language communication is classified as invalid communication languages.
The voice quality information of the same language as the effective communication language is tracked and collected to acquire the language content of the communication person in real time, and the robot provides relevant services for the communication person according to the language content of the communication person.
The method comprises the steps of processing and analyzing language emotion information when an exchange person carries out language exchange with a robot to obtain a state value, processing and analyzing other language tone quality type information and language volume information of an environment where the exchange person carries out language exchange with the robot to obtain an environment value, carrying out joint processing and analysis on the state value and the environment value to obtain a push value, and promoting and selecting service to the exchange person by the robot according to the push value.
Compared with the prior art, the invention has the following beneficial effects:
because the environment around the communication person possibly has other noise when the communication person speaks into the robot, the voice quality information of the communication person when the communication person carries out voice communication with the robot is identified, and the voice content of the communication person is identified after the voice quality of the communication person is tracked, so that the robot provides relevant services according to the voice content of the communication person.
Meanwhile, the pushing value is obtained by processing and analyzing the language emotion information when the communication person is in language communication with the robot, the other language tone quality type information and the language volume information of the environment where the communication person is in language communication with the robot, namely, the mood state of the communication person can be judged by analyzing the language emotion information when the communication person is in language communication with the robot, and meanwhile, whether the environment where the communication person is complicated or not is judged according to the other language tone quality type information and the language volume information of the environment where the communication person is in language communication with the robot, so that the communication person can be promoted and selected for service according to actual conditions.
Drawings
Fig. 1 is a schematic block diagram of an information acquisition device based on big data service according to the present invention;
fig. 2 is a schematic flow chart of an information collection method based on big data service according to the present invention;
fig. 3 is a schematic flow chart of calculating an environment value in an information collection method based on big data service according to the present invention.
Detailed Description
Reference is made to figures 1 to 3.
The embodiment provides information acquisition equipment and an information acquisition method based on big data service.
An information acquisition device based on big data service includes.
The data acquisition module comprises a first data acquisition unit and a second data acquisition unit.
The first data acquisition unit is used for acquiring basic information when an exchange person carries out language exchange with the robot; the basic information comprises language information, tone quality information and language emotion information.
The second data acquisition unit is used for acquiring other language information of the environment where the robot is in the process of carrying out language communication; the language information comprises language tone quality information, language tone quality type information and language volume information.
The classification module is used for processing and analyzing the voice quality information of the communication person during language communication with the robot and classifying it as the effective communication language; the voice quality information of other languages in the environment in which the communication person communicates with the robot is classified as invalid communication languages.
And the tracking acquisition module is used for tracking and acquiring the voice quality information which is the same as the effective communication language to acquire the language content of the communication person in real time, and the robot provides relevant services for the communication person according to the language content of the communication person.
The pushing module processes and analyzes language emotion information when the communication person carries out language communication with the robot to obtain a state value, processes and analyzes other language tone quality type information and language volume information of the environment where the communication person carries out language communication with the robot to obtain an environment value, processes and analyzes the state value and the environment value to obtain a pushing value, and the robot promotes and selects service to the communication person according to the pushing value.
Because the environment around the communication person possibly has other noise when the communication person speaks into the robot, the voice quality information of the communication person when the communication person carries out voice communication with the robot is identified, and the voice content of the communication person is identified after the voice quality of the communication person is tracked, so that the robot provides relevant services according to the voice content of the communication person.
Meanwhile, the pushing value is obtained by processing and analyzing the language emotion information when the communication person is in language communication with the robot, the other language tone quality type information and the language volume information of the environment where the communication person is in language communication with the robot, namely, the mood state of the communication person can be judged by analyzing the language emotion information when the communication person is in language communication with the robot, and meanwhile, whether the environment where the communication person is complicated or not is judged according to the other language tone quality type information and the language volume information of the environment where the communication person is in language communication with the robot, so that the communication person can be promoted and selected for service according to actual conditions.
The robot provides relevant services for the communication person according to the language content of the communication person, and specifically comprises the following steps:
identifying language information when the communication person is carrying out language communication with the robot.
The robot selects the same language according to the language content of the communication person to provide relevant services for the communication person.
When the communication person asks a question by voice, the robot receives the voice consultation question of the communication person and matches an answer related to the consultation question in the database. The communication person may ask the robot questions in different languages: if the language in which the communication person asks the robot is Chinese, the robot answers the question of the communication person in Chinese, and if the language in which the communication person asks the robot is English, the robot answers the question of the communication person in English.
The language emotion information of the communication person during language communication with the robot is valued and marked to obtain the state value ZTZ.
It should be noted that the mood state of the communication person can be judged by analyzing the language of the communication person. For example, if polite words such as "hello" and "please" appear in the sentences the communication person speaks to the robot, the mood state of the communication person is good; if negative interjections such as "ai" or "aiya" appear in the sentences, the mood state of the communication person is poor.
If words such as "hello" and "please" appear in the sentence the communication person speaks to the robot, the state value ZTZ takes the value 1; if interjections such as "ai" or "aiya" appear in the sentence, the state value ZTZ takes the value 5.
The state value ZTZ is compared to a preset state threshold P.
If the state value ZTZ is less than or equal to the preset state threshold value P, the state of the communication person when the communication person is communicating with the robot in language is better.
If the status value ZTZ > the preset status threshold value P, it indicates that the status of the communication partner is worse when the communication partner is communicating with the robot.
Here, the preset state threshold P is set to 3. When the state value ZTZ is 5, since the state value ZTZ is greater than the preset state threshold P, the state of the communication person during language communication with the robot is worse; when the state value ZTZ is 1, since the state value ZTZ is smaller than the preset state threshold P, the state of the communication person during language communication with the robot is better.
The other language voice quality type information of the environment in which the communication person carries out language communication with the robot is marked and obtained as the other language voice quality type value YYZ.
It should be noted that, because the voice characteristics of each speaker are different, if two other people are speaking around the communication person while the communication person communicates with the robot, there are two other language voice quality types in the environment, namely the other language voice quality type value YYZ is 2; if five other people are speaking around the communication person, there are five other language voice quality types in the environment, namely the other language voice quality type value YYZ is 5.
The other language volume information of the environment in which the communication person carries out language communication with the robot is marked and obtained as the other language volume value YLZ.
It should be noted that if the other language volume of the environment in which the communication person carries out language communication with the robot is 50 dB, the other language volume value YLZ is 50, and if it is 65 dB, the other language volume value YLZ is 65.
The environment value HJZ is calculated through the processing function HJZ = a1×YYZ + a2×YLZ; wherein a1 and a2 are influence factors and are greater than zero.
It should be noted that here the values of a1 and a2 are both set to 1. When the other language voice quality type value YYZ is 2 and the other language volume value YLZ is 50, the environment value HJZ calculated through the processing function HJZ = a1×YYZ + a2×YLZ is 52; when the other language voice quality type value YYZ is 5 and the other language volume value YLZ is 65, the environment value HJZ calculated through the processing function HJZ = a1×YYZ + a2×YLZ is 70.
And comparing the environment value HJZ with a preset environment threshold value Q.
If the environmental value HJZ is smaller than or equal to the preset environmental threshold value Q, the environmental noise is small when the robot is in language communication.
If the environment value HJZ is larger than the preset environment threshold value Q, the environment noise is large when the robot is in language communication.
Setting the preset environment threshold value Q to 60, and under the condition that the environment value HJZ is 52, indicating that the environment noise is small when the robot is in language communication because the environment value HJZ is smaller than the preset environment threshold value Q; under the condition that the environmental value HJZ is 70, the environmental value HJZ is larger than the preset environmental threshold value Q, so that the environmental noise is large when the robot is in language communication.
The push value TGZ is calculated through a joint function of the state value ZTZ and the environment value HJZ; wherein b1 and b2 are influence factors and are greater than zero.
It should be noted that here the value of b1 is set to 1 and the value of b2 is set to 100. When the state value ZTZ is 1 and the environment value HJZ is 50, the push value TGZ calculated through the joint function is 3; when the state value ZTZ is 5 and the environment value HJZ is 50, the push value TGZ calculated through the joint function is 6.
And comparing the push value TGZ with a preset push threshold A.
If the push value TGZ is less than or equal to a preset push threshold A, the robot is not required to popularize and select service to the communication person.
If the push value TGZ is greater than the preset push threshold A, the robot is required to promote the selection service to the communication person.
It should be noted that here the preset push threshold A is set to 4. In the case that the push value TGZ is 3, since the push value TGZ is less than the preset push threshold A, the robot is not required to promote the selection service to the communication person; this indicates that the mood state of the communication person is better and the surrounding environment is relatively noisy, so the robot does not need to provide the selection service of listening to music and watching movies to the communication person. In the case that the push value TGZ is 6, since the push value TGZ is greater than the preset push threshold A, the robot is required to promote the selection service to the communication person; that is, the mood state of the communication person is poor and the surrounding environment is relatively quiet, so the robot needs to provide the selection service of listening to music and watching movies to the communication person.
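Putting the pieces together, one complete exchange can be scored as below; this sketch reuses the helper functions from the earlier sketches (state_value, mood_is_good, environment_value, environment_is_noisy, push_value, should_push_services, example_joint), so the combining rule is still only an assumed placeholder rather than the patented joint function.

    def handle_exchange(sentence: str, other_speaker_count: int, ambient_db: float) -> dict:
        ztz = state_value(sentence)                                # mood from keywords
        hjz = environment_value(other_speaker_count, ambient_db)   # a1*YYZ + a2*YLZ
        tgz = push_value(ztz, hjz, example_joint)                  # placeholder joint function
        return {
            "mood_good": mood_is_good(ztz),
            "environment_noisy": environment_is_noisy(hjz),
            "push_music_and_movies": should_push_services(tgz),
        }

    # A polite question in a fairly quiet room: no push service needed.
    print(handle_exchange("hello, please tell me the opening hours", 2, 50))
    # An annoyed remark in the same room: the push service is triggered.
    print(handle_exchange("aiya, this is not what I asked", 2, 50))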
Example two
The following technical features are added on the basis of the first embodiment:
an information acquisition method based on big data service comprises the following steps.
Collecting basic information when an exchange person carries out language exchange with a robot; the basic information comprises language information, tone quality information and language emotion information.
Collecting other language information of the environment where the robot is in language communication; the language information comprises language tone quality information, language tone quality type information and language volume information.
Processing and analyzing tone quality information when the communication person is in language communication with the robot, and classifying the tone quality information into effective communication languages; the voice quality information of other languages in the environment where the robot is in language communication is classified as invalid communication languages.
The voice quality information of the same language as the effective communication language is tracked and collected to acquire the language content of the communication person in real time, and the robot provides relevant services for the communication person according to the language content of the communication person.
The method comprises the steps of processing and analyzing language emotion information when an exchange person carries out language exchange with a robot to obtain a state value, processing and analyzing other language tone quality type information and language volume information of an environment where the exchange person carries out language exchange with the robot to obtain an environment value, carrying out joint processing and analysis on the state value and the environment value to obtain a push value, and promoting and selecting service to the exchange person by the robot according to the push value.
The above description is only a preferred embodiment of the present invention, and the protection scope of the present invention is not limited to the above examples, and all technical solutions belonging to the concept of the present invention belong to the protection scope of the present invention. It should be noted that modifications and adaptations to the present invention may occur to one skilled in the art without departing from the principles of the present invention and are intended to be within the scope of the present invention.

Claims (5)

1. An information acquisition device based on big data service, comprising:
the data acquisition module comprises a first data acquisition unit and a second data acquisition unit;
the first data acquisition unit is used for acquiring basic information when an exchange person carries out language exchange with the robot; the basic information comprises language information, tone quality information and language emotion information;
the second data acquisition unit is used for acquiring other language information of the environment where the robot is in language communication; the language information comprises language tone quality information, language tone quality type information and language volume information;
the classification module is used for processing and analyzing the voice quality information of the communication person during language communication with the robot and classifying it as the effective communication language; the voice quality information of other languages in the environment in which the communication person communicates with the robot is classified as invalid communication languages;
the tracking acquisition module is used for tracking and acquiring the voice quality information of the same language as the effective communication language to acquire the language content of the communication person in real time, and the robot provides relevant services for the communication person according to the language content of the communication person;
the pushing module is used for processing and analyzing language emotion information when the communication person carries out language communication with the robot to obtain a state value, processing and analyzing other language tone quality type information and language volume information of the environment where the communication person carries out language communication with the robot to obtain an environment value, carrying out joint processing and analysis on the state value and the environment value to obtain a pushing value, and promoting and selecting service to the communication person by the robot according to the pushing value;
the language emotion information of the communication person when carrying out language communication with the robot is valued and marked to obtain a state value ZTZ;
marking and obtaining the other language tone quality type information of the environment in which the communication person carries out language communication with the robot to obtain the other language tone quality type value YYZ;
marking and obtaining the other language volume information of the environment in which the communication person carries out language communication with the robot to obtain the other language volume value YLZ;
calculating the environment value HJZ through the processing function HJZ = a1×YYZ + a2×YLZ; wherein a1 and a2 are influence factors and are greater than zero;
calculating the push value TGZ through a joint function of the state value ZTZ and the environment value HJZ; wherein b1 and b2 are influence factors and are greater than zero;
comparing the push value TGZ with a preset push threshold A:
if the pushing value TGZ is less than or equal to a preset pushing threshold A, the robot is not required to popularize and select service to the communication person;
if the push value TGZ is greater than the preset push threshold A, the robot is required to promote the selection service to the communication person.
2. The information acquisition device based on big data service according to claim 1, wherein the robot provides relevant services for the communication person according to the language content of the communication person, specifically:
identifying language information when the communication person is carrying out language communication with the robot;
the robot selects the same language according to the language content of the communication person to provide relevant services for the communication person.
3. The big data service based information collection device according to claim 2, wherein the status value ZTZ is compared with a preset status threshold P:
if the state value ZTZ is less than or equal to the preset state threshold value P, the state of the communication person when the communication person is in language communication with the robot is better;
if the status value ZTZ > the preset status threshold value P, it indicates that the status of the communication partner is worse when the communication partner is communicating with the robot.
4. A big data service based information acquisition device according to claim 3, characterized in that the environmental value HJZ is compared with a preset environmental threshold Q:
if the environmental value HJZ is less than or equal to a preset environmental threshold value Q, the environmental noise is small when the robot is in language communication;
if the environment value HJZ is larger than the preset environment threshold value Q, the environment noise is large when the robot is in language communication.
5. An information acquisition method based on big data service, applying an information acquisition device based on big data service as claimed in claim 1, characterized in that the method comprises the following steps:
collecting basic information when an exchange person carries out language exchange with a robot; the basic information comprises language information, tone quality information and language emotion information;
collecting other language information of the environment where the robot is in language communication; the language information comprises language tone quality information, language tone quality type information and language volume information;
processing and analyzing tone quality information when the communication person is in language communication with the robot, and classifying the tone quality information into effective communication languages; classifying and classifying voice quality information of other languages in the environment where the robot is in language communication into invalid communication languages;
the voice quality information of the same language as the effective communication language is tracked, collected and real-time acquired, and the robot provides relevant services for the communication person according to the language content of the communication person;
the method comprises the steps of processing and analyzing language emotion information when an exchange person carries out language exchange with a robot to obtain a state value, processing and analyzing other language tone quality type information and language volume information of an environment where the exchange person carries out language exchange with the robot to obtain an environment value, carrying out joint processing and analysis on the state value and the environment value to obtain a push value, and promoting and selecting service to the exchange person by the robot according to the push value.
CN202311193749.8A 2023-09-15 2023-09-15 Information acquisition equipment and information acquisition method based on big data service Active CN117275459B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311193749.8A CN117275459B (en) 2023-09-15 2023-09-15 Information acquisition equipment and information acquisition method based on big data service

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311193749.8A CN117275459B (en) 2023-09-15 2023-09-15 Information acquisition equipment and information acquisition method based on big data service

Publications (2)

Publication Number Publication Date
CN117275459A CN117275459A (en) 2023-12-22
CN117275459B true CN117275459B (en) 2024-03-29

Family

ID=89203627

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311193749.8A Active CN117275459B (en) 2023-09-15 2023-09-15 Information acquisition equipment and information acquisition method based on big data service

Country Status (1)

Country Link
CN (1) CN117275459B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107548474A (en) * 2015-02-11 2018-01-05 谷歌公司 The method, system and the medium that are used for environmental background noise modification based on mood and/or behavioural information
CN111949821A (en) * 2020-06-24 2020-11-17 百度在线网络技术(北京)有限公司 Video recommendation method and device, electronic equipment and storage medium
CN113627196A (en) * 2021-07-21 2021-11-09 前海企保科技(深圳)有限公司 Multi-language conversation robot system based on context and Transformer and conversation method thereof

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102302439B1 (en) * 2014-02-21 2021-09-15 삼성전자주식회사 Electronic device

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107548474A (en) * 2015-02-11 2018-01-05 谷歌公司 The method, system and the medium that are used for environmental background noise modification based on mood and/or behavioural information
CN111949821A (en) * 2020-06-24 2020-11-17 百度在线网络技术(北京)有限公司 Video recommendation method and device, electronic equipment and storage medium
CN113627196A (en) * 2021-07-21 2021-11-09 前海企保科技(深圳)有限公司 Multi-language conversation robot system based on context and Transformer and conversation method thereof

Also Published As

Publication number Publication date
CN117275459A (en) 2023-12-22

Similar Documents

Publication Publication Date Title
CN106294774A (en) User individual data processing method based on dialogue service and device
CN110310668A (en) Mute detection method, system, equipment and computer readable storage medium
CN112562681B (en) Speech recognition method and apparatus, and storage medium
CN108132952A (en) A kind of active searching method and device based on speech recognition
CN114267347A (en) Multi-mode rejection method and system based on intelligent voice interaction
CN115602165B (en) Digital employee intelligent system based on financial system
CN113948079A (en) Human-computer interaction method for distribution network dispatching based on artificial intelligence speech recognition
CN115798518A (en) Model training method, device, equipment and medium
CN117275459B (en) Information acquisition equipment and information acquisition method based on big data service
CN113314103B (en) Illegal information identification method and device based on real-time speech emotion analysis
Wang et al. Interference quality assessment of speech communication based on deep learning
CN112087726B (en) Method and system for identifying polyphonic ringtone, electronic equipment and storage medium
CN113810548A (en) Intelligent call quality inspection method and system based on IOT
CN110807370B (en) Conference speaker identity noninductive confirmation method based on multiple modes
CN107180629B (en) Voice acquisition and recognition method and system
CN1694162A (en) Phonetic recognition analysing system and service method
CN113593580B (en) Voiceprint recognition method and device
CN116308740A (en) Post-credit collection system
Li et al. SONAR: A Synthetic AI-Audio Detection Framework and Benchmark
CN117411970B (en) Man-machine coupling customer service control method and system based on sound processing
CN117153151B (en) Emotion recognition method based on user intonation
CN117935862B (en) A system for extracting key information from speech
CN117711389A (en) Voice interaction method, device, server and storage medium
CN118609536A (en) Audio generation method, device, equipment and storage medium
CN118737150A (en) Intelligent bionic voice service system and method

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right
TA01 Transfer of patent application right

Effective date of registration: 20240306

Address after: Unit 302, Unit 1, Gaojian Science and Technology Park, No. 3 Guanbao Road, Luhu Community, Guanhu Street, Longhua District, Shenzhen City, Guangdong Province, 518000

Applicant after: Shenzhen youxunjia Electronic Technology Co.,Ltd.

Country or region after: China

Address before: Room 110-A, Building 1, Phase 1, Jingang Science and Technology Innovation Park, No.1 Science and Technology Innovation Road, Yaohua Street, Qixia District, Nanjing City, Jiangsu Province, 210000

Applicant before: Tianjia Technology (Nanjing) Co.,Ltd.

Country or region before: China

GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: An information collection device and method based on big data services

Granted publication date: 20240329

Pledgee: Shenzhen Rural Commercial Bank Co.,Ltd. Longhua Sub branch

Pledgor: Shenzhen youxunjia Electronic Technology Co.,Ltd.

Registration number: Y2024980035826