
CN106454060A - Video-audio management method and video-audio management system - Google Patents

Video-audio management method and video-audio management system Download PDF

Info

Publication number
CN106454060A
CN106454060A (application CN201510482476.8A)
Authority
CN
China
Prior art keywords
audio
video file
emotion
label
physiologic information
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN201510482476.8A
Other languages
Chinese (zh)
Inventor
李冠慰
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
HTC Corp
Original Assignee
High Tech Computer Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by High Tech Computer Corp filed Critical High Tech Computer Corp
Priority to CN201510482476.8A priority Critical patent/CN106454060A/en
Publication of CN106454060A publication Critical patent/CN106454060A/en
Withdrawn legal-status Critical Current

Landscapes

  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)

Abstract

The invention discloses a video-audio management method and a video-audio management system. The video-audio management method comprises the following steps: a video-audio file is acquired, and during the acquisition of the video-audio file, an emotion label corresponding to the video-audio file is generated. When users work with large numbers of video-audio files, the emotion labels make it possible to manage, categorize, edit, or apply special effects to the shooting situations corresponding to the video-audio files flexibly and conveniently.

Description

Audio-visual management method and system
Technical field
The present invention relates to an audio-visual management method and an audio-visual management system. More specifically, the present invention relates to an audio-visual management method and an audio-visual management system that apply emotion labels.
Background
With the development of science and technology, digital images have become widely used in people's lives. In general, a user may store a large number of digital images in an electronic device and classify those digital images manually, or manage them through the device's default sorting modes, for example sorting by file size, modification date, or file name.
However, among a large number of digital images, it is difficult for a user to judge or record, image by image, the emotion or physiological information of the photographer or the person being photographed at the moment of shooting, and to manage the images accordingly. On the other hand, when a user wants to apply special effects to an image, whether the effect to be added is selected manually or automatically, the same effect is applied uniformly across the whole film; the effect cannot be applied adaptively to individual image fragments according to the emotion or physiological information of the photographer or the person being photographed at the moment of shooting. This limits the applications of digital images.
Summary of the invention
One aspect of the present invention provides an audio-visual management method. The method comprises the following steps: capturing an audio/video file; wherein, when capturing the audio/video file, an emotion label corresponding to the audio/video file is generated.
Another aspect of the present invention provides an audio-visual management system. The audio-visual management system includes an audio/video acquisition module and a processing device. The audio/video acquisition module captures an audio/video file; the processing device generates, when the audio/video file is captured, an emotion label corresponding to the audio/video file.
By applying the audio-visual management method and system described above, a user can obtain, among a large number of audio/video files, the emotion or physiological information of the photographer or the person being photographed at the moment each file was shot, and generate an emotion label for the corresponding audio/video file according to that emotion or physiological information. Audio/video files can thereby be managed, classified, edited, or given special effects more flexibly and conveniently.
Brief description
Fig. 1 is a block diagram of an audio-visual management system according to an embodiment of the invention;
Fig. 2 is a block diagram of the internal elements of a sensing module according to an embodiment of the invention;
Fig. 3 is a flow chart of an audio-visual management method according to an embodiment of the invention;
Fig. 4 is a block diagram of an audio-visual management system according to an embodiment of the invention;
Fig. 5 is a schematic diagram of the user interface of an audio-visual management system according to an embodiment of the invention;
Fig. 6 is a schematic diagram of the user interface of an audio-visual management system according to an embodiment of the invention;
Fig. 7 is a schematic diagram of the user interface of an audio-visual management system according to an embodiment of the invention.
Specific embodiment
Referring to Fig. 1, Fig. 1 is a block diagram of an audio-visual management system 100 according to an embodiment of the invention. As shown in Fig. 1, the audio-visual management system 100 comprises an audio/video acquisition module 10 and a processing device 30. The audio/video acquisition module 10 captures an audio/video file and is connected to the processing device 30 through a wired or wireless connection. The processing device 30 processes the audio/video file captured by the audio/video acquisition module 10.
In one embodiment, the processing device 30 comprises a facial expression recognition module 32, an emotion analysis module 34, an emotion label generation module 36, and an output unit 38. Within the processing device 30, the facial expression recognition module 32 is electrically coupled to the audio/video acquisition module 10, the emotion analysis module 34 is electrically coupled to the facial expression recognition module 32, and the emotion label generation module 36 is electrically coupled to the emotion analysis module 34. The facial expression recognition module 32 identifies the facial expression of a user in the audio/video file captured by the audio/video acquisition module 10. The emotion analysis module 34 analyzes the emotion conveyed by the facial expression in the audio/video file, for example by comparing the identified facial expression with the emotion-related expressions stored in advance in a database 42, to determine which emotion the facial expression belongs to. The emotion label generation module 36 generates an emotion label according to the result of the emotion analysis module 34 and embeds the emotion label in the audio/video file, or generates an emotion label corresponding to the audio/video file and stores it in a default or designated folder (for example, in a storage unit 40). Then, when the user wishes to use the emotion label, through the processing device 30, to add audio-visual effects to the audio/video file corresponding to that label, the output unit 38 outputs the audio/video file with the added effects.
It should be noted that in various embodiments of the present invention, the processing device 30 can be a processor or a controller. The facial expression recognition module 32, emotion analysis module 34, emotion label generation module 36, and output unit 38 in the processing device 30 can each, or in combination, be implemented as circuitry such as a micro-control unit (microcontroller), a microprocessor, a digital signal processor, an application-specific integrated circuit (ASIC), or a logic circuit. The audio/video acquisition module 10 can be a digital camera comprising a charge-coupled device (CCD) or complementary metal-oxide semiconductor (CMOS) sensor and a sound-pickup element.
In other words, the audio/video acquisition module 10 captures an audio/video file, and an emotion label corresponding to that file is generated through the processing device 30. The audio/video file may comprise at least one of a picture file, an audio file, or a film file. For example, a user captures an audio/video file that includes a child using the audio/video acquisition module 10 (for example, a digital camera), and the facial expression recognition module 32 in the processing device 30 recognizes the child's facial expression. If the facial expression recognition module 32 judges that the child's expression is a smile, the emotion analysis module 34 analyzes the child's expression as a happy emotion, and the emotion label generation module 36 generates an emotion label corresponding to the child's facial image, the label representing a happy attribute. As another example, the sound-pickup element of the audio/video acquisition module 10 can capture the audio in the audio/video file of the child. If that audio is relatively loud (for example, judged against a preset frequency or volume threshold), the emotion analysis module 34 analyzes the child as being in an excited emotion, and the emotion label generation module 36 generates an emotion label for the child representing an excited attribute. The user can then use the emotion labels to classify, edit, or apply special effects to audio/video files.
In one embodiment, the facial expression recognition module 32 can use the sound or facial expression in the audio/video file (e.g., the upward angle of the corners of the mouth or the movement range of the corners of the eyes) to judge the emotion of the photographer or the person being photographed. For example, when the facial expression recognition module 32 judges that the upward angle of the mouth corners of the person in the audio-visual frame exceeds an angle threshold, and the photographer is speaking loudly, the emotion analysis module 34 can infer that both the photographer and the person being photographed are in a relatively excited emotional state at the scene, and the emotion label generation module 36 generates an emotion label marking the corresponding audio/video segment as excited. In other words, the emotion label generation module 36 can generate an emotion label corresponding to the audio/video file captured by the audio/video acquisition module 10, so that the file can be managed or used according to its emotion label.
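The thresholded decision rule described above can be sketched as follows. This is a minimal illustration only; the function name, threshold values, and emotion categories are assumptions for illustration and are not taken from the patent.

```python
def classify_emotion(mouth_angle_deg, speech_volume_db,
                     angle_threshold=15.0, volume_threshold=70.0):
    """Toy version of the thresholded rule in the text: raised mouth
    corners plus loud speech -> excited, raised corners alone -> happy,
    otherwise neutral. Threshold values are illustrative assumptions."""
    if mouth_angle_deg > angle_threshold:
        if speech_volume_db > volume_threshold:
            return "excited"
        return "happy"
    return "neutral"
```

A real implementation would derive the mouth-corner angle from a facial-landmark detector and the volume from the sound-pickup element, but the labeling decision reduces to comparisons of this kind.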
In one embodiment, the audio-visual management system 100 also comprises a storage unit 40 for storing various data; it may be, for example, memory, a hard disk, a flash drive, or a memory card. The storage unit 40 is electrically coupled to the processing device 30 and can further include a database 42.
In one embodiment, the audio-visual management system 100 also comprises a user interface 50 that provides the user with an operating interface.
In one embodiment, the audio-visual management system 100 can further include a sensing module 20. The sensing module 20 can be composed of at least one sensor, is connected to the processing device 30 and the audio/video acquisition module 10 by a wireless or wired connection, and measures a physiological-information sensing signal. The physiological-information sensing signal may include a pupil sensing value, a temperature sensing value, a heartbeat sensing value, and a skin-perspiration sensing value. Referring to Fig. 2, Fig. 2 is a block diagram of the internal elements of the sensing module 20 according to an embodiment of the invention. In Fig. 2, the sensing module 20 comprises a pupil sensor 22, a temperature sensor 24, a heartbeat sensor 26, and a skin-perspiration sensor 28. The pupil sensor 22 senses the user's pupil size, the temperature sensor 24 senses the user's body temperature, the heartbeat sensor 26 senses the user's heart rate and beat count, and the skin-perspiration sensor 28 senses the degree of the user's skin perspiration.
In this embodiment, the sensing module 20 uses multiple sensors to sense the user's physiological-information sensing signal at the moment of shooting. The physiological-information sensing signal is sent to the emotion analysis module 34 of the processing device 30, and the emotion analysis module 34 determines an emotion attribute according to the physiological-information sensing signal and causes the emotion label generation module 36 to generate an emotion label. For example, when the skin-perspiration sensor 28 senses a large amount of sweat where the photographer contacts the shooting device, and the pupil sensor 22 calculates that the pupils of the person in the audio-visual frame are relatively large, the system can judge that both the photographer and the person being photographed are in a relatively nervous or excited emotional state at the scene, and generate an emotion label marking the corresponding audio/video segment as nervous or excited. In another embodiment, the audio-visual management system 100 can employ the sensing module 20 and the facial expression recognition module 32 at the same time, using both the detected physiological-information sensing signal and the facial expression to judge the user's emotion at the moment of capture more accurately.
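The sweat-plus-pupil judgment above amounts to combining two sensor readings into one emotion attribute. A minimal sketch under assumed units and thresholds (none of which come from the patent):

```python
def fuse_physiology(sweat_level, pupil_diameter_mm,
                    sweat_threshold=0.6, pupil_threshold=5.0):
    """Combine the skin-perspiration and pupil readings from the example
    above into a single emotion attribute. Units and threshold values
    are illustrative assumptions, not taken from the patent."""
    if sweat_level > sweat_threshold and pupil_diameter_mm > pupil_threshold:
        return "nervous_or_excited"
    return "calm"
```

The temperature and heartbeat sensing values could be folded in the same way, with each additional reading tightening or relaxing the inferred attribute.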
On the other hand, the audio/video acquisition module 10, sensing module 20, processing device 30, storage unit 40, and user interface 50 described above may all be included in a handheld mobile device.
Next, please refer to Fig. 1 through Fig. 3. Fig. 3 is a flow chart of an audio-visual management method 300 according to an embodiment of the invention. For convenience of explanation, the operation of the audio-visual management system 100 shown in Fig. 1 is described together with the audio-visual management method 300.
In step S301, the audio/video acquisition module 10 captures an audio/video file. The audio/video file can be a photo, a film, or another multimedia video file. For example, the user captures the facial image of a child through the audio/video acquisition module 10.
In step S303, when the audio/video file is captured, the processing device 30 generates an emotion label corresponding to the audio/video file. For example, the processing device 30 can generate the emotion label for the file from a facial expression identified by the facial expression recognition module 32 or from a physiological-information sensing signal detected by the sensing module 20. In another embodiment, the processing device 30 can use the facial expression and the physiological-information sensing signal at the same time to generate an emotion label corresponding to the audio/video file. In addition, the emotion label can be recorded by adding an emotion-label field to the file-information fields of the audio/video file (for example, alongside shooting time, place, and file size), or by generating a separate label file and attaching that label file to the audio/video file.
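The first recording option, adding an emotion-label field to the file's existing information fields, can be sketched as a record type. The field names and sample values here are illustrative assumptions, not the patent's actual metadata layout:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AVFileInfo:
    """File-information record for an audio/video file, extended with
    the emotion-label field described in the text. Field names are
    illustrative assumptions."""
    shooting_time: str
    place: str
    file_size: int
    emotion_label: Optional[str] = None  # the added field

# An emotion label written alongside the existing metadata fields.
info = AVFileInfo("2015-08-07 10:00", "park", 1_048_576)
info.emotion_label = "happy"
```

The alternative, a separate label file attached to the audio/video file, would store the same attribute externally and reference the file by name.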
On the other hand, the processing device 30 is not limited to generating an emotion label immediately; for example, the processing device 30 can generate an emotion label corresponding to the audio/video file after the file has been captured or recorded.
In one embodiment, after the audio/video acquisition module 10 obtains the audio/video file and/or the sensing module 20 receives the physiological-information sensing signal, the processing device 30 can generate, on a handheld mobile device, the emotion label corresponding to the audio/video file according to the physiological-information sensing signal, and store the emotion label in the database 42 of the handheld mobile device.
In another embodiment, referring to Fig. 4, Fig. 4 is a block diagram of an audio-visual management system 400 according to an embodiment of the invention. Fig. 4 differs from Fig. 1 in that it also comprises a cloud system 70, where the cloud system 70 is coupled to the processing device 30, the audio/video acquisition module 10, and the sensing module 20 by a wired or wireless connection, and the cloud system 70 comprises a server (not illustrated). In one embodiment, the processing device 30, the audio/video acquisition module 10, and the sensing module 20 each contain a transmission module and can transmit signals by wired or wireless means.
In the present embodiment, the cloud system 70 has the same functions as the processing device 30. For example, after the audio/video acquisition module 10 obtains the audio/video file and/or the sensing module 20 receives the physiological-information sensing signal, the audio/video acquisition module 10 and the sensing module 20 each send the audio/video file and/or the physiological-information sensing signal directly to the server. After the transmission is complete, the server directly generates the emotion label corresponding to the audio/video file according to the facial expression in the audio/video file and/or the physiological-information sensing signal, and stores the emotion label on the server.
In this way, after the audio/video file has been captured, the emotion label corresponding to the file can be generated directly in the cloud system 70 according to the file's facial expressions and/or the physiological-information sensing signal. When the processing device 30 needs the emotion label, the server returns the emotion label to the processing device 30 for subsequent processing. In this embodiment, performing the computation in the cloud system 70 on the transmitted audio/video file and/or physiological-information sensing signal can lower the computational burden on the processing device 30 of the handheld mobile device.
Additionally, in some embodiments, the processing device 30 can follow changes in the emotion of the persons in the audio/video content and generate multiple emotion labels corresponding to the emotions at each time point. Embodiments in which at least one emotion label is generated for at least one audio/video file are described below; however, those skilled in the art should understand that, without departing from the spirit of the invention, the audio-visual management system 100 and the audio-visual management method 300 of the present invention are not limited to the following embodiments.
Referring to Fig. 5, Fig. 5 is a schematic diagram of the user interface 50 of the audio-visual management system 100 according to an embodiment of the invention. In Fig. 5, the audio/video file is an audio/video file IM1 of 20 seconds in length. At the 5th second, the facial expression recognition module 32 judges that the upward angle of the mouth corners of the person being photographed exceeds an angle threshold, and the heartbeat sensor 26 judges that the photographer's heart rate exceeds a heartbeat threshold; the emotion analysis module 34 then analyzes the emotion of the person in the audio/video file IM1 as positive and infers a happy emotion, causing the emotion label generation module 36 to mark an emotion label LA at the 5-second position of the file's time axis TL (the label LA can be indicated by a smiley-face symbol, for example). At the 10th second, the facial expression recognition module 32 judges that the mouth corners of the person being photographed turn downward, and the temperature sensor 24 judges that the photographer's body temperature is below a body-temperature threshold; the emotion analysis module 34 then analyzes the emotion of the person in the audio/video file IM1 as negative and infers a sad emotion, causing the emotion label generation module 36 to mark an emotion label LB at the 10-second position of the time axis TL (the label LB can be indicated by a crying-face symbol, for example). Then, at the 17th second, if the processing device 30 again judges that the emotion of the person in the audio/video file IM1 is positive, inferring a happy emotion, an emotion label LC is marked at the 17-second position of the time axis TL.
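The Fig. 5 example reduces to a list of (time, attribute) pairs attached to the clip's time axis. A minimal sketch, where the data layout is an illustrative assumption:

```python
# Emotion labels keyed to time points on the 20-second clip of Fig. 5
# (LA at 5 s, LB at 10 s, LC at 17 s).
timeline_labels = []  # (second, attribute) pairs along the time axis TL

def mark_label(timeline, second, attribute):
    """Record one emotion label at the given position on the time axis."""
    timeline.append((second, attribute))

mark_label(timeline_labels, 5, "happy")   # LA: smiley-face symbol
mark_label(timeline_labels, 10, "sad")    # LB: crying-face symbol
mark_label(timeline_labels, 17, "happy")  # LC
```

A timeline in this form is what the later embodiments consume when selecting fragments or applying per-segment effects.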
Accordingly, at least one emotion label can be marked according to the user's emotion at each time point of shooting, and follow-up applications can be carried out using the emotion labels.
In one embodiment, the processing device 30 adds audio-visual effects to the audio/video file according to the emotion attribute recorded in the emotion label. The audio-visual effects include at least one of an audio file, a text file, or a picture file.
For example, in Fig. 5, the processing device 30 adds colorful border effects and brisk music to the film passages in the audio/video file IM1 corresponding to the labels LA and LC representing happy emotions (i.e., at the 5th and 17th seconds), and uses the output unit 38 to output the playful footage after the effects are applied. On the other hand, the processing device 30 presents the film passage corresponding to the emotion label LB representing sadness (i.e., at the 10th second) in grayscale, paired with sad music, and uses the output unit 38 to output the footage after the effects are applied, so as to present the user's emotion at the moment of shooting.
In another embodiment, after the processing device 30 generates emotion labels LA, LB, and LC corresponding to multiple fragments of the audio/video file IM1, it analyzes the emotion changes across the labels LA, LB, and LC, selects from the fragments of the file at least one fragment whose corresponding emotion change satisfies a predetermined condition, or selects the fragments whose emotion labels share the same attribute, and clips them into a selected-works file. For example, the film passages corresponding to labels LA and LC, which both represent happiness, can be selected to produce a selected-works file of the audio/video file IM1. As another example, when the emotion change across the time points of labels LA and LB in the audio/video file IM1 matches the predetermined condition of changing from happy to sad, the passages at labels LA and LB are clipped into the selected-works file of the audio/video file IM1.
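The happy-to-sad "predetermined condition" above is a pattern over consecutive labels. A simplified sketch of that selection step, where the function name and pair-based matching are illustrative assumptions:

```python
def clip_by_change(labels, change=("happy", "sad")):
    """Return (start, end) second pairs for adjacent emotion labels
    whose attributes move through `change` -- the 'predetermined
    condition'. A simplified sketch of selected-works clipping."""
    picks = []
    for (t1, e1), (t2, e2) in zip(labels, labels[1:]):
        if (e1, e2) == change:
            picks.append((t1, t2))
    return picks
```

With the Fig. 5 labels `[(5, "happy"), (10, "sad"), (17, "happy")]`, the happy-to-sad condition selects the passage from second 5 to second 10 for the selected-works file.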
Next, referring to Fig. 6, Fig. 6 is a schematic diagram of the user interface 50 of the audio-visual management system 100 according to an embodiment of the invention. In one embodiment, the audio/video file is an audio/video file IM2 of 30 seconds in length. In Fig. 6, the processing device 30 judges the emotion of the photographer or the person being photographed in each fragment of the audio/video file IM2 and marks the different emotion fragments with emotion labels in different colors. According to the emotion labels TR, TG, and TB, the processing device 30 adds at least one color mark or at least one label symbol to the time axis of the audio/video file IM2 and marks it on the time axis TL.
For example, the processing device 30 judges that from the 0th to the 7th second, the 14th to the 19th second, and the 27th to the 30th second of the audio/video file IM2, the emotion of the photographer or the person being photographed is a happy emotion attribute, shown on the time axis TL as the emotion label TR in a red line segment. In addition, the processing device 30 judges that from the 21st to the 27th second, the emotion of the photographer or the person being photographed is a sad emotion attribute, shown on the time axis TL as the emotion label TB in a blue line segment. When the processing device 30 judges that from the 7th to the 14th second and the 19th to the 21st second the photographer or the person being photographed shows no particular emotional reaction, this is shown on the time axis TL as the emotion label TG in a green line segment.
In this way, the processing device 30 can judge the content of the audio/video file IM2, generate multiple emotion labels TR, TG, and TB corresponding to the emotions of the photographer or the person being photographed at the moment of shooting, and further add different effects to the fragments corresponding to the labels TR, TG, and TB. For example, the fragments corresponding to the label TR representing happy (or positive) emotion can be given colorful captions paired with brisk music, while the fragments corresponding to the label TB representing sad (or negative) emotion can be given a nostalgic filter effect and sad music. The audio/video file IM2 can thereby apply multiple effects according to the emotion labels TR, TG, and TB at each time point, bringing the user a more vivid visual experience after the effects are applied.
In one embodiment, the user can click a menu button in the user interface 50 to prompt the processing device 30 to clip the fragments whose emotion labels share the same emotion attribute into one selected fragment; for example, clipping all the fragments in the audio/video file IM2 corresponding to the emotion label TR (the 0th to the 7th second, the 14th to the 19th second, and the 27th to the 30th second) into a short film, making that short film the selected fragment of the audio/video file IM2.
Next, referring to Fig. 7, Fig. 7 is a schematic diagram of the user interface 50 of the audio-visual management system 100 according to an embodiment of the invention. In this embodiment, the user interface 50 has a document display area RA, image folders FA and FB, and a film folder FC. The document display area RA automatically plays photos or films in real time according to a default or random playback order. The image folder FA may be used to store photos whose emotion labels have a happy (or positive) emotion attribute. The image folder FB may be used to store photos whose emotion labels have a sad (or negative) emotion attribute, while the film folder FC stores films. In another embodiment, the film folder FC can further classify films as having a positive emotion attribute or a negative emotion attribute according to information such as the number of emotion labels in each film, the similarity of their emotion attributes, and their durations.
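The FA/FB folder arrangement is, in effect, routing files by the polarity of their emotion labels. A minimal sketch, where the folder names and polarity map are assumptions rather than the patent's actual classification:

```python
def sort_into_folders(files):
    """Route (filename, emotion_attribute) pairs into folders by the
    polarity of their emotion labels, echoing the FA (positive) and
    FB (negative) folders of Fig. 7. The polarity map is illustrative."""
    polarity = {"happy": "positive", "excited": "positive",
                "sad": "negative", "nervous": "negative"}
    folders = {"positive": [], "negative": [], "other": []}
    for name, emotion in files:
        folders[polarity.get(emotion, "other")].append(name)
    return folders
```

Films could be routed the same way, with the folder decision based on aggregate label counts and durations as the text suggests.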
In this way, by applying the audio-visual management method and system described above, the emotion or physiological information of the photographer or the person being photographed at the moment each audio/video file was shot can be obtained, and an emotion label corresponding to the audio/video file can be generated according to the emotion, the physiological information, or both. The shooting situations corresponding to the audio/video files can thereby be managed, classified, edited, or given special effects more flexibly and conveniently.
Although the present invention has been disclosed above by way of embodiments, they are not intended to limit the invention. Anyone skilled in the art may make various modifications and variations without departing from the spirit and scope of the present invention; the scope of protection of the present invention shall therefore be defined by the appended claims.

Claims (20)

1. An audio-visual management method, characterized by comprising:
capturing an audio/video file;
wherein, when capturing the audio/video file, an emotion label corresponding to the audio/video file is generated.
2. The audio-visual management method according to claim 1, characterized in that the step of capturing the audio/video file further comprises:
detecting a physiological-information sensing signal or a facial expression of the audio/video file, to generate the emotion label.
3. The audio-visual management method according to claim 1, characterized in that the emotion label is generated by, when capturing the audio/video file, detecting a physiological-information sensing signal obtained from a sensing module, and determining an emotion attribute according to the physiological-information sensing signal.
4. The audio-visual management method according to claim 2, characterized in that the physiological-information sensing signal includes a pupil sensing value, a temperature sensing value, a heartbeat sensing value, and a skin-perspiration sensing value.
5. The audio-visual management method according to claim 1, characterized in that the method further comprises:
adding at least one audio-visual effect to the audio/video file in correspondence with the emotion label; wherein the audio-visual effect includes at least one of an audio file, a text file, or a picture file.
6. The audio-visual management method according to claim 1, characterized in that the audio/video file comprises a picture file, and the method further comprises:
classifying the picture file, according to the emotion label, into an image folder corresponding to the emotion label.
7. The audio-visual management method according to claim 1, characterized in that the method further comprises:
generating a plurality of emotion labels respectively corresponding to a plurality of fragments of the audio/video file;
analyzing emotion changes of the emotion labels;
selecting, from the fragments of the audio/video file, at least one fragment whose corresponding emotion change satisfies a predetermined condition, and clipping it into a selected-works file.
8. The audio-visual management method according to claim 1, characterized in that the method further comprises:
adding, according to the emotion label, at least one color mark or at least one label symbol to a time axis of the audio/video file;
marking the at least one color mark or the at least one label symbol on the time axis of the audio/video file.
9. audio-visual management method according to claim 2 is it is characterised in that also include:
Transmit this audio/video file or this physiologic information sensing signal to a storage element of a server system, should Storage element comprises a data base, after this audio/video file or the transmission of this physiologic information sensing signal finish, in According to this human face expression and this physiologic information sensing signal of this audio/video file on this server, to produce correspondence This emotion label of this audio/video file, and by this emotion tag memory in this data base of this server system In.
10. The audio-visual management method according to claim 2, characterized by further comprising:
After obtaining the audio/video file or receiving the physiological information sensing signal, generating on a handheld mobile device, according to the physiological information sensing signal, the emotion label corresponding to the audio/video file, and storing the emotion label in a database of the handheld mobile device.
11. An audio-visual management system, characterized by comprising:
An audio/video capture module for capturing an audio/video file; and
A processing device for generating, when the audio/video file is captured, an emotion label corresponding to the audio/video file.
12. The audio-visual management system according to claim 11, characterized by further comprising:
A sensing module for detecting a physiological information sensing signal or a facial expression in the audio/video file, so as to generate the emotion label.
13. The audio-visual management system according to claim 11, characterized in that the emotion label is generated by detecting and obtaining a physiological information sensing signal from a sensing module when the audio/video file is captured, and determining an emotion attribute according to the physiological information sensing signal.
14. The audio-visual management system according to claim 12, characterized in that the physiological information sensing signal comprises a pupil sensing value, a temperature sensing value, a heartbeat sensing value, and a skin perspiration sensing value.
15. The audio-visual management system according to claim 11, characterized in that the processing device is configured to add at least one audio-visual effect to the audio/video file in correspondence with the emotion label; wherein the audio-visual effect comprises at least one of an audio file, a text file, and a picture file.
16. The audio-visual management system according to claim 11, characterized in that the audio/video file comprises a picture file, and the processing device is configured to classify, according to the emotion label, the picture file into an image folder corresponding to the emotion information.
17. The audio-visual management system according to claim 11, characterized in that the audio/video file is a video file, and the processing device is configured to generate a plurality of emotion labels respectively corresponding to a plurality of segments of the audio/video file, analyze an emotion change of the emotion labels, select, from the segments of the audio/video file, at least one segment whose corresponding emotion change meets a predetermined condition, and edit the selected segment into a highlights file.
18. The audio-visual management system according to claim 11, characterized by further comprising a user interface, wherein the audio/video file is a video file, and the processing device is configured to add, according to the emotion label, at least one color mark or at least one label symbol to the audio/video file, such that when the audio/video file is displayed on the user interface, the at least one color mark or the at least one label symbol is displayed on a timeline of the audio/video file.
19. The audio-visual management system according to claim 12, characterized in that:
The audio/video file or the physiological information sensing signal is transmitted to a storage unit of a server system, the storage unit comprising a database; after the transmission of the audio/video file or the physiological information sensing signal is completed, the emotion label corresponding to the audio/video file is generated on the server according to the facial expression of the audio/video file and the physiological information sensing signal, and the emotion label is stored in the database of the server system;
Wherein the audio/video capture module, the facial expression recognition module, the physiological information sensing module, the emotion analysis module, and the emotion label generation module are located in the server system.
20. The audio-visual management system according to claim 12, characterized in that:
After the audio/video file is obtained or the physiological information sensing signal is received, the emotion label corresponding to the audio/video file is generated on a handheld mobile device according to the physiological information sensing signal, and the emotion label is stored in a database of the handheld mobile device;
Wherein the audio/video capture module, the facial expression recognition module, the sensing module, the emotion analysis module, and the emotion label generation module are located in the handheld mobile device.
CN201510482476.8A 2015-08-10 2015-08-10 Video-audio management method and video-audio management system Withdrawn CN106454060A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201510482476.8A CN106454060A (en) 2015-08-10 2015-08-10 Video-audio management method and video-audio management system


Publications (1)

Publication Number Publication Date
CN106454060A true CN106454060A (en) 2017-02-22

Family

ID=58092207

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201510482476.8A Withdrawn CN106454060A (en) 2015-08-10 2015-08-10 Video-audio management method and video-audio management system

Country Status (1)

Country Link
CN (1) CN106454060A (en)


Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1602620A (en) * 2001-12-11 2005-03-30 皇家飞利浦电子股份有限公司 Mood based virtual photo album
CN101853259A (en) * 2009-03-31 2010-10-06 国际商业机器公司 Methods and device for adding and processing label with emotional data
CN103716542A (en) * 2013-12-26 2014-04-09 深圳市金立通信设备有限公司 Photographing method, photographing device and terminal
CN103826160A (en) * 2014-01-09 2014-05-28 广州三星通信技术研究有限公司 Method and device for obtaining video information, and method and device for playing video


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108519811A (en) * 2018-03-13 2018-09-11 广东欧珀移动通信有限公司 Screenshot method and Related product
CN108519811B (en) * 2018-03-13 2021-04-09 Oppo广东移动通信有限公司 Screenshot method and related product
CN109257649A (en) * 2018-11-28 2019-01-22 维沃移动通信有限公司 A kind of multimedia file producting method and terminal device
CN109257649B (en) * 2018-11-28 2021-12-24 维沃移动通信有限公司 Multimedia file generation method and terminal equipment

Similar Documents

Publication Publication Date Title
TWI597980B (en) Video menagement method and system thereof
KR102091848B1 (en) Method and apparatus for providing emotion information of user in an electronic device
US20100086204A1 (en) System and method for capturing an emotional characteristic of a user
US8866931B2 (en) Apparatus and method for image recognition of facial areas in photographic images from a digital camera
US9369662B2 (en) Smart gallery and automatic music video creation from a set of photos
CN103945130B (en) The image acquisition method of electronic equipment and electronic equipment
CN101510934B (en) Digital plate frame and method for displaying photo
CN108229369A (en) Image capturing method, device, storage medium and electronic equipment
CN110198412A (en) A kind of video recording method and electronic equipment
US20120169895A1 (en) Method and apparatus for capturing facial expressions
CN103685940A (en) Method for recognizing shot photos by facial expressions
US8331691B2 (en) Image data processing apparatus and image data processing method
US8760551B2 (en) Systems and methods for image capturing based on user interest
US8009204B2 (en) Image capturing apparatus, image capturing method, image processing apparatus, image processing method and computer-readable medium
JP6474393B2 (en) Music playback method, apparatus and terminal device based on face album
CN105302315A (en) Image processing method and device
TW201602922A (en) Automatic insertion of video into a photo story
WO2014176139A1 (en) Automatic music video creation from a set of photos
CN111625670A (en) Picture grouping method and device
TW201601074A (en) Thumbnail editing
JP2012105205A (en) Key frame extractor, key frame extraction program, key frame extraction method, imaging apparatus, and server device
US20100123804A1 (en) Emotion-based image processing apparatus and image processing method
CN103856708B (en) The method and photographic device of auto-focusing
US11163822B2 (en) Emotional experience metadata on recorded images
CN114520886A (en) Slow-motion video recording method and equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20170222
