WO2010073355A1 - Program data processing device, method, and program - Google Patents
Program data processing device, method, and program
- Publication number
- WO2010073355A1 (PCT/JP2008/073694)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- weight
- data
- scene
- program
- data portion
- Prior art date
Classifications
- H04N5/147—Scene change detection
- H04N5/76—Television signal recording
- H04N21/4325—Content retrieval operation from a local storage medium, e.g. hard-disk, by playing back content from the storage medium
- H04N21/44008—Processing of video elementary streams involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
- H04N21/466—Learning process for intelligent management, e.g. learning user preferences for recommending movies
- H04N21/472—End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
- H04N21/8133—Monomedia components involving additional data, e.g. news, sports, stocks, weather forecasts, specifically related to the content, e.g. biography of the actors in a movie, detailed information about an article seen in a video program
- H04N5/783—Adaptations for reproducing at a rate different from the recording rate

All of the above fall under H04N—Pictorial communication, e.g. television (H04—Electric communication technique; H—Electricity).
Definitions
- The present invention relates to program data processing technology and reproduction technology.
- Conventional playback devices offer a highlight reproduction function that extracts only the scenes (climax parts) estimated to be of interest to the user, and a stretch reproduction function that adjusts the reproduction speed.
- The highlight playback function extracts highlight scenes from a video file and plays back only the scenes with a high degree of highlight.
- In other words, this function mechanically creates a digest version of the original program.
- Stretch playback is a function that lets the user specify a magnification of the playback speed, such as x1.0 -> x1.2 -> x1.5 -> x2.0.
- The playback device adjusts the viewing time according to the specified magnification. Audio playback is also possible when the magnification is within a predetermined limit.
- Although the conventional technology can complete viewing by a desired time, the scenes selected by highlight playback sometimes do not match the scenes the user really wants to see, so a desired scene may fail to be extracted and be "overlooked". Further, in stretch playback, because playback must finish by the target time, playback may be sped up to the point where the recorded content can no longer be adequately understood. In either case, the conventional viewing technology is not easy to use. The same problems can occur in a program with only video and no audio.
- An object of the disclosed technology is to make it possible to adjust the playback time of program data stored in a storage medium while increasing the likelihood that the portions of a program estimated to be desired by the user are presented at a reasonable playback speed.
- One aspect of the disclosed technology can be exemplified as a program data processing device having a reading unit, a feature extraction unit, a weight acquisition unit, and a weighting unit.
- The reading unit reads a data portion included in the program data from a file storing the program data.
- The feature extraction unit extracts feature information for distinguishing the reproduction information reproduced from a data portion from the reproduction information reproduced from other data portions.
- The weight acquisition unit acquires the weight set for the extracted feature information from a weight table storage unit in which a weight is set for each piece of feature information of the program data.
- The weighting unit assigns the acquired weight to the data portion from which the corresponding feature was extracted.
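In code, these four units might fit together as in the following minimal sketch. All names, the keyword form of the feature information, and the highest-match lookup rule are our own illustrative assumptions, not taken from the patent:

```python
from dataclasses import dataclass, field

@dataclass
class Scene:                                    # one "data portion"
    start_frame: int
    end_frame: int
    keywords: list[str] = field(default_factory=list)  # feature information
    weight: float = 1.0

def acquire_weight(keywords: list[str], weight_table: dict[str, float]) -> float:
    # Weight acquisition unit: look up the extracted features in the
    # weight table; here the highest matching weight wins (an assumption).
    matched = [weight_table[k] for k in keywords if k in weight_table]
    return max(matched, default=1.0)

def assign_weights(scenes: list[Scene], weight_table: dict[str, float]) -> None:
    # Weighting unit: give each data portion the weight acquired for its
    # extracted feature information.
    for scene in scenes:
        scene.weight = acquire_weight(scene.keywords, weight_table)

# Usage with a toy weight table (cf. FIG. 3):
scenes = [Scene(0, 300, ["goal"]), Scene(301, 900, ["commercial"])]
assign_weights(scenes, {"goal": 0.9, "commercial": 0.1})
```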
- The viewing device divides the video data in a video file into data for a plurality of scenes (corresponding to data portions) and assigns a weight to each scene.
- A scene is a concept that delimits reproduction information, such as the video images, sound, voice, and story, reproduced from a video file.
- Scene data is data for reproducing a scene delimited from other scenes based on the characteristics of reproduction information such as the video images, sound, voice, and story.
- Scenes can be delimited by differences in viewing effects such as the video images, sound, voice, and story, but they can also be delimited simply by time designation: for example, scene 1 runs from the start to minute N1, and scene 2 from minute N1 to minute N2. As a concept equivalent to time designation, scenes can also be delimited by frame designation.
- For example, scene 1 is the segment from frame 0 to frame N1,
- and scene 2 is the segment from frame N1 to frame N2.
- A scene can also be delimited according to the configuration information constituting a program.
- The configuration information of a variety program includes, for example, a guest corner (from the start to minute 15), commercial 1 (minutes 15 to 16), a gourmet corner (minutes 16 to 30), commercial 2 (minutes 30 to 31), and a present corner (minutes 31 to 40).
- Such configuration information can be acquired, for example, from an electronic program guide.
- The playback speed is changed for each scene according to its weight: scenes assumed to match the user's preferences are played back at normal speed, and scenes assumed not to match are played back faster than normal.
- As a result, the video file can be viewed within a predetermined playback time (for example, within a time specified by the user), and the likelihood that the user can reliably watch the scenes of interest increases.
- The video file is created, for example, by recording a television broadcast program.
- The video file is not limited to a recorded file, and may be data obtained by various means, for example a video file stored on and provided via a storage medium.
- The weights are set from the operation history for programs the user has viewed in the past. For example, a previously viewed program is divided into a plurality of scenes, the features of each scene are extracted, and the user operations performed when each scene was played back are collected. If the operation history shows fast-forwarding, the viewing device judges that the user is not interested in the scene, or that the user's preferences and the scene do not match. As a result, the viewing device lowers the weight for the features of that scene.
- Conversely, when the user returns from fast-forwarding to the normal playback speed, the viewing device judges that the scene at that time matches the user's preferences, and raises the weight for the features of that scene.
- Here, the normal playback speed refers to 1x playback without so-called fast-forwarding.
- Scene features are determined by extracting, for example, the sound level in each scene, changes in the sound level, the characters displayed on the screen in each scene and whether they change, the words contained in the audio of each scene, the words given to the program section to which each scene belongs, the degree of screen change, and information about the program shown in the electronic program guide.
- The characters displayed on the screen are, for example, subtitles or the score of a sports program.
- A change in characters means, for example, that the score of a sports program has moved.
- The information about the program shown in the electronic program guide is, for example, the name, performers, and synopsis given to each section when a variety program is composed of a plurality of sections such as a guest corner, a gourmet corner, and a present corner.
- The sections constituting such a program and their broadcast times can be obtained from the electronic program guide data.
- The electronic program guide can also be obtained from a website on the Internet.
- The viewing device stores the relationship between scene features and weights in memory in the form of a weight table.
- The viewing device divides a video file stored on a medium such as a hard disk into a plurality of scenes, searches the weight table based on the features of each scene, reads the weights, and sets the read weight for each scene.
- The playback device then accepts a playback time specified by the user. If the specified playback time is shorter than the playback time of the original video file, the playback speed of each scene is adjusted according to the set weights so that the playback time of the entire video file falls within the time specified by the user.
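One way this adjustment could be realized is sketched below. The inverse-weight speed rule and the uniform rescaling step are our assumptions; the patent only requires that the total duration fit the specified time:

```python
def fit_speeds(durations, weights, target_time, max_speed=4.0):
    # durations: seconds per scene; weights: 0..1 per scene.
    # Higher weight -> closer to 1x; lower weight -> faster, capped at max_speed.
    speeds = [min(max_speed, 1.0 / max(w, 1.0 / max_speed)) for w in weights]
    total = sum(d / s for d, s in zip(durations, speeds))
    if total > target_time:                  # still too long: speed everything up
        factor = total / target_time         # (capped scenes cannot go faster,
        speeds = [min(max_speed, s * factor) for s in speeds]  # so the fit is approximate)
    return speeds

# Example: fit a 60-minute recording into 30 minutes.
print(fit_speeds([600, 1200, 1800], [0.9, 0.2, 0.7], 1800))
```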
- FIG. 1 illustrates the concept of a program divided into scenes.
- FIG. 1 assumes a broadcast of a sports game.
- The program is divided into scenes such as the player entry, a commercial, mid-game play, a scoring scene, further mid-game play, another commercial, and the post-game interview.
- A scoring scene may be inferred when the on-screen number indicating the score changes.
- When a word such as "goal", "safe", or "home run" is detected in the audio, the scenes just before and after the time of detection may be inferred to be a scoring scene.
- In this example, a low weight such as 0.1 or 0.2 is set for commercials, while a high weight of 0.9 is set for scoring scenes. A weight of 0.6 or 0.7 is set for mid-game play (other than scoring scenes), and a weight lower than that for mid-game play is set for the player entry, the post-game interview, and the like.
- For example, a scene with a weight of 0.2 or less is cut and not played back.
- A scene whose weight is 0.9 or more is played back at 1.0x, that is, at the normal playback speed.
- The lowest-weighted scenes that are still played back are reproduced at quadruple speed.
- The scenes in between are each played back at an intermediate rate between 1.0x and 4x, for example 1.2x or 1.5x.
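The weight-to-speed mapping just described might look like the following sketch; the thresholds come from the example above, while the linear interpolation between them is an assumption:

```python
def speed_for_weight(w: float):
    if w <= 0.2:
        return None                          # cut: the scene is not played
    if w >= 0.9:
        return 1.0                           # normal playback speed
    # Linear interpolation between 4x (just above the cut) and 1x.
    return 4.0 - (w - 0.2) * (4.0 - 1.0) / (0.9 - 0.2)

print(speed_for_weight(0.55))                # -> 2.5, an intermediate rate
```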
- FIG. 2 is an example of a block diagram showing the hardware and functions of the viewing device 1.
- The viewing device 1 includes a broadcast receiving device 19 that receives a television program from a broadcast wave, a hard disk drive 18 that stores the received television program in the form of a video file,
- a decoder 12 that decodes the video file data,
- a scene extraction unit 13 that divides the decoded program into a plurality of scenes,
- a highlight extraction unit 14 that extracts, from the divided scenes, highlight scenes estimated to be of particular interest to the user,
- a highlight playback unit 17 that performs highlight playback of the video file data on the hard disk drive 18,
- a television application program 15 (hereinafter simply called the application 15) that controls the broadcast receiving device 19, the hard disk drive 18, the highlight playback unit 17, and the like,
- and a control unit 11 that executes the application 15 to realize the functions of the viewing device 1.
- The viewing device 1 is operated with a remote controller (hereinafter, the remote controller 20).
- The viewing device 1 also has input devices (not shown), for example a pointing device such as a mouse, and a keyboard.
- Examples of the viewing device 1 include a personal computer (PC), a television receiver with built-in information processing functions, a portable information terminal, a hard disk recorder, and a set-top box for television broadcasting.
- The monitor is, for example, a liquid crystal display, an electroluminescence panel, a plasma display, or a CRT (Cathode Ray Tube).
- The removable storage medium drive device 22 can be externally connected to the viewing device 1 or built into the housing of the viewing device 1.
- The removable storage medium is, for example, a CD (Compact Disc), a DVD (Digital Versatile Disc), a Blu-ray disc, or a flash memory card.
- The removable storage medium drive device 22 reads video data from a medium storing a video file.
- The removable storage medium drive device 22 also reads programs from a medium and moves them to the hard disk when the application 15 or the like is installed.
- The control unit 11 includes, for example, a CPU (Central Processing Unit) and memory; the CPU executes computer programs expanded on the memory in a format executable by the CPU.
- One such computer program is the application 15. Before being expanded in memory, the application 15 is stored on the hard disk drive 18 or in a ROM (Read Only Memory) (not shown).
- The control unit 11 receives user operations through the remote controller 20 and controls recording reservation processing, reception processing according to the recording reservation, recording processing, and the like.
- The control unit 11 also receives user operations through the remote controller 20 and plays back recorded television programs. At playback time, the control unit 11 accepts from the user a designation of the playback time or the playback end time. When the playback time, or the time from the current time to the playback end time, is shorter than the recording time of the recorded program, the highlight playback according to the present embodiment is executed.
- The broadcast receiving device 19 demodulates the broadcast wave received from the antenna and acquires a television program signal.
- The broadcast receiving device 19 is, for example, a TV tuner that receives analog broadcasts, an HDTV (High Definition Television) tuner that receives digital broadcasts, or a one-segment broadcast tuner that uses one segment of an HDTV channel.
- Since the configuration of the broadcast receiving device 19 is widely known, its detailed description is omitted.
- The acquired television program signal is temporarily stored on the hard disk drive 18.
- The decoder 12 decodes the television program signal stored on the hard disk drive 18 and creates video data.
- The video data is divided by the scene extraction unit 13 into scenes composed of a plurality of frames, and scene features are extracted from each scene.
- The scene features are stored in the memory of the control unit 11 as a scene feature table, together with information specifying each scene.
- The highlight extraction unit 14 searches the weight table based on the scene features and assigns a weight to each scene.
- The weight is stored in the scene feature table.
- The scene extraction unit 13 and the highlight extraction unit 14 are realized as computer programs executed by the control unit 11.
- The video data and the scene feature table created by the decoder 12 are stored on the hard disk drive 18. If the video data demodulated by the broadcast receiving device 19 is not encrypted, the decoding process by the decoder 12 is omitted. The video data subjected to the above processing may be analog data or digital data.
- The broadcast receiving device 19 may acquire the analog signal or digital data of a television program from a wired network instead of receiving a broadcast wave from an antenna.
- The playback speed determination unit 16 is one of the computer programs executed by the control unit 11. When playing back video data on the hard disk, the playback speed determination unit 16 determines the playback speed based on the scene feature table created for that video data.
- The highlight playback unit 17 plays back each scene at the playback speed specified by the playback speed determination unit 16.
- The highlight playback unit 17 may be a computer program executed by the CPU of the control unit 11, or may be configured as a hardware circuit. In either case, the highlight playback unit 17 determines the scene to which each frame belongs from the frame count from the start of the program, and adjusts the number of output frames per unit time within that scene.
- The user uses the remote controller 20 to make a recording reservation for, say, a soccer program (arrows A1-A3).
- The scene extraction unit 13 and the highlight extraction unit 14 are activated under the control of the control unit 11, and highlight scene extraction and scene weight calculation are executed (arrow A4).
- In the viewing device 1, whether to give a scene a high weight is determined not simply from the scene's features, but from the operation history showing how the user behaved in the past when scenes containing those features were played back.
- To view a recording, the user activates the application 15 using the remote controller 20 (arrow A1). The control unit 11 executing the application 15 then displays a list of recorded programs on the monitor screen. The user selects the recorded soccer broadcast and further designates the time by which playback should be completed. The application 15 accepts these operations and executes the recorded program playback process. At this time, the control unit 11 runs the playback speed determination unit 16 (A11), which calculates the playback speed for each weight so that playback fits within the designated time. The control unit 11 further runs the highlight playback unit 17, which performs highlight playback at those speeds (arrows A11 to A13).
- The memory of the control unit 11 that stores the weight table corresponds to the weight table storage unit.
- The weight table in FIG. 3 is an example of weights given to keywords extracted from the audio in each scene of a soccer game.
- The viewing device 1 divides a video file recording a soccer game into scenes each containing one or more frames, detects the user's operation history against the keywords extracted from each scene, and determines the weights based on the operation history when the user views scenes containing each keyword.
- For example, scenes in which the word "goal" is uttered are often viewed at 1.0x speed, so much history of that kind accumulates.
- Conversely, scenes that are often fast-forwarded at 4.0x speed accumulate much history of that kind.
- A weight may be set in association with the detected user operations (or the playback speed at viewing time, etc.) corresponding to the keywords characterizing each scene. For example, after an initial value of 1 is set for each keyword, viewing a scene at N-times speed multiplies its current weight by 1/N. Relative to the initial value of 1, the faster and the more often fast-forwarding occurs, the smaller the weight becomes. Scenes of interest and scenes of no interest can thus be distinguished per user from the viewing history, and an appropriate weight set for each scene.
- Alternatively, points may be assigned according to the playback speed (for example, 0 points at 2x or more, 1 point between 1x and 2x, and 3 points at 1x), an additional point added each time such an operation is detected, and the score totaled per keyword. The scores may then be normalized so that the keyword weights are distributed in the range 0 to 1.
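Both update rules described above can be sketched as follows; the thresholds in the point-scoring rule are reconstructed from the garbled example and should be read as assumptions:

```python
weights: dict[str, float] = {}               # keyword -> current weight

def update_on_viewing(keyword: str, speed: float) -> None:
    # Multiplicative rule: start at 1, multiply by 1/N after viewing a
    # scene containing this keyword at N-times speed.
    weights[keyword] = weights.get(keyword, 1.0) * (1.0 / speed)

def points_for_speed(speed: float) -> int:
    # Point-scoring rule (reconstructed thresholds, see the text above).
    if speed >= 2.0:
        return 0
    if speed > 1.0:
        return 1
    return 3                                 # normal-speed viewing

def normalize(scores: dict[str, float]) -> dict[str, float]:
    # Spread the totalled keyword scores over the range 0 to 1.
    lo, hi = min(scores.values()), max(scores.values())
    span = (hi - lo) or 1.0                  # guard against identical scores
    return {k: (v - lo) / span for k, v in scores.items()}
```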
- Weights as shown in FIG. 3 need not be collected only for soccer games; they may be collected across all programs. When the number of samples is small, a common weight table may be used regardless of program category. However, once a large number of user operation histories have accumulated and many relationships between keywords and user operations (or playback speeds at viewing time, etc.) can be collected, a weight table as in FIG. 3 should be created for each program category. Because the extractable keywords differ by category, per-category weight tables can be expected to give more accurate weights. For example, soccer terms and baseball terms differ, so for a soccer game, fine-grained weights can be set by weighting mainly soccer terms and terms commonly used in such programs.
- FIG. 4 shows an example in which scenes are classified according to program configuration information extracted from the electronic program guide, and weights are assigned to the scenes.
- The program sections carry subheadings (hereinafter referred to as scene names).
- The viewing device 1 may divide the program into scenes based on an electronic program guide acquired in advance and assign a scene name to each scene.
- Each scene can be identified by the elapsed time or the number of frames from the start of the program. A weight is then set for each scene based on the user operations in that scene (or the playback speed at viewing time, etc.).
- The procedure for setting the weights is the same as in the case of FIG. 3. For example, when the guest corner is viewed at 1x speed, its current weight is doubled; when the gourmet corner is viewed at 3x speed, its current weight is multiplied by 1/3. Alternatively, a method may be used in which points are assigned according to the user operations (or the playback speed at viewing time, etc.) and the scores are totaled.
- FIG. 5 is an example of a scene feature table attached to video data recording a soccer game, based on the weight table of FIG. 3.
- The memory of the control unit 11 that stores the scene feature table corresponds to the reproduction data weight storage unit. That is, when a user makes a recording reservation, the recording is executed, and a video file is created, a scene feature table as in FIG. 5 is created for each video file.
- The scene feature table contains the number of frames, the scene features (keywords), and the weights.
- Each scene is identified by a range of frame numbers; for example, one scene runs from the start to frame 300, and the next from frame 301 to frame N1 (an integer greater than or equal to 301).
- A keyword characterizing each scene is recorded for each scene.
- The features of a scene are not, however, limited to features specified by keywords.
- The viewing device 1 searches the weight table of FIG. 3 based on the keywords indicating the features of each scene, and assigns the weights.
- When the playback time (or playback end time) specified by the user is shorter than the recording time of the recorded program, the viewing device 1 adjusts the playback speed according to the weights: scenes with high weights are played back at normal speed as far as possible, and scenes with low weights are fast-forwarded at high speed.
- Playback of the recorded program is thus controlled to end at the playback time (playback end time) specified by the user, while avoiding as far as possible the user missing the parts of the program of interest.
- FIG. 6 illustrates the processing flow of the viewing process of the viewing device 1.
- This viewing process is realized by the CPU of the control unit 11 executing the application 15.
- The user designates, from the user interface of the viewing device 1, a video file to be played back (hereinafter also referred to as the playback file) and a time by which viewing should be completed (F1, F2).
- The user interface is realized by a display on the monitor screen of the viewing device 1 and operation of the remote controller 20 against that display.
- The viewing device 1 determines whether playback will be completed by the specified time (F3).
- The playback time required for the playback file can be determined from, for example, the number of frames described in the playback file, a playback time described on the medium, or the elapsed recording time recorded in the playback file.
- If playback would not complete in time, the playback file is divided into scenes and a weight is set for each scene (F4).
- The playback method (for example, the playback speed) of each scene is then set according to the weight of each scene so that playback fits within the specified time (F5 to F6).
- A scene with a high degree of highlight, that is, a scene with a high weight, is set to be played back at normal speed.
- Scenes with a moderate highlight level are set to fast-forward playback such as double-speed playback.
- A scene with a low degree of highlight, such as a commercial, is cut (scene removal).
- The viewing device 1 then plays the playback file according to the set playback speeds (F7).
- The playback speed thus varies with the degree of highlight, that is, with the weight. The user may move to the next scene at any time by pressing a "skip" button on the remote controller 20 or the like; similarly, during double-speed playback, the user can switch to normal playback at any time by pressing the "play" button on the remote controller 20 or the like. These operations on the remote controller 20 may also be stored and used as reference information for determining the degree of highlight.
- FIG. 7 illustrates the details of the weighting process (F4 in FIG. 6).
- The playback device 1 reads scene data from the video file (F41).
- The CPU of the control unit 11 that executes this process corresponds to the reading unit.
- The playback device 1 then analyzes the scene data and extracts the scene features (F42).
- In the present embodiment, the features of a scene are determined by the words (keywords) detected in the audio data; that is, the viewing device 1 recognizes the audio data and extracts keywords.
- Speech recognition is performed by matching combinations of consonants and vowels in the audio data against predetermined dictionary data. Since the specific processing of speech recognition is already widely known, its details are omitted. When the category of the program is known, however, the speech recognition dictionary may be switched per category, because the words uttered in, for example, a soccer game are limited to some extent.
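Assuming the speech recognizer itself is an external component that returns a plain transcript, the keyword-extraction step (F42) might reduce to a dictionary lookup like this sketch; the dictionary contents are illustrative:

```python
SOCCER_DICTIONARY = {"goal", "shoot", "free kick", "offside"}

def extract_keywords(transcript: str, dictionary=SOCCER_DICTIONARY) -> list[str]:
    # Return the dictionary words found in the recognized speech of a scene.
    text = transcript.lower()
    return [word for word in dictionary if word in text]

print(extract_keywords("A brilliant goal from a free kick!"))
# e.g. ['goal', 'free kick'] (set iteration order is not guaranteed)
```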
- The extracted scene features, that is, the keywords, are stored in a scene feature table in the format of FIG. 5.
- When a keyword is detected, one scene may be formed by associating the keyword with the frame containing it and a predetermined number of frames before and after it.
- The CPU of the control unit 11 that executes this process corresponds to the feature extraction unit.
- Next, the viewing device 1 refers to the weight table based on the extracted keywords and determines the weight (F43).
- The CPU of the control unit 11 that executes this process corresponds to the weight acquisition unit.
- The weight is then assigned to the scene (F44).
- The CPU of the control unit 11 that executes this process corresponds to the weighting unit.
- The viewing device 1 then determines whether data for a next scene (that is, a next frame) exists (F45). If so, the viewing device 1 returns control to F41; when processing has completed for all scenes, the viewing device 1 ends the scene weighting process.
- In the above example, the scene weighting process is executed within the playback process shown in FIG. 6, but the processing of FIG. 7 may instead be executed in advance, prior to user viewing, after recording is completed or after a medium is mounted in the removable storage medium drive device 22.
- FIG. 8 illustrates the details of the playback process (F7 in FIG. 6).
- The CPU of the control unit 11 that executes this process corresponds to the playback unit.
- The viewing device 1 reads the scene data and the playback speed set for the scene in F6 of FIG. 6 (F71). The viewing device 1 then plays back the scene at the set playback speed (F72). If the scene weight is at or below a predetermined value, the scene data itself may be cut and not played back.
- In this way, unnecessary scenes are cut, and important scenes can be viewed at normal speed.
- Which scenes are cut, which are fast-forwarded, and which are played back at normal speed is determined by the weights given to the scene features. Even when a scene is not cut, scenes that do not match the user's preferences can be fast-forwarded. With such a combination of playback speeds, playback can be completed by the time the user desires, and the chance of the user missing something of interest can be reduced.
- If, for example, a player the user is interested in appears in the post-game interview, pressing the "play" button on the remote controller 20 enables normal playback. When the "skip" button on the remote controller 20 is pressed, processing may move to the next scene. The playback method currently in use may also be displayed at all times so as not to confuse the user; for example, a label such as "Highlight playback" is shown.
- With reference to FIG. 9, the viewing device 1 according to a second embodiment will be described. The present embodiment describes the process by which the viewing device 1 stores the operation history during playback and creates a weight table. The weight table is used for scene weighting at the next playback.
- FIG. 9 illustrates the processing flow of the weight table creation process executed by the viewing device 1.
- The CPU of the control unit 11 that executes this process corresponds to the weight creation unit.
- The viewing device 1 determines whether playback has finished (F100). If playback has not finished, the viewing device 1 collects scene features from the data of the scene currently being played (F101).
- The scene features are, for example, words in the audio data, that is, keywords.
- In addition to words in the audio data, various other data can be used as scene features: the sound level, changes in the sound level, character information displayed on the screen, changes in the character information, words in the program's electronic program guide, the degree of screen change, and so on.
- The collected scene features are stored in a scene feature table.
- The format of the scene feature table is, for example, that of FIG. 5. At this point, an initial value (for example, a weight of 1) is set as the scene weight.
- The viewing device 1 detects user operations, for example from the remote controller 20 (F102, F103) (the remote controller 20 or an input device (not shown) corresponds to the operation detection unit). When an operation is detected, the viewing device 1 determines whether the detected operation is a scene skip (F104). If so, the weight for the features of that scene is decreased (F105); for example, the weight is decreased by 1 count (or multiplied by 1/(2M), where M is the multiple of normal speed of the fastest fast-forward). The viewing device 1 then returns control to F101.
- Otherwise, the playback device 1 determines whether the playback speed was changed (F107). If the detected operation is an increase to N-times speed, the weight for the scene features is decreased (F108); for example, the weight is decreased by 0.5 counts (or multiplied by 1/N). The viewing device 1 then returns control to F101. If the detected operation is a change back to normal speed, the weight for the scene features is increased (F109); for example, the weight is increased by 1 count (or doubled). The viewing device 1 then returns control to F101.
- When playback is finished, the playback device 1 normalizes the weights in the scene feature table to the range 0 to 1 (F110). That is, based on the weights set in the processing of F101 to F109, the weight values are converted to the range from a minimum of 0 to a maximum of 1.
- The conversion may simply apply a linear function to the calculated weights, or a curvilinear function may be used.
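A linear min-max normalization for F110 might look like this sketch:

```python
def normalize_weights(raw: list[float]) -> list[float]:
    lo, hi = min(raw), max(raw)
    if hi == lo:                             # all scenes equal: avoid dividing by zero
        return [1.0 for _ in raw]
    return [(w - lo) / (hi - lo) for w in raw]

print(normalize_weights([2.0, 0.5, 3.5]))    # -> [0.5, 0.0, 1.0]
```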
- In this way, weights can be set for each scene according to the history of user operations against the scene features. The process of playing back a video file according to the set weights is the same as in the first embodiment.
- In the first embodiment, scene features were extracted based on information obtained by processing the video data, such as keywords in the audio data.
- Instead, scenes may be divided based on the structure of the program sections obtainable from the electronic program guide. For each scene, user operations are then detected and the scene weight is set by the same procedure as in FIG. 9. The set weights may be stored in a table with an entry per scene.
- In this way, scenes can be divided according to the electronic program guide.
- The viewing device 1 displays on the monitor screen a reduced image (hereinafter, a chapter image) of a frame constituting the video.
- The chapter image shows the first frame (or a representative frame) of each scene.
- A plurality of chapter images may be displayed for each scene.
- The viewing device 1 may select and display the chapter image for each scene according to a predetermined criterion, for example the degree of screen change, the degree of sound change, or a change in characters.
- FIG. 10 is a diagram illustrating the weight setting operation on the chapter screen.
- The monitor 21 that displays the chapter list in FIG. 10 corresponds to the still image display unit.
- The user selects a chapter and changes the scene weighting; the scene weight is set for the scene to which each chapter image belongs.
- The weight of each scene is stored, together with the features of each scene, in a table similar to the scene feature table of FIG. 5 (hereinafter referred to as the chapter image management table). The user then closes the setting screen.
- FIG. 11 exemplifies the configuration of a chapter image management table that stores the relationship between scenes, the chapter images extracted from them, and the weights set by the user.
- The chapter image management table contains the scenes, the chapter images (frame numbers), and the weights.
- A scene is specified as a range of frame numbers, as in the scene feature table of FIG. 5.
- A chapter image is specified by its frame number.
- In this example, the first frame of each scene is the chapter image.
- However, a plurality of chapter images may be selected from each scene.
- The weight is the weight set by the user.
- FIG. 12 shows an example of the chapter image selection process executed by the viewing device 1.
- The viewing device 1 extracts chapter images from the data of a video file recorded on the hard disk drive 18.
- First, the viewing device 1 determines whether all frames have been processed (F131).
- If not, the viewing device 1 extracts scene features from the next frame group (F132).
- The number of frames in a frame group is set as a system parameter; for example, scene features are extracted from 10 frames at a time.
- The scene features are, for example: whether the sound level is above a predetermined reference value; whether the sound level has risen by a predetermined amount or more; whether a number (a character portion indicating a score) has changed on the screen; whether the image has changed by more than a predetermined amount; and whether a specific keyword (for example, "goal" or "score") is contained in the audio belonging to the frame group.
- The viewing device 1 then determines from the collected scene features whether a new scene should be defined (F133). That is, if any of the following criteria is met, the viewing device 1 determines that a new scene should be defined: the sound level is above the reference value; the sound level has risen by the predetermined amount or more; a number (a character portion indicating a score) has changed on the screen; the image has changed by more than the predetermined amount; or a specific keyword is contained in the audio of the frame group. One of the images in the frame group (for example, the first image) is then stored on the hard disk drive 18 as the chapter image (F134), and an entry is added to the chapter management table that manages the chapter images.
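The F133 decision reduces to a disjunction of the listed criteria. A sketch follows, in which the field names and threshold values are assumptions:

```python
from dataclasses import dataclass

@dataclass
class FrameGroupFeatures:
    sound_level: float
    sound_level_rise: float
    score_digits_changed: bool
    image_change: float
    keywords: set[str]

def should_start_new_scene(f: FrameGroupFeatures,
                           level_ref=0.8, rise_ref=0.3, change_ref=0.5,
                           trigger_words=frozenset({"goal", "score"})) -> bool:
    # Define a new scene when any one criterion holds for the frame group.
    return (f.sound_level >= level_ref
            or f.sound_level_rise >= rise_ref
            or f.score_digits_changed
            or f.image_change >= change_ref
            or bool(f.keywords & trigger_words))
```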
- Finally, the viewing device 1 displays the chapter images selected by the above processing (F136) and accepts weight settings according to user operations (F137).
- FIG. 13 shows an example of processing for detecting the amount of screen change, as one example of scene feature extraction.
- A frame is divided into a plurality of regions, and the amount of screen change is detected between a reference frame and a target frame (the frame being judged as a chapter image candidate).
- The reference image may be a frame a predetermined number of frames before the target image (for example, 1, 2, or 10 frames before). Alternatively, the average image of the frames in a predetermined section may be used as the reference image.
- The reference frame and the target frame are each divided into a plurality of partial areas.
- The difference in a feature value is then calculated between corresponding partial areas.
- The feature value is, for example, the average color in the partial area (for example, the RGB values, that is, the frequencies of red, green, and blue).
- Alternatively, the feature value may be the color distribution, that is, the RGB values of each pixel.
- For the average color, the sum of the changes in the average R, G, and B values is taken as the difference.
- For the color distribution, the sum of the changes in the R, G, and B values of each pixel, added over all pixels in the partial area, is taken as the difference.
- The amount of screen change is the total of the differences over all the partial areas.
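The average-color variant of this measure might be computed as in the following sketch; the grid size and the use of numpy are assumptions:

```python
import numpy as np

def screen_change(reference: np.ndarray, target: np.ndarray,
                  grid: int = 4) -> float:
    # Frames are HxWx3 RGB arrays; split each into grid x grid partial
    # areas, compare per-area average colors, and total the differences.
    h, w, _ = reference.shape
    total = 0.0
    for i in range(grid):
        for j in range(grid):
            ys = slice(i * h // grid, (i + 1) * h // grid)
            xs = slice(j * w // grid, (j + 1) * w // grid)
            ref_avg = reference[ys, xs].mean(axis=(0, 1))    # average R, G, B
            tgt_avg = target[ys, xs].mean(axis=(0, 1))
            total += float(np.abs(ref_avg - tgt_avg).sum())  # per-area difference
    return total
```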
- FIG. 14 shows a processing example of the screen change detection process.
- The viewing device 1 divides the pixels of the reference frame into partial areas (F151).
- The viewing device 1 divides the pixels of the target frame into partial areas (F152).
- The viewing device 1 calculates the feature value difference for each partial area (F153).
- The viewing device 1 totals the feature value differences over all the partial areas (F154).
- The viewing device 1 determines whether the total of F154 exceeds the reference value (F155).
- The reference value is, for example, an empirically determined value that can be set as a system parameter. If the total exceeds the reference value, a new scene is defined (F156): a new entry is added to the chapter image management table shown in FIG. 11 and its first frame is set, the frame in which the screen change was detected is registered as the chapter image, and the last frame of the scene immediately preceding the added entry (the frame just before the frame in which the screen change was detected) is set.
- The viewing device 1 then determines whether a next frame remains (F157). If so, the viewing device 1 returns control to F151; when all frames have been processed, the processing ends.
- Chapter images can be extracted by the above procedure. The same procedure may also be used when chapter images are extracted based on the other characteristics: whether the sound level is at or above a predetermined reference value, whether the sound level has risen by a predetermined amount or more, whether a number (a character portion indicating a score) has changed on the screen, or whether a specific keyword (for example, "goal" or "score") is contained in the audio belonging to the frame group.
- Numbers on the screen may be detected by pattern matching between the screen data and number patterns.
- Keywords may likewise be detected by pattern matching between the screen data and character patterns.
- Pattern matching may be narrowed to a character-size range obtained from empirical values for each program.
- In the embodiments above, scene features were extracted using keywords in the audio.
- However, scene features are not limited to keywords in the audio.
- Scenes can be classified using the sound level, keywords associated with the program, the degree of screen change, and various other scene features.
- The scenes may then be weighted from the user operations made while viewing them.
- FIG. 15 shows an example of a scene feature table in which scene features are extracted based on the sound level, keywords, and the degree of screen change.
- The sound level is the volume of the sound output from the speaker in parallel with the display of the video image on the monitor screen.
- The keywords are not limited to words in the audio; they may be acquired from the program's electronic program guide, or obtained from a telop (on-screen caption).
- The degree of screen change can be acquired, for example, by the processes of FIGS. 13 and 14.
- Weights may be set in the same manner as in the processing of FIG. 9, from operations performed by the user, for example scene skips, fast-forwarding, or a return to normal playback.
- Each scene may be determined based on the above features, and each weight set accordingly.
- The weights may be stored in a scene feature table like that of FIG. 15.
- The playback speed may then be set according to the weights in such a scene feature table, and controlled so that program playback ends within the time specified by the user.
- Program meta-information may be used as a weighting criterion. For example, if the meta-information obtained from the electronic program guide shows that a program is a "news" program, the weighting may be based not on the loudness of each scene but on the portions where news telops appear.
- Here, a program includes a television broadcast program, a radio broadcast program, a movie, music, and the like.
- In the embodiments above, each scene and its weight are associated in a scene feature table as shown in FIG. 5.
- Instead, a weight may be set on the corresponding portion of each scene within the scene data, that is, within each video file. In the playback process, the weight is then read together with the scene data, and the playback speed adjusted according to the weight.
- In this case, a scene feature table is not required.
- The scene weighting process of FIG. 7 and the playback process of FIG. 8 may also be performed in parallel (in real time). In that case, it is not necessary to store the scene weights in association with the scenes.
- Computer-readable recording medium: a program for causing a computer or other machine or device (hereinafter, a computer or the like) to realize any of the above functions can be recorded on a recording medium readable by the computer or the like.
- The function can be provided by having the computer or the like read and execute the program on the recording medium.
- Here, a computer-readable recording medium is a recording medium that stores information such as data and programs by an electrical, magnetic, optical, mechanical, or chemical action, and from which the computer or the like can read.
- Examples of such recording media removable from the computer include a flexible disk, a magneto-optical disk, a CD-ROM, a CD-R/W, a DVD, a Blu-ray disc, a DAT (Digital Audio Tape), an 8 mm tape, and memory cards.
- Recording media fixed to the computer or the like include a hard disk and a ROM (Read Only Memory).
Abstract
Description
11 Control unit
12 Decoder
13 Scene extraction unit
14 Highlight extraction unit
15 Television application (application)
16 Playback speed determination unit
17 Highlight playback unit
18 Hard disk drive
19 Broadcast receiving device
20 Remote controller
21 Monitor
22 Removable storage medium drive device
The viewing device divides the video data in a video file into data for a plurality of scenes (corresponding to data portions) and assigns a weight to each scene. Here, a scene is a concept that delimits reproduction information, such as the video images, sound, voice, and story, reproduced from a video file. Scene data is data for reproducing a scene delimited from other scenes based on the characteristics of reproduction information such as the video images, sound, voice, and story. Scenes can be delimited by differences in viewing effects such as the video images, sound, voice, and story, but can also be delimited simply by time designation; for example, scene 1 runs from the start to minute N1, and scene 2 from minute N1 to minute N2. As a concept equivalent to time designation, scenes can also be delimited by frame designation; for example, scene 1 runs from frame 0 to frame N1, and scene 2 from frame N1 to frame N2. Scenes can also be delimited according to the configuration information constituting the program. The configuration information of a program is, for example, that a certain variety program consists of a guest corner (from the start to minute 15), commercial 1 (minutes 15 to 16), a gourmet corner (minutes 16 to 30), commercial 2 (minutes 30 to 31), and a present corner (minutes 31 to 40). Such configuration information can be acquired, for example, from an electronic program guide.
The weights are set from the operation history for programs the user has viewed in the past. For example, a previously viewed program is divided into a plurality of scenes, the features of each scene are extracted, and the user operations performed when each scene was played back are collected. If the operation history shows fast-forwarding, the viewing device judges that the user is not interested in the scene, or that the user's preferences and the scene do not match, and accordingly lowers the weight for the features of that scene. Conversely, when the user returns from fast-forwarding to the normal playback speed, the viewing device judges that the scene at that time matches the user's preferences and raises the weight for the features of that scene. Here, the normal playback speed means 1x playback without so-called fast-forwarding.
In the first embodiment described above, scene features were extracted based on information obtained by processing the video data, such as keywords in the audio data. Instead of such processing, as shown in FIG. 4, scenes may be divided based on the structure of the program sections obtainable from the electronic program guide. Then, for each scene, user operations are detected and the scene weights are set by the same procedure as in FIG. 9. The set weights may be stored in a table having an entry for each scene.
1. Make a recording reservation for a soccer program.
2. After the recording reservation is complete, open the setting screen.
3. A chapter list for the soccer program is displayed (see FIG. 10). The monitor 21 that displays the chapter list in FIG. 10 corresponds to the still image display unit.
4. Select a chapter and change the scene weighting. The scene weight is set for the scene to which each chapter image belongs. The weight of each scene is then stored, together with the features of each scene, in a table similar to the scene feature table of FIG. 5 (hereinafter called the chapter image management table).
5. Close the setting screen.
Program meta-information may be used as a weighting criterion. For example, if the meta-information obtained from the electronic program guide shows that a program is a "news" program, the weighting may be based not on the loudness of each scene but on the portions where news telops appear.
A program for causing a computer or other machine or device (hereinafter, a computer or the like) to realize any of the above functions can be recorded on a recording medium readable by the computer or the like. The function can then be provided by having the computer or the like read and execute the program on the recording medium.
Claims (17)
- A program data processing device comprising: a reading unit that reads a data portion included in program data from a file storing the program data; a feature extraction unit that extracts feature information for distinguishing reproduction information reproduced from the data portion from reproduction information reproduced from other data portions; a weight acquisition unit that acquires the weight set for the extracted feature information from a weight table storage unit in which a weight is set for each piece of feature information of the program data; and a weighting unit that assigns the acquired weight to the data portion from which the corresponding feature was extracted.
- The program data processing device according to claim 1, further comprising a weighted playback unit that plays back the program data while adjusting the playback speed according to the weight assigned to each data portion.
- The program data processing device according to claim 1 or 2, further comprising a reproduction data weight storage unit that stores the weight in association with the data portion.
- The program data processing device according to any one of claims 1 to 3, wherein the data portion is a data portion divided by classification based on differences in viewing effects, classification by time designation, classification by frame designation within the program data, or classification based on information indicating the program structure, and the differences in viewing effects are detected by at least one of a change in video, a change in sound, a sound level, the presence or absence of character information, and a change in character information.
- The program data processing device according to any one of claims 1 to 4, further comprising: an operation detection unit that detects a user operation when the data portion is played back; and a weight creation unit that increases the weight for the feature information extracted from the data portion when the user lowers the playback speed while the data portion is being played back, decreases the weight for the feature information extracted from the data portion when the user raises the playback speed, and sets the weight in the weight table storage unit together with the feature information.
- The program data processing device according to claim 5, wherein the weight creation unit decreases the weight for the feature information extracted from the data portion when playback of at least part of the data portion is skipped.
- The program data processing device according to any one of claims 1 to 6, comprising: a still image display unit that extracts and displays a still image from the video included in each playback unit; and an operation unit that accepts a weight setting for the displayed still image, wherein the weighting unit sets the accepted weight for the data portion containing the still image for which the weight setting was accepted.
- A program data processing method in which a computer executes: a reading step of reading a data portion included in program data from a file storing the program data; a feature extraction step of extracting feature information for distinguishing reproduction information reproduced from the data portion from reproduction information reproduced from other data portions; a weight acquisition step of acquiring the weight set for the extracted feature information from a weight table storage unit in which a weight is set for each piece of feature information of the program data; and a weighting step of assigning the acquired weight to the data portion from which the corresponding feature was extracted.
- The program data processing method according to claim 8, further executing a reproduction data weight storage step of storing the weight in association with the data portion.
- The program data processing method according to claim 8 or 9, further executing: an operation detection step of detecting a user operation when the data portion is played back; a step of increasing the weight for the feature information extracted from the data portion when the user lowers the playback speed while the data portion is being played back; a step of decreasing the weight for the feature information extracted from the data portion when the playback speed is raised; and a weight creation step of setting the weight in the weight table storage unit together with the feature information.
- The program data processing method according to any one of claims 8 to 10, further executing a step of decreasing the weight for the feature information extracted from the data portion when playback of at least part of the data portion is skipped.
- The program data processing method according to any one of claims 8 to 11, further executing: a still image display step of extracting and displaying a still image from the video included in each playback unit; a step of accepting a weight setting for the displayed still image; and a step of setting the accepted weight for the data portion containing the still image for which the weight setting was accepted.
- A program for causing a computer to execute: a reading step of reading a data portion included in program data from a file storing the program data; a feature extraction step of extracting feature information for distinguishing reproduction information reproduced from the data portion from reproduction information reproduced from other data portions; a weight acquisition step of acquiring the weight set for the extracted feature information from a weight table storage unit in which a weight is set for each piece of feature information of the program data; and a weighting step of assigning the acquired weight to the data portion from which the corresponding feature was extracted.
- The program according to claim 13, for further causing the computer to execute a reproduction data weight storage step of storing the weight in association with the data portion.
- The program according to claim 13 or 14, for further causing the computer to execute: an operation detection step of detecting a user operation when the data portion is played back; a step of increasing the weight for the feature information extracted from the data portion when the user lowers the playback speed while the data portion is being played back; a step of decreasing the weight for the feature information extracted from the data portion when the playback speed is raised; and a weight creation step of setting the weight in the weight table storage unit together with the feature information.
- The program according to any one of claims 13 to 15, for further causing the computer to execute a step of decreasing the weight for the feature information extracted from the data portion when playback of at least part of the data portion is skipped.
- The program according to any one of claims 13 to 16, for further causing the computer to execute: a still image display step of extracting and displaying a still image from the video included in each playback unit; a step of accepting a weight setting for the displayed still image; and a step of setting the accepted weight for the data portion containing the still image for which the weight setting was accepted.
Priority Applications (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010543690A JPWO2010073355A1 (ja) | 2008-12-26 | 2008-12-26 | 番組データ処理装置、方法、およびプログラム |
DE112008004201T DE112008004201T5 (de) | 2008-12-26 | 2008-12-26 | Programmdatenverarbeitungsvorrichtung. -Verfahren und - Programm |
CN2008801325094A CN102265609A (zh) | 2008-12-26 | 2008-12-26 | 节目数据处理装置、方法和程序 |
PCT/JP2008/073694 WO2010073355A1 (ja) | 2008-12-26 | 2008-12-26 | 番組データ処理装置、方法、およびプログラム |
KR1020117014121A KR20110097858A (ko) | 2008-12-26 | 2008-12-26 | 프로그램 데이터 처리 장치, 프로그램 데이터 처리 방법, 및 프로그램을 기록한 컴퓨터 판독가능한 기록 매체 |
US13/163,130 US20110249956A1 (en) | 2008-12-26 | 2011-06-17 | Program data processing device and method |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2008/073694 WO2010073355A1 (ja) | 2008-12-26 | 2008-12-26 | 番組データ処理装置、方法、およびプログラム |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US13/163,130 Continuation US20110249956A1 (en) | 2008-12-26 | 2011-06-17 | Program data processing device and method |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2010073355A1 true WO2010073355A1 (ja) | 2010-07-01 |
Family
ID=42287023
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2008/073694 WO2010073355A1 (ja) | 2008-12-26 | 2008-12-26 | 番組データ処理装置、方法、およびプログラム |
Country Status (6)
Country | Link |
---|---|
US (1) | US20110249956A1 (ja) |
JP (1) | JPWO2010073355A1 (ja) |
KR (1) | KR20110097858A (ja) |
CN (1) | CN102265609A (ja) |
DE (1) | DE112008004201T5 (ja) |
WO (1) | WO2010073355A1 (ja) |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120039584A1 (en) * | 2010-08-10 | 2012-02-16 | Yoshinori Takagi | Moving image processing apparatus, moving image processing method, and program |
JP2016063494A (ja) * | 2014-09-19 | 2016-04-25 | ヤフー株式会社 | 動画処理装置、動画処理方法および動画処理プログラム |
JP2016201680A (ja) * | 2015-04-10 | 2016-12-01 | 日本電信電話株式会社 | 再生速度調整装置、再生速度調整方法及び再生速度調整プログラム |
JP2017517995A (ja) * | 2014-04-11 | 2017-06-29 | サムスン エレクトロニクス カンパニー リミテッド | 要約コンテンツサービスのための放送受信装置及び方法 |
JP2022140113A (ja) * | 2021-03-12 | 2022-09-26 | 株式会社コナミデジタルエンタテインメント | 端末装置、サーバ装置、端末装置の制御方法、サーバ装置の制御方法、配信システム、表示システム、及び、プログラム |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2011217197A (ja) * | 2010-03-31 | 2011-10-27 | Sony Corp | Electronic device, playback control system, playback control method, and program |
US9846696B2 (en) | 2012-02-29 | 2017-12-19 | Telefonaktiebolaget Lm Ericsson (Publ) | Apparatus and methods for indexing multimedia content |
KR101909030B1 (ko) | 2012-06-08 | 2018-10-17 | LG Electronics Inc. | Video editing method and digital device therefor |
US9633015B2 (en) * | 2012-07-26 | 2017-04-25 | Telefonaktiebolaget Lm Ericsson (Publ) | Apparatus and methods for user generated content indexing |
WO2014185834A1 (en) | 2013-05-14 | 2014-11-20 | Telefonaktiebolaget L M Ericsson (Publ) | Search engine for textual content and non-textual content |
US9465435B1 (en) * | 2013-08-26 | 2016-10-11 | Google Inc. | Segmentation of a video based on user engagement in respective segments of the video |
WO2015030645A1 (en) | 2013-08-29 | 2015-03-05 | Telefonaktiebolaget L M Ericsson (Publ) | Methods, computer program, computer program product and indexing systems for indexing or updating index |
CN105493436B (zh) | 2013-08-29 | 2019-09-10 | 瑞典爱立信有限公司 | 用于向授权用户分发内容项目的方法、内容拥有者设备 |
CN103501434A (zh) * | 2013-09-17 | 2014-01-08 | Beijing QIYI Century Science & Technology Co., Ltd. | Video quality analysis method and device |
CN104506947B (zh) * | 2014-12-24 | 2017-09-05 | Fuzhou University | Semantic-content-based adaptive adjustment method for video fast-forward/fast-rewind speed |
US10728624B2 (en) | 2017-12-29 | 2020-07-28 | Rovi Guides, Inc. | Systems and methods for modifying fast-forward speeds based on the user's reaction time when detecting points of interest in content |
US20220312079A1 (en) * | 2021-03-23 | 2022-09-29 | Rovi Guides, Inc. | Systems and methods to provide adaptive play settings |
Family Cites Families (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4325767B2 (ja) * | 1999-08-30 | 2009-09-02 | Panasonic Corporation | Data receiving device and data receiving method |
JP2005223451A (ja) | 2004-02-03 | 2005-08-18 | Matsushita Electric Ind Co Ltd | Playback device, data transmission/reception system, and playback method |
WO2006016605A1 (ja) * | 2004-08-10 | 2006-02-16 | Sony Corporation | Information signal processing method, information signal processing device, and computer program recording medium |
JP4399865B2 (ja) | 2005-07-20 | 2010-01-20 | Casio Hitachi Mobile Communications Co., Ltd. | Recorded program playback device, recorded program playback method, and recorded program playback program |
JP2008004170A (ja) | 2006-06-22 | 2008-01-10 | Funai Electric Co Ltd | Information recording/playback device |
JP4845755B2 (ja) * | 2007-01-30 | 2011-12-28 | Canon Inc | Image processing device, image processing method, program, and storage medium |
2008
- 2008-12-26 DE DE112008004201T patent/DE112008004201T5/de not_active Withdrawn
- 2008-12-26 JP JP2010543690A patent/JPWO2010073355A1/ja active Pending
- 2008-12-26 CN CN2008801325094A patent/CN102265609A/zh active Pending
- 2008-12-26 WO PCT/JP2008/073694 patent/WO2010073355A1/ja active Application Filing
- 2008-12-26 KR KR1020117014121A patent/KR20110097858A/ko active IP Right Grant

2011
- 2011-06-17 US US13/163,130 patent/US20110249956A1/en not_active Abandoned
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH08292965A (ja) * | 1995-02-20 | 1996-11-05 | Hitachi Ltd | Video support system |
JP2003177788A (ja) * | 2001-12-12 | 2003-06-27 | Fujitsu Ltd | Voice dialogue system and method |
JP2006180305A (ja) * | 2004-12-24 | 2006-07-06 | Hitachi Ltd | Video playback device |
JP2007306055A (ja) * | 2006-05-08 | 2007-11-22 | Sharp Corp | Digest creation device |
JP2008096482A (ja) * | 2006-10-06 | 2008-04-24 | Matsushita Electric Ind Co Ltd | Receiving terminal, network learning support system, receiving method, and network learning support method |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120039584A1 (en) * | 2010-08-10 | 2012-02-16 | Yoshinori Takagi | Moving image processing apparatus, moving image processing method, and program |
US8682143B2 (en) * | 2010-08-10 | 2014-03-25 | Sony Corporation | Moving image processing apparatus, moving image processing method, and program |
JP2017517995A (ja) * | 2014-04-11 | 2017-06-29 | Samsung Electronics Co., Ltd. | Broadcast receiving device and method for a summary content service |
JP2016063494A (ja) * | 2014-09-19 | 2016-04-25 | Yahoo Japan Corporation | Video processing device, video processing method, and video processing program |
JP2016201680A (ja) * | 2015-04-10 | 2016-12-01 | Nippon Telegraph and Telephone Corporation | Playback speed adjustment device, playback speed adjustment method, and playback speed adjustment program |
JP2022140113A (ja) * | 2021-03-12 | 2022-09-26 | Konami Digital Entertainment Co., Ltd. | Terminal device, server device, terminal device control method, server device control method, distribution system, display system, and program |
JP7401918B2 (ja) | 2021-03-12 | 2023-12-20 | Konami Digital Entertainment Co., Ltd. | Terminal device, server device, terminal device control method, server device control method, distribution system, display system, and program |
Also Published As
Publication number | Publication date |
---|---|
US20110249956A1 (en) | 2011-10-13 |
JPWO2010073355A1 (ja) | 2012-05-31 |
KR20110097858A (ko) | 2011-08-31 |
DE112008004201T5 (de) | 2012-06-21 |
CN102265609A (zh) | 2011-11-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2010073355A1 (ja) | Program data processing device, method, and program | |
EP2107477B1 (en) | Summarizing reproduction device and summarizing reproduction method | |
JP4081120B2 (ja) | Recording device, recording/playback device | |
JP4767216B2 (ja) | Digest generation device, method, and program | |
US20080059526A1 (en) | Playback apparatus, searching method, and program | |
US20090129749A1 (en) | Video recorder and video reproduction method | |
US20080066104A1 (en) | Program providing method, program for program providing method, recording medium which records program for program providing method and program providing apparatus | |
US8103149B2 (en) | Playback system, apparatus, and method, information processing apparatus and method, and program therefor | |
JP2008148077A (ja) | Video playback device | |
JP2005538634A (ja) | Method and device for presenting content | |
US20070024753A1 (en) | Image processing apparatus, image processing method, and image processing program | |
JP4735413B2 (ja) | Content playback device and content playback method | |
US8243199B2 (en) | Apparatus, method and program for enabling content displayed on a display screen to be switched | |
JP2009118168A (ja) | Program recording/playback device and program recording/playback method | |
US20070179786A1 (en) | Av content processing device, av content processing method, av content processing program, and integrated circuit used in av content processing device | |
WO2007046171A1 (ja) | Recording/playback device | |
JP4929128B2 (ja) | Recording/playback device | |
JP5033653B2 (ja) | Video recording/playback device and video playback device | |
JPWO2007039995A1 (ja) | Digest creation device and program therefor | |
JP2008153920A (ja) | Moving image list display device | |
JP5266981B2 (ja) | Electronic device, information processing method, and program | |
US20080095512A1 (en) | Information Signal Processing Method And Apparatus, And Computer Program Product | |
JP2007288391A (ja) | Hard disk device | |
JP2007095135A (ja) | Video recording/playback device | |
JP2008199456A (ja) | Program recording/playback device | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| WWE | Wipo information: entry into national phase | Ref document number: 200880132509.4; Country of ref document: CN |
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 08879151; Country of ref document: EP; Kind code of ref document: A1 |
| ENP | Entry into the national phase | Ref document number: 2010543690; Country of ref document: JP; Kind code of ref document: A; Ref document number: 20117014121; Country of ref document: KR; Kind code of ref document: A |
| NENP | Non-entry into the national phase | Ref country code: DE |
| WWE | Wipo information: entry into national phase | Ref document number: 112008004201; Country of ref document: DE; Ref document number: 1120080042012; Country of ref document: DE |
| NENP | Non-entry into the national phase | Ref country code: DE; Effective date: 20110627 |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 08879151; Country of ref document: EP; Kind code of ref document: A1 |