WO2018224839A2 - Methods and systems for generating a reaction video - Google Patents
Methods and systems for generating a reaction video
- Publication number
- WO2018224839A2 WO2018224839A2 PCT/GB2018/051564 GB2018051564W WO2018224839A2 WO 2018224839 A2 WO2018224839 A2 WO 2018224839A2 GB 2018051564 W GB2018051564 W GB 2018051564W WO 2018224839 A2 WO2018224839 A2 WO 2018224839A2
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- reaction
- data
- video
- video data
- live stream
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/643—Communication protocols
- H04N21/6437—Real-time Transport Protocol [RTP]
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/20—Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
- H04N21/23—Processing of content or additional data; Elementary server operations; Server middleware
- H04N21/234—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
- H04N21/2343—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
- H04N21/234363—Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/41—Structure of client; Structure of client peripherals
- H04N21/422—Input-only peripherals, i.e. input devices connected to specially adapted client devices, e.g. global positioning system [GPS]
- H04N21/4223—Cameras
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/40—Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
- H04N21/43—Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
- H04N21/431—Generation of visual interfaces for content selection or interaction; Content or additional data rendering
- H04N21/4312—Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/63—Control signaling related to video distribution between client, server and network components; Network processes for video distribution between server and clients or between remote clients, e.g. transmitting basic layer and enhancement layers over different transmission paths, setting up a peer-to-peer communication via Internet between remote STB's; Communication protocols; Addressing
- H04N21/64—Addressing
- H04N21/6405—Multicasting
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/60—Network structure or processes for video distribution between server and client or between remote clients; Control signalling between clients, server and network components; Transmission of management data between server and client, e.g. sending from server to client commands for recording incoming content stream; Communication details between server and client
- H04N21/65—Transmission of management data between client and server
- H04N21/658—Transmission by the client directed to the server
- H04N21/6582—Data stored in the client, e.g. viewing habits, hardware capabilities, credit card number
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N21/00—Selective content distribution, e.g. interactive television or video on demand [VOD]
- H04N21/80—Generation or processing of content or additional data by content creator independently of the distribution process; Content per se
- H04N21/85—Assembly of content; Generation of multimedia applications
- H04N21/854—Content authoring
Definitions
- the present invention relates to methods and systems for generating a reaction video showing the reaction of at least one viewer of a video presentation to an event occurring in that video presentation.
- the invention has particular relevance to generating a reaction video showing the reaction of multiple viewers of the same video presentation on remote display devices.
- a computer-implemented method of generating a reaction video in response to an event in a live stream of video content, the method being performed by apparatus having an internal clock and comprising: receiving live stream data, the live stream data comprising one or more segments of live stream video data and associated metadata, wherein the metadata comprises at least one timestamp; buffering the live stream video data; recording reaction video data captured by a camera; rendering the live stream video data; calculating a temporal offset using the at least one timestamp and a timing provided by the internal clock; receiving event data indicating a time related to the event; determining, using the event data and the calculated temporal offset, the time at which the event is rendered; and saving a portion of reaction video data corresponding to an interval spanning a temporal window encompassing the time at which the event is rendered.
- Calculating a temporal offset using the at least one timestamp allows a reaction video to be generated in which the reaction of a viewer to an event in the live stream occurs at a prescribed time within the reaction video. In this way, if reaction videos of different viewers are generated in response to the same event, and the reaction videos are rendered simultaneously, the reactions of the different viewers occur simultaneously.
- a data processing system for generating a reaction video in response to an event in a live stream of video content
- the system comprising one or more user devices and a server, wherein each of the one or more user devices comprises a respective internal clock and is communicatively coupled to the server and is operable to: receive live stream data, the live stream data comprising one or more segments of live stream video data and associated metadata, wherein the metadata comprises at least one timestamp; buffer the live stream video data; record reaction video data captured by a camera; render the live stream video data; calculate a temporal offset using the at least one timestamp and a timing from the internal clock for that user device; receive event data indicating a time related to the event; determine, using the event data and the calculated temporal offset, the time at which the event is rendered; and save a portion of reaction video data corresponding to an interval spanning a temporal window encompassing the time at which the event is rendered, and wherein the server is adapted to receive reaction video data from the one or more user devices.
- a data processing system having a configuration as described allows for the server to generate composite reaction videos using reaction video data from multiple user devices without the need to synchronise the live streams of video data between the multiple user devices.
- Figure 1 schematically shows a first example of a data processing system used to generate reaction videos in response to an event in a live stream of video data.
- Figure 2 schematically shows an event server in the system of Figure 1.
- Figure 3 schematically shows a reaction video server in the system of Figure 1.
- Figure 4 schematically shows a user device in the system of Figure 1.
- Figure 5 is a flow chart representing the routine executed by the data processing system of Figure 1 to render a live stream of video data.
- Figure 6 is a flow chart representing the routine executed by the data processing system of Figure 1 in response to an event in a live stream of video data.
- Figure 7 is a flow chart representing an alternative routine executed by the data processing system of Figure 1 in response to an event in a live stream of video data.
- Figure 8 is a flow chart representing the routine executed by a user device in response to a drop in the rate of data transfer between the streaming server and a user device.
- Figure 9 shows a second example of a data processing system used to generate reaction videos in response to an event in a live stream of video data.
- Figure 10 shows a third example of a data processing system used to generate reaction videos in response to an event in a live stream of video data.
- Figure 11 shows a fourth example of a data processing system used to generate reaction videos in response to an event in a live stream of video data.
- Figure 12 shows an example of a data processing system used to generate a reaction video in response to a social media response.
- Figure 13 shows an example of an alternative data processing system used to generate reaction videos in response to an event in a live stream of video data.
- video content refers to a media presentation having visual content and possibly also having audio content.
- video data refers to data that, when decoded using a suitable decoder, is rendered to generate visual output and possibly also processed to generate audio signals.
- broadcast is to be interpreted in a broad sense, and is intended to include distribution of data over a network via a subscription-free service, as well as distribution of data to a limited number of subscribing client devices (sometimes referred to as multicasting).
- a first example of a data processing system in accordance with the present invention includes streaming server 110, event server 120, reaction video server 130, and one or more user devices of which only user device 160, operated by user 190, is shown.
- Each of the one or more user devices may be, for example, a personal computer, a laptop, a tablet, a smartphone, a games console, or a smart television, and the one or more user devices are not necessarily of the same type as one another.
- user device 160 is a personal computer.
- Streaming server 110, event server 120, reaction video server 130, and each of the one or more user devices are interconnected via network 100.
- network 100 is the Internet.
- Event server 120 and user device 160 are further able to communicate with each other via Message Queuing Telemetry Transport (MQTT) broker 150.
- event server 120 includes power supply 131 and system bus 133.
- System bus 133 is connected to: CPU 135; memory 137; network interface 139; MQTT publisher module 141; and internal clock 143.
- Memory 137 contains storage 145, working memory 147, and cache 149.
- Working memory 147 includes Random Access Memory (RAM) and Read Only Memory (ROM).
- Storage 145 includes one or more of: hard disk drives; optical disk storage; flash memory; and solid state drives.
- reaction video server 130 includes power supply 331 and system bus 333.
- System bus 333 is connected to: CPU 335; memory 337; network interface 339; and internal clock 343.
- Memory 337 contains storage 345, working memory 347, and cache 349.
- Working memory 347 includes Random Access Memory (RAM) and Read Only Memory (ROM).
- Storage 345 includes one or more of: hard disk drives; optical disk storage; flash memory; and solid state drives.
- user device 160 includes power supply 161 and system bus 163.
- System bus 163 is connected to: CPU 165; memory 167; internal clock 169; network interface 171; MQTT subscriber module 173; camera 175; speaker 177; display 179; and input/output (I/O) devices 181.
- I/O devices 181 may include, for example, a keyboard, a mouse, a touchscreen, a microphone, headphones, additional displays and additional speakers.
- Memory 167 contains storage 183, working memory 185, and cache 187.
- Working memory 185 includes RAM and ROM.
- Storage 183 includes one or more of: hard disk drives; optical disk storage; flash memory; and solid state drives.
- Internal clock 169 refers to hardware and/or software elements that function together as a clock.
- Internal clock 169 may be a different combination of hardware and software elements to the main internal clock of user device 160.
- internal clock 169 is synchronised to Coordinated Universal Time (UTC), though the method described hereafter does not require internal clock 169 to be synchronised to UTC, or to be synchronised with the internal clocks of any of the servers in Figure 1.
- Streaming server 110 is operated by a live stream service provider, and broadcasts live stream data to user devices via network 100.
- Event server 120 and reaction video server 130 are operated by a reaction video service provider.
- Event server 120 provides event data, via MQTT broker 150, to the user devices receiving live stream data from streaming server 110, including user device 160, in response to an event occurring in the live stream data provided by streaming server 110.
- Reaction video server 130 stores reaction video data, and distributes reaction videos to user devices subscribed to the reaction video service.
- User 190 has an account with the operator of the reaction video service, and watches the live stream of video content using user device 160.
- User device 160 records reaction video data from camera 175, where camera 175 is arranged to face user 190.
- In response to receiving event data from event server 120, user device 160 generates a video containing the reaction of user 190 to the event, and sends the reaction video to reaction video server 130.
- a data processing method begins with user device 160 requesting, at S501, live stream data from streaming server 110.
- the user device 160 requests live stream data in response to user 190 selecting a live stream of video content from a list provided by an application stored on user device 160, the application being hosted by event server 120.
- event server 120 connects, at S503, user device 160 to streaming server 110.
- connecting user device 160 to streaming server 110 involves providing user device 160 with a uniform resource locator (URL) corresponding to the network location from which the live stream data is broadcasted.
- event server 120 connects, at S505, user device 160 to a messaging service.
- the messaging service is a simple messaging service implemented using the MQTT protocol.
- connecting user device 160 to the messaging service includes connecting MQTT subscriber module 173 of user device 160 to MQTT broker 150.
- the MQTT protocol allows the transfer of simple messages between event server 120 and user device 160 with a low bandwidth requirement.
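By way of illustration only, a minimal sketch of such a subscriber using the paho-mqtt Python library (1.x callback style); the broker address, topic name and payload fields are assumptions, as the patent does not specify them:

```python
import json
import paho.mqtt.client as mqtt

BROKER_HOST = "mqtt.example.com"      # hypothetical MQTT broker address
EVENT_TOPIC = "livestream/42/events"  # hypothetical per-stream topic

def on_connect(client, userdata, flags, rc):
    # Subscribe once the connection to the broker is established.
    client.subscribe(EVENT_TOPIC)

def on_message(client, userdata, msg):
    # Per the description, event data carries a UTC timestamp (ms) and an
    # event type; the JSON field names here are assumptions.
    event = json.loads(msg.payload)
    handle_event(event["utc_timestamp_ms"], event["type"])

def handle_event(utc_timestamp_ms, event_type):
    print(f"event {event_type} broadcast at {utc_timestamp_ms}")

client = mqtt.Client()  # paho-mqtt 1.x constructor
client.on_connect = on_connect
client.on_message = on_message
client.connect(BROKER_HOST, 1883)
client.loop_forever()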
- Streaming server 110 broadcasts, at S507, live stream data over network 100.
- broadcasting refers to the process of a server making data available to be downloaded from a network location by one or more client devices, and the time at which data is broadcasted refers to the time at which the data is first made available, by the server, to be downloaded by the one or more client devices.
- the live stream data includes segments of live stream video data and associated metadata.
- each segment of live stream video data corresponds to the same duration of video content
- each segment of live stream video data corresponds to a duration of video content of between one and five seconds.
- the metadata associated with the live stream video data includes a UTC timestamp indicating the time, to a precision of one millisecond, at which broadcasting of live stream video data from streaming server 110 begins.
- the metadata further includes data indicating the duration of video content that each segment of live stream video data corresponds to, and data indicating the order of the segments in the live stream.
- user devices download available segments from streaming server 110 in the order specified by the metadata.
- User device 160 receives, at S509, live stream data from streaming server 110.
- User device 160 stores the live stream metadata in working memory 185 and buffers, at S511, segments of live stream video data in the order specified by the metadata.
- User device 160 buffers the segments of live stream video data by storing the segments in working memory 185, such that they can be decoded and rendered in the correct order at a later time.
- As user device 160 buffers segments of live stream video data, user device 160 activates, at S513, camera 175. Camera 175 is arranged to face user 190 and is used to capture reaction video data.
- After a prescribed number of the segments of live stream video data have been buffered by user device 160, user device 160 starts rendering, at S515, the live stream video data.
- the prescribed number of segments may depend on one or more of: the data size of each segment; the duration of video content that each segment corresponds to; a data transfer rate measured by user device 160; and data received by user device 160 indicating a data transfer rate.
- the prescribed number of segments is sufficiently high to allow rendering of video content to continue in the event of typical fluctuations in data transfer rate.
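The patent lists these inputs but prescribes no formula; one illustrative heuristic combining them might be:

```python
# Illustrative heuristic only: buffer several segments' worth of playback,
# scaled up when downloads barely keep pace with playback.
def prescribed_segment_count(segment_bytes: int,
                             segment_duration_s: float,
                             transfer_rate_bps: float,
                             minimum: int = 3) -> int:
    # Time needed to download one segment at the measured transfer rate.
    download_time_s = segment_bytes * 8 / transfer_rate_bps
    # pace > 1 means segments download slower than they play back.
    pace = download_time_s / segment_duration_s
    return max(minimum, round(minimum * (1 + pace)))
```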
- user device 160 stores live stream video data in working memory 185 for a prescribed period of time after it is rendered, for example ten minutes, and live stream video data that has been stored for this prescribed period of time is automatically deleted.
- Storing live stream video data is necessary for later stages in the data processing routine in which live stream video data is used to generate composite reaction videos. Compared with storing the live stream video data indefinitely, storing live stream video data for a prescribed period of time is advantageous if either or both of the working memory 185 and storage 183 of user device 160 have a low data capacity, and particularly in cases where the duration of the live stream of video content is long or unbounded.
- By recording reaction video data captured by camera 175, user device 160 stores the reaction video data in working memory 185.
- user device 160 stores reaction video data in working memory 185 for a prescribed period of time, for example ten minutes, and reaction video data that has been stored for this prescribed period of time is automatically deleted.
- user device 160 calculates, at S519, a temporal offset between the broadcasting and the rendering of the live stream of video data.
- the temporal offset is the difference between the time at which streaming server 110 broadcasts the live stream video data and the time measured by internal clock 169 when user device 160 renders the live stream video data.
- the temporal offset is calculated as the difference between the UTC timestamp received at S509 and the time provided by internal clock 169 when user device 160 starts rendering the live stream video data.
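Expressed as a sketch, with time.time() standing in for internal clock 169 (any fixed skew between the internal clock and UTC is simply absorbed into the offset, which is why UTC synchronisation is not required):

```python
import time

def calculate_temporal_offset(broadcast_utc_ms: int) -> int:
    """Step S519: offset between broadcasting and rendering.

    broadcast_utc_ms is the UTC timestamp from the live stream metadata
    (received at S509). The returned offset folds in network and buffering
    delay together with any skew of the internal clock relative to UTC.
    """
    render_start_ms = int(time.time() * 1000)  # internal clock at render start
    return render_start_ms - broadcast_utc_ms
```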
- the routine of Figure 6 begins when event server 120 sends, at S601, event data to user device 160 as a push message through MQTT broker 150 using the MQTT protocol.
- event server 120 receives information indicating that an event has occurred from a third party application programming interface (API), though in other examples, other methods of identifying an event may be used.
- the event data sent at S601 includes a UTC timestamp indicating a time, to a precision of one millisecond, at which an event is broadcasted by streaming server 110. An event is assumed to be broadcast at a single point in time.
- the event data further includes information indicating the type of the event. For example, if the live stream of video content corresponds to live coverage of a football match, types of event include: a goal being scored; a free kick being awarded; a free kick being taken; and the end of the football match.
- user device 160 determines, at S605, the time, in this example to a precision of one millisecond, that internal clock 169 measures when the event is rendered by user device 160.
- the time measured by internal clock 169 when user device 160 renders the event is determined using the UTC timestamp included within the event data received from event server 120 and the temporal offset calculated at S519.
- user device 160 looks up, at S607, data from a table stored in storage 183. For each type of event associated with the live stream, the table stores a relative starting time and a relative ending time. Adding the relative starting time to the time determined at S605 defines the start of a temporal window, and adding the relative ending time to the time determined at S605 defines the end of the temporal window. For example, if the live stream of video content corresponds to a live broadcast of a football match, a goal being scored may correspond to a relative starting time of minus five seconds and a relative ending time of ten seconds, thereby defining a temporal window with a duration of fifteen seconds, in which the goal being scored is rendered by user device 160 five seconds after the start of the temporal window.
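A sketch of steps S605 and S607 under these definitions; the goal entry reproduces the example values above, and the table would hold one entry per event type:

```python
WINDOW_TABLE = {
    # event type: (relative start, relative end), both in milliseconds
    "goal_scored": (-5_000, 10_000),  # values from the football example
}

def temporal_window(event_broadcast_utc_ms: int,
                    temporal_offset_ms: int,
                    event_type: str) -> tuple[int, int]:
    # S605: internal-clock time at which the event is rendered.
    render_time_ms = event_broadcast_utc_ms + temporal_offset_ms
    # S607: add the relative starting and ending times to that time.
    rel_start_ms, rel_end_ms = WINDOW_TABLE[event_type]
    return render_time_ms + rel_start_ms, render_time_ms + rel_end_ms
```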
- After user device 160 measures, using internal clock 169, that the temporal window defined at S607 has passed, user device 160 copies, at S609, a portion of reaction video data captured by camera 175 during the temporal window defined at S607. User device 160 saves the copy in working memory 185. Unlike the original portion of reaction video data, the copy will not be deleted after a prescribed period of time elapses from the time at which the portion of reaction video data is recorded.
- User device 160 further copies, at S611, a portion of live stream video data that was rendered during the temporal window defined at S607.
- User device 160 saves the copy in working memory 185. Unlike the original portion of live stream video data, the copy will not be deleted after a prescribed period of time elapses from the time at which the portion of live stream video data is stored.
- the saved copy of reaction video data and the saved copy of live stream video data are such that if the two copies are rendered simultaneously, the reaction of user 190 will be synchronised with the event, with the maximum temporal separation between rendering of the reaction and rendering of the event being approximately one millisecond.
- After user device 160 saves the copy of live stream data at S611, user device 160 provides user 190 with a list of options including “delete reaction”, “save reaction”, and “share reaction”. In response to user 190 selecting “delete reaction”, user device 160 discards, at S615, the copies of reaction video data and live stream video data stored at S609 and S611. In response to user 190 selecting “save reaction” or “share reaction”, user device 160 generates, at S617, a composite reaction video using the saved copy of reaction video data and the saved copy of live stream video data.
- a composite reaction video is a single video file in which live stream video content and reaction video content are presented together, for example the reaction video content being presented adjacent to the live stream video content, or a "video in video" format in which the reaction video content is presented in a small frame embedded within a larger frame, the live stream video content being presented within the larger frame.
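By way of illustration, a "video in video" composite of this kind could be produced with ffmpeg invoked from Python; the file names and overlay geometry below are assumptions, not taken from the patent:

```python
import subprocess

subprocess.run([
    "ffmpeg", "-y",
    "-i", "live_portion.mp4",      # saved copy of live stream video data
    "-i", "reaction_portion.mp4",  # saved copy of reaction video data
    "-filter_complex",
    # Scale the reaction clip down to 320px wide, then overlay it in the
    # bottom-right corner of the live stream clip with a 16px margin.
    "[1:v]scale=320:-1[pip];[0:v][pip]overlay=W-w-16:H-h-16",
    "composite_reaction.mp4",
], check=True)
```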
- User device 160 sends, at S619, the composite reaction video to reaction video server 130.
- Reaction video server 130 stores, at S621, the composite reaction video, along with user data corresponding to user 190, in storage 345.
- User 190 is able to view the composite reaction video at a later time using a user device, for example user device 160.
- reaction video server 130 sends messages to user devices approved by user 190 indicating that a composite reaction video has been generated, the messages containing further information such as the title of the live stream of video content and data indicating the network location of the composite reaction video (for example a URL).
- user device 160 generates a composite reaction video using reaction video data captured by camera 175, and sends the composite reaction video to reaction video server 130.
- This method is advantageous in cases where a large number of users having accounts with the operator of the reaction video service simultaneously view the same live stream of video content, because according to this method the majority of the data processing operations are performed by user devices, thereby minimising the processing demand experienced by reaction video server 130.
- the composite reaction video generated by the routine of Figure 6 contains reaction video data recorded exclusively by user device 160.
- Figure 7 shows an alternative routine executed by the data processing system of Figure 1 in response to an event in a live stream of video data.
- reaction video server 130 receives the live stream data broadcasted by streaming server 110.
- Steps S701 to S715 of Figure 7 are equivalent to steps S601 to S615 of Figure 6, but in the routine of Figure 7, the step of copying live stream video data is omitted by user device 160.
- In response to user 190 selecting “save reaction” or “share reaction”, user device 160 sends reaction video data, at S717, to reaction video server 130.
- Reaction video server 130 stores, at S719, the reaction video data, along with user data corresponding to user 190, in storage 345.
- Reaction video server 130 generates, at S721, a composite reaction video using the reaction video data received from user device 160 and the live stream data received from streaming server 110 at S509.
- user device 160 may be associated with one or more additional user devices, and each of the one or more additional user devices may also send reaction video data to reaction video server 130.
- reaction video server 130 may generate a composite reaction video using the live stream data received from streaming server 110 at S509 and reaction video data received from one or more user devices.
- Reaction video server 130 stores, at S723, the composite reaction video, along with user data corresponding to user 190, in storage 345.
- It is possible that the connection between streaming server 110 and user device 160 will be interrupted, or that the rate at which data is transferred between streaming server 110 and user device 160 will otherwise be temporarily reduced.
- In such cases, the buffered live stream video data may not be sufficient for rendering of live stream video data to continue, and the rendering will pause at the beginning of a segment.
- the routine executed by user device 160 in such a case depends on whether user device 160 is configured in "catch-up" mode or not. If user device 160 is configured in catch-up mode and the rendering of live stream data is paused due to a reduction in data transfer rate, user device 160 will skip segments of live stream video data before resuming rendering, so that the rendering of live stream video data is not delayed. If user device 160 is not configured in catch-up mode and the rendering of live stream data is paused due to a reduction in data transfer rate, user device 160 will continue to buffer segments of live stream video data, before resuming rendering the live stream video data with a delay.
- In response to the connection between streaming server 110 and user device 160 being interrupted at S801, user device 160 pauses, at S803, rendering of live stream video data. If user device 160 is not configured in catch-up mode, user device 160 continues to buffer, at S805, live stream video data. When a prescribed number of segments have been buffered, user device 160 resumes rendering, at S807, live stream video data.
- the prescribed number of segments may again depend on one or more of: the data size of each segment; the duration of video content that each segment corresponds to; a data transfer rate measured by user device 160; and data received by user device 160 indicating a data transfer rate.
- User device 160 calculates, at S809, a new temporal offset between the broadcasting and the rendering of the live stream of video data.
- the new temporal offset is calculated as the difference between the UTC time at which streaming server 110 broadcasts the paused segment of live stream video data, and the time recorded by internal clock 169 when user device 160 starts rendering the paused segment of live stream video data.
- the time at which streaming server 110 broadcasts the paused segment of live stream video data is calculated using the metadata received from streaming server 110, including the UTC timestamp received at S509, the data indicating the duration of video content that each segment of live stream video data corresponds to, and the metadata indicating the order of the segments in the live stream.
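A sketch of this recalculation, assuming (as the metadata implies) that segments of equal duration are broadcast in order at intervals equal to that duration:

```python
import time

def segment_broadcast_utc_ms(stream_start_utc_ms: int,
                             segment_index: int,
                             segment_duration_ms: int) -> int:
    # Segments are broadcast in order, each covering the same duration, so
    # segment i becomes available i * segment_duration_ms after the start.
    return stream_start_utc_ms + segment_index * segment_duration_ms

def new_temporal_offset(stream_start_utc_ms: int,
                        resumed_segment_index: int,
                        segment_duration_ms: int) -> int:
    now_ms = int(time.time() * 1000)  # stands in for internal clock 169
    return now_ms - segment_broadcast_utc_ms(
        stream_start_utc_ms, resumed_segment_index, segment_duration_ms)
```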
- user device 160 skips, at S811, segments of live stream video data. By skipping segments of live stream video data, user device 160 refrains from downloading one or more consecutive segments of live stream video data.
- user device 160 resumes buffering segments of live stream video data, starting with the first segment that is broadcast after the data transfer rate becomes sufficiently high.
- user device 160 resumes rendering, at S813, live stream video data.
- the prescribed number of segments may again depend on one or more of: the data size of each segment; the duration of video content that each segment corresponds to; a data transfer rate measured by user device 160; and data received by user device 160 indicating a data transfer rate.
- User device 160 calculates, at S815, a new temporal offset between the broadcasting and the rendering of the live stream of video data.
- the new temporal offset is calculated as the difference between the UTC time at which streaming server 110 broadcasts the first segment that is broadcast after the data transfer rate becomes sufficiently high, and the time recorded by internal clock 169 when user device 160 starts rendering the first segment that is broadcast after the data transfer rate becomes sufficiently high.
- Catch-up mode is advantageous for cases in which a significant delay in rendering of live stream video data is unacceptable.
- One example in which a significant delay in rendering of live stream video data is unacceptable is discussed with reference to Figure 11.
- camera 175 is a component of user device 160.
- In other examples, the user device being used to render the live stream video data is a first user device and the camera used to capture reaction video data is a component of a second user device.
- the second user device is connected to the first user device by a wireless connection method.
- the system of Figure 9 has similar components to those of Figure 1, and is similarly adapted to execute the routines of Figures 5, 6, and 7.
- In the system of Figure 9, camera 977 is a peripheral webcam connected to user device 960 via a Bluetooth connection. In this case, there is a delay between the time at which reaction video data is captured by camera 977 and the time at which the reaction video data is recorded by user device 960.
- User device 960 receives reaction metadata associated with the reaction video data received from camera 977.
- reaction metadata includes data indicating the temporal delay between the capturing and recording of reaction video data.
- the second user device has an internal clock that is synchronised with the internal clock of the first user device (for example by both user clocks being synchronised to UTC), and the reaction metadata includes one or more UTC timestamps indicating times at which individual frames of reaction video data are captured by the camera, as measured by the internal clock of the second user device.
- the second user device has an internal clock that is not synchronised with the internal clock of the first user device, and the reaction metadata includes one or more timestamps indicating times at which individual frames of reaction video data are captured by the camera, as measured by the internal clock of the second user device.
- the second user device further sends data to the first user device from which the timing offset between the internal clock of the second user device and the internal clock of the first user device can be determined.
- the second user device records the reaction video data before sending the reaction video data to the first user device.
- In the system of Figure 10, control device 1040 is a mobile phone storing an application hosted by reaction video server 1030.
- user 1090 associates control device 1040 with the user device 1060 prior to the streaming server 1010 broadcasting live stream data.
- associating control device 1040 with user device 1060 is achieved by user device 1060 presenting a Quick Response (QR) code, and control device 1040 scanning the QR code.
- the QR code contains data that allows control device 1040 to connect to MQTT broker 1050 and to reaction video server 1030.
- event server 1020 sends event data to user device 1060 through MQTT broker 1050, and also sends event data to control device 1040 through MQTT broker 1050.
- In response to control device 1040 receiving event data from event server 1020, control device 1040 provides user 1090 with a list of options including “delete reaction”, “save reaction”, and “share reaction”. In response to user 1090 selecting “save reaction” or “share reaction”, user device 1060 and reaction video server 1030 generate and save a composite reaction video in accordance with the routine of Figure 6 or Figure 7 (depending on the configuration of user device 1060 and reaction video server 1030).
- This embodiment is advantageous in situations where multiple users watch a live stream of video content using the same user device, for example in cases where the user device is in a public location such as a cafe or a public house. In the case that the user device is located in a cafe or a public house, control device 1040 may be operated by a member of staff of the cafe or public house.
- virtual conference room 1170 contains three user devices.
- a virtual conference room may contain any number of user devices.
- Each user device in Figure 11 is connected to network 1100, and is further connected to MQTT broker 1150.
- the user devices in a virtual conference room may be connected via an additional network such as a Local Area Network (LAN), but the user devices in a virtual conference room are not necessarily located in the vicinity of each other.
- the users of the user devices in virtual conference room 1170 are in communication with one another, for example by video conference.
- Each user device in conference room 1170 is configured in catch-up mode, as discussed above with reference to Figure 8, and each user device has an internal clock synchronised with UTC time.
- Each user device in virtual conference room 1170 is used to view the same live stream of video data.
- each user device in virtual conference room 1170 buffers segments of live stream video data.
- Each of the user devices in virtual conference room 1170 starts rendering live stream video data at the same time.
- each user device in virtual conference room 1170 starts rendering live stream data five seconds after the live stream data is first broadcasted, where the time at which the live stream data is first broadcasted is indicated by a UTC timestamp within the live stream metadata. Due to the fact that each user device is configured in catch-up mode, the rendering of live stream video data remains synchronised throughout the duration of the live stream of video content. However, rendering of live stream video data is not necessarily synchronised between user devices that are not in the same virtual conference room as each other.
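A sketch of this synchronised start; the five-second delay matches the example above, and the start_rendering callback is a hypothetical hook into the player:

```python
import time

START_DELAY_MS = 5_000  # five seconds, per the example above

def wait_for_synchronised_start(stream_start_utc_ms: int, start_rendering) -> None:
    # Assumes the device's internal clock is synchronised to UTC, as is
    # required of user devices in a virtual conference room.
    target_s = (stream_start_utc_ms + START_DELAY_MS) / 1000
    remaining_s = target_s - time.time()
    if remaining_s > 0:
        time.sleep(remaining_s)
    start_rendering()  # caller-supplied hook into the player
```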
- Each user device in Figure 11 is adapted to send reaction video data to reaction video server 1130, in accordance with the method outlined by Figure 7.
- Reaction video server 1130 is adapted to generate composite reaction videos using live stream video data received from streaming server 1110 and reaction video data received from one or more user devices.
- the method of the present invention allows for composite reaction videos to be generated using reaction video data from one or more user devices that are not in the same virtual conference room as each other.
- the system of Figure 12 is used to generate reaction videos in cases for which an event service is not necessarily available to provide event data corresponding to key moments in a live stream of video content. Accordingly, the system of Figure 12 does not include an event server, and events in the live stream of video content are instead identified by responses of subscribers to a live stream of reaction video data generated by a single "producer" who views the original live stream of video content.
- the system of Figure 12 includes streaming server 1210, reaction video server 1230, reaction streaming server 1240, producer device 1260 (operated by producer 1290) and viewer devices 1280, of which three viewer devices are shown.
- Viewer devices 1280 are associated with producer device 1260, where associating a viewer device with a producer device is initiated by the user of the viewer device subscribing to a social media channel associated with producer 1290.
- the social media channel associated with producer 1290 is hosted by reaction video server 1230.
- Producer 1290 views, using producer device 1260, a live stream of video content broadcasted by streaming server 1210.
- Producer device 1260 includes a camera, and while producer 1290 views the live stream of video content, producer device 1260 captures and records reaction video data.
- Producer device 1260 automatically sends the reaction video data to reaction streaming server 1240, and reaction streaming server 1240 broadcasts the reaction video data over network 1200 as a live stream of reaction video content.
- reaction streaming server 1240 broadcasts metadata associated with the reaction video data, the metadata including at least one timestamp indicating the time at which the reaction video data is first broadcasted.
- Each of the viewer devices 1280 receives the reaction video data from the reaction streaming server, buffers the reaction video data, and then renders the reaction video data for viewing.
- Each of the viewer devices 1280 calculates a temporal offset between broadcasting and rendering of the reaction video data, using an equivalent method to that described above with reference to Figure 5.
- each of the viewer devices 1280 may provide viewer responses to the live stream of reaction video content. Examples of viewer responses to a live stream of reaction video content include selecting "like" or “dislike” options, or submitting a comment.
- the viewer device calculates, using the calculated temporal offset, the time at which the specific moment of reaction video data was broadcasted, and sends a message to reaction video server 1230 indicating the type of viewer response, and further including a timestamp indicating the time at which the corresponding specific moment of reaction video content was first broadcasted.
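A sketch of how a viewer device might build such a message; the payload field names are assumptions:

```python
import json
import time

def build_response_message(response_type: str,
                           temporal_offset_ms: int,
                           comment: str | None = None) -> str:
    now_ms = int(time.time() * 1000)            # internal clock at response time
    broadcast_ms = now_ms - temporal_offset_ms  # when that moment was broadcast
    payload = {"type": response_type, "broadcast_utc_ms": broadcast_ms}
    if comment is not None:
        payload["comment"] = comment
    return json.dumps(payload)  # message body sent to reaction video server 1230
```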
- Reaction video server 1230 sends a signal to producer device 1260 and to each of the viewer devices 1280, causing producer device 1260 and each of the viewer devices 1280 to render a visual representation of the viewer response. For example, if the viewer response is a comment, then producer device 1260 and each of the viewer devices 1280 may render the comment in a comments box. If the viewer response is a "like", then producer device 1260 and each viewer device may render a corresponding icon, such as a "thumbs up" icon. In this way, producer 1290 and users of viewer devices 1280 are presented with a rolling feed of viewer responses.
- Reaction video server 1230 may receive a large number of viewer responses associated with times in a particular interval. For example, if producer 1290 reacts in an amusing fashion to an event in the live stream of video content received from streaming server 1210, the reaction video server may receive a large number of viewer responses with timestamps corresponding to times shortly after the event.
- In response to receiving a threshold number of viewer responses within a particular interval, reaction video server 1230 sends a signal to producer device 1260, causing producer device 1260 to save a copy of a portion of the reaction video data, the portion corresponding to a temporal window encompassing the interval during which the threshold number of viewer responses were received.
- the threshold number of viewer responses may depend on the total number of viewer devices.
- Similarly, in response to receiving a threshold number of viewer responses of a given type within a particular interval, reaction video server 1230 sends a signal to producer device 1260, causing producer device 1260 to save a copy of a portion of the reaction video data corresponding to a temporal window encompassing the interval during which the threshold number of viewer responses of that type were received.
- the temporal window is defined by a relative starting time and a relative ending time with respect to the start of the interval with which the threshold number of viewer responses is associated. The relative starting time and relative ending time may be different for different types of viewer response.
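An illustrative server-side check; the interval width and the scaling of the threshold with viewer count are assumptions, since the patent leaves both open:

```python
from collections import defaultdict

INTERVAL_MS = 10_000       # width of the rolling interval (assumed)
THRESHOLD_FRACTION = 0.25  # fraction of viewers that must respond (assumed)

def triggered_types(responses: list[dict], now_ms: int, viewer_count: int):
    """responses: [{"type": ..., "broadcast_utc_ms": ...}, ...]"""
    threshold = max(1, int(THRESHOLD_FRACTION * viewer_count))
    counts = defaultdict(int)
    for r in responses:
        if now_ms - INTERVAL_MS <= r["broadcast_utc_ms"] <= now_ms:
            counts[r["type"]] += 1
    # Each type whose count meets the threshold triggers a save signal to
    # the producer device for the corresponding temporal window.
    return [t for t, n in counts.items() if n >= threshold]
```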
- producer device 1260 sends the copied portions of reaction video data to reaction video server 1230, and the copied portions of reaction video data are then available to be viewed via the social media channel associated with producer 1290.
- producer device 1260 may generate composite reaction videos, as described above with reference to Figure 6, using the live stream data received from streaming server 1210 and the reaction video data copied and saved by producer device 1260 during the live streaming process.
- producer 1290 views a live stream of video content that is broadcast by streaming server 1210.
- producer 1290 may view other types of video presentation, which may not necessarily be broadcast as live streams.
- no streaming server is necessary.
- the producer device deletes, after a prescribed period of time, redundant reaction video data that has not been copied and saved.
- the reaction video data is copied and saved by the reaction streaming server instead of the producer device.
- no temporal offset is calculated, and the threshold number of viewer responses is measured with reference to the times at which the viewer responses are received by the reaction video server. This simplified embodiment is only suitable for cases in which the delay between broadcasting and rendering of reaction video data is guaranteed to be small.
- in some examples, the data processing system has different server configurations to those described in Figures 1, 9, 10, 11 or 12.
- the reaction video server and the event server are part of the same server.
- the functionality of one or more of the servers may be performed by multiple servers.
- multiple streaming servers or multiple reaction video servers may be used.
- Other server configurations are envisaged.
- the MQTT protocol is not used to send messages between the event server, the reaction video server, and user devices, and accordingly no MQTT broker is included in the data processing system.
- the WebSocket protocol is used to send messages between the event server, the reaction video server, and user devices.
- the routines executed by the data processing system are different from the routines described in Figures 5, 6, and 7.
- user devices used to generate reaction videos do not delete live stream video data or reaction video data after a prescribed period of time.
- the steps of copying reaction video data and copying live stream video data can be executed after the user has finished viewing the live stream. This method is advantageous in cases where the processing power of a user device is limited, or where the bandwidth available to a user device is limited.
- the user device being used to render live stream video data does not request live stream data from the streaming server in response to a user selecting a live stream of video content from a list provided by an application.
- a user device requests live stream data from the streaming server by accessing a website hosted by the streaming server.
- the user device connects to the reaction video server via a plugin on the website hosted by the streaming server.
- the streaming server uses adaptive streaming in which the segments of live stream video data do not correspond to the same duration of video content, and instead the duration is automatically adjusted during the streaming process to optimise the streaming process in response to fluctuating rates of data transfer.
- the metadata associated with each segment of video data may contain a timestamp indicating the broadcasting time of the segment, and the user device uses these timestamps to calculate the temporal offset between the broadcasting and rendering of live stream video data.
- Embodiments in which multiple user devices are grouped in a virtual conference room may use methods other than the method described with reference to Figure 11 to ensure that live stream video data is rendered synchronously between the user devices in the virtual conference room.
- Figure 13 shows an example of an alternative data processing system used to generate reaction videos in response to an event in a live stream of video data.
- a broadcaster system 1310 transmits a video stream to a select group of users 1340 whose reactions to the video stream are to be monitored via a reaction video system 1320.
- the reaction video system 1320 converts the video stream received from the broadcaster system 1310 to a real time protocol and multicasts the converted video stream to the select group of users, so that the viewing of the converted video stream by the select group of users 1340 is substantially synchronised.
- Reaction videos are supplied by each of the select group of users 1340 to the reaction video system 1320 using a real time protocol, and the reaction video system merges the reaction videos into a composite video and outputs the composite video to the broadcaster system 1310. This arrangement provides the composite reaction video to the broadcaster system 1310 with low latency.
- the broadcaster system 1310 includes a media stream generator 1312, which may generate a live stream, which in this embodiment is transmitted to the reaction video system 1320 using the HLS (HTTP Live Streaming) protocol.
- the broadcaster system 1310 also includes a reaction stream receiver 1314 which receives the composite reaction video stream from the reaction video system. It will be appreciated that this composite reaction video stream could then be broadcast by the broadcaster.
- the reaction video system 1320 implements a WebRTC (Web Real-Time Communication) node to allow real-time communication with the select group of users 1340, who are registered subscribers to the reaction video system 1320.
- the HLS-encoded video stream from the broadcaster system is processed by a video muxer/recoder 1322, which multiplexes the HLS video stream and recodes it in accordance with the RTP protocol.
- the RTP-encoded video data is then input to a Selective Forwarding Unit (SFU) 1324 in accordance with the WebRTC software, which uses a media RTP streamer to multicast the RTP-encoded video data to the selected group of users 1340.
- Each of the selected group of users 1340 has an associated user device 1342 which forwards the data to a monitor 1344 for display.
- Each user also has a camera 1346 which images the viewers of the monitor, and sends a reaction video stream, via the user device 1342, to a reaction RTP receiver 1330 of the SFU 1324 of the reaction video system 1320.
- the Reaction RTP receiver 1330 forwards the RTP encoded data for processing by an RTP demultiplexer and RTMP listener 1332.
- the demultiplexed reaction video streams are then processed by a video merger 1338 that merges the reaction video streams, together with the RTP-encoded data output by the video muxer/recoder 1322, to generate a composite video stream.
- the composite video stream has a plurality of panes, with the number of panes corresponding to one more than the number of reaction video streams.
- Each reaction video stream is shown in a corresponding pane, with the final pane showing the original RTP-encoded video stream from the video muxer/recoder 1322.
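The patent fixes only the pane count, not the arrangement; one simple near-square layout would be:

```python
import math

def pane_grid(n_reaction_streams: int) -> tuple[int, int]:
    # One pane per reaction stream plus a final pane for the original stream.
    panes = n_reaction_streams + 1
    cols = math.ceil(math.sqrt(panes))
    rows = math.ceil(panes / cols)
    return rows, cols  # e.g. three reaction streams -> (2, 2)
```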
- a delay compensator 1326 can introduce a small delay to the RTP-encoded data from the video muxer/recoder 1322 before input to the video merger 1338 to improve synchronism with the reaction video streams.
- the RTP demultiplexer and RTMP listener 1332 converts incoming RTP streams from SFU 1324 to RTMP to which it actively listens. In this way, the RTP demultiplexer and RTMP listener 1332 waits for incoming data rather than polling for data.
- the RTP demultiplexer and RTMP listener 1332 also outputs data from the reaction video streams to a UDP traffic monitor 1336, which detects any interruptions in the reaction video stream.
- If the UDP traffic monitor 1336 detects that the reaction video stream from a viewer device has been interrupted, or has degraded below an acceptable level, then the UDP traffic monitor 1336 sends a control signal to the video merger 1338 that may either remove the pane for that reaction video stream or replace that reaction video stream with a default video stream (e.g. a static image indicating that there is a temporary interruption).
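A minimal sketch of such an interruption test; the timeout value and the notification callback into the video merger are assumptions:

```python
import time

TIMEOUT_S = 2.0  # assumed: silence longer than this counts as an interruption

class UdpTrafficMonitor:
    def __init__(self):
        self.last_packet: dict[str, float] = {}

    def on_packet(self, stream_id: str) -> None:
        # Called whenever reaction-stream traffic is observed for stream_id.
        self.last_packet[stream_id] = time.monotonic()

    def check(self, notify_merger) -> None:
        # Periodically called; notify_merger is a hypothetical callback into
        # the video merger (remove pane / substitute the default stream).
        now = time.monotonic()
        for stream_id, seen in self.last_packet.items():
            if now - seen > TIMEOUT_S:
                notify_merger(stream_id, interrupted=True)
```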
- the composite video stream output by the video merger 1338 is then processed by encoder/multiplexer/segmenter 1339 to generate an HLS video stream that is output to the reaction stream receiver 1314 of the broadcaster system.
- the CPUs in Figures 2, 3, and 4 are typical examples of processor circuitry which, when programmed with suitable instructions, are operable to perform routines in accordance with the present invention. However, it will be appreciated that other examples of processor circuitry could be used for some or all of the steps described. For example, one or more graphics processing units (GPUs) may be used for the rendering and recording operations performed by the user device in Figure 6. Furthermore, the processor circuitry used to perform the routines may include multiple CPUs located within multiple devices, and each CPU may have multiple processor cores. Methods described herein may be implemented by way of computer program code that is storable in a memory.
- Examples of memory suitable for storing computer program code are the memory components described with reference to Figures 2, 3, and 4, but it will be appreciated that the memory may be any non-transitory computer-readable media able to contain, store, or maintain programs and data for use by or in connection with an instruction execution system. Such media may be any physical media such as, for example, electronic, magnetic, optical, electromagnetic, or semiconductor media. More specific examples of suitable media include, but are not limited to, a hard drive, RAM, ROM, erasable programmable read-only memory, or a portable disc. Elements of the memory used to store program code may be volatile or non-volatile and may include additional functionality, for example to minimise latency.
Abstract
A computer-implemented method for generating a reaction video stream illustrating a reaction to a stream of video content. The method comprises a server that receives a content video stream, converts the content video stream to a real-time protocol to generate a converted video stream, and multicasts the converted video stream to one or more viewing devices. The server receives, from each of the viewing devices, a viewing video stream encoded with a real-time protocol, and processes the converted video stream and each viewing video stream to generate a reaction video stream comprising a plurality of panes shown in parallel, one pane of the plurality of panes showing a sequence of images corresponding to the converted video stream and each other pane showing a sequence of images corresponding to a respective viewing video stream.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title
---|---|---|---
GB1709162.0A GB2563267A (en) | 2017-06-08 | 2017-06-08 | Methods and systems for generating a reaction video
GB1709162.0 | 2017-06-08 | |
Publications (2)
Publication Number | Publication Date |
---|---|
WO2018224839A2 true WO2018224839A2 (fr) | 2018-12-13 |
WO2018224839A3 WO2018224839A3 (fr) | 2019-01-10 |
Family
ID=59358226
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2018/051564 WO2018224839A2 (fr) | 2017-06-08 | 2018-06-08 | Methods and systems for generating a reaction video |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2563267A (fr) |
WO (1) | WO2018224839A2 (fr) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2022026100A1 (fr) * | 2020-07-28 | 2022-02-03 | Arris Enterprises Llc | System and method for recording a viewer's reaction to video content |
WO2022133346A1 (fr) * | 2020-12-18 | 2022-06-23 | Fasetto, Inc. | Systems and methods for simultaneous multiple-viewpoint video |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP3869813B1 (fr) | 2020-02-24 | 2022-03-30 | Axis AB | Streaming of a live video stream |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN100583880C (zh) * | 2003-02-26 | 2010-01-20 | NXP Co., Ltd. | System for broadcasting multimedia content |
US8161504B2 (en) * | 2009-03-20 | 2012-04-17 | Nicholas Newell | Systems and methods for memorializing a viewer's viewing experience with captured viewer images |
US20130305158A1 (en) * | 2012-05-11 | 2013-11-14 | Samsung Electronics Co. Ltd. | Network system with reaction mechanism and method of operation thereof |
US20130304820A1 (en) * | 2012-05-11 | 2013-11-14 | Samsung Electronics Co., Ltd. | Network system with interaction mechanism and method of operation thereof |
US20140096167A1 (en) * | 2012-09-28 | 2014-04-03 | Vringo Labs, Inc. | Video reaction group messaging with group viewing |
WO2015120413A1 (fr) * | 2014-02-07 | 2015-08-13 | Fanpics, Llc | Systèmes et procédés d'imagerie en temps réel destinés à capturer des images instantanées d'utilisateurs regardant un événement dans un environnement résidentiel ou local |
US9967618B2 (en) * | 2015-06-12 | 2018-05-08 | Verizon Patent And Licensing Inc. | Capturing a user reaction to media content based on a trigger signal and using the user reaction to determine an interest level associated with a segment of the media content |
- 2017
  - 2017-06-08 GB GB1709162.0A patent/GB2563267A/en not_active Withdrawn
- 2018
  - 2018-06-08 WO PCT/GB2018/051564 patent/WO2018224839A2/fr active Application Filing
Also Published As
Publication number | Publication date |
---|---|
GB201709162D0 (en) | 2017-07-26 |
WO2018224839A3 (fr) | 2019-01-10 |
GB2563267A (en) | 2018-12-12 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18746002; Country of ref document: EP; Kind code of ref document: A2
 | NENP | Non-entry into the national phase | Ref country code: DE
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 18746002; Country of ref document: EP; Kind code of ref document: A2