
CN103546662A - A method for synchronizing audio and video in a network monitoring system - Google Patents

A method for synchronizing audio and video in a network monitoring system

Info

Publication number
CN103546662A
CN103546662A CN201310437082.1A
Authority
CN
China
Prior art keywords
video
data
audio
buffer
rtp
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201310437082.1A
Other languages
Chinese (zh)
Inventor
孟利民
蒋维
周凯
司徒涨勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang University of Technology ZJUT
Original Assignee
Zhejiang University of Technology ZJUT
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang University of Technology ZJUT filed Critical Zhejiang University of Technology ZJUT
Priority to CN201310437082.1A
Publication of CN103546662A
Legal status: Pending


Landscapes

  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

A method for synchronizing audio and video in a network monitoring system comprises the following steps: (1) video is compressed by the hardware encoder built into the TW2835 chip to produce H.264 data, while audio is compressed in software into the G.729 format; the resulting streams are handed to the RTP library for encapsulation and transmission; (2) after the IP and UDP headers are stripped from each packet received from the network, the payload type in the RTP header determines whether the packet goes into the audio or the video buffer, and the payload is then inserted at the correct position in that buffer according to the RTP sequence number. Once network reception starts, both buffers pre-store an appropriate amount of packets; when the pre-store areas of the audio and video streams are full, playback of both starts simultaneously. Synchronization uses the audio stream as the time master and is achieved by adjusting the video stream. The invention is simple to implement and achieves high synchronization accuracy.

Figure 201310437082

Description

Method for synchronizing audio and video in a network monitoring system
Technical field
The present invention relates to the field of network monitoring, and in particular to a method for synchronizing audio and video in a network monitoring system.
Background art
With the advance of video surveillance technology and its wide deployment in many fields, audio surveillance is receiving growing attention. Whether for public security organs or for key facilities such as airports, railways and banks, an increasing number of high-quality security projects urgently require monitoring systems with clear, lifelike synchronized audio and video, and audio surveillance has become a new focus of the security industry. Adding audio lets surveillance leave the "silent movie" era of video-only monitoring and supports a more complete grasp, accurate assessment and handling of incidents. Applying audio to monitoring fills a large gap in the security field and has been a major direction of network monitoring in recent years. However, the audio and video objects within multimedia data have a strict temporal relationship, and network transmission destroys this original time-domain relationship, so that sound and image can no longer be played back in synchrony. Research on and implementation of audio-video synchronization in network monitoring systems is therefore particularly important, yet existing synchronization techniques suffer from problems such as complex implementation and limited synchronization accuracy.
Summary of the invention
To overcome the complex implementation and limited synchronization accuracy of existing audio-video synchronization in network monitoring systems, the present invention provides a synchronization method for a network monitoring system that is simple to implement and offers higher synchronization accuracy.
The technical solution adopted by the present invention to solve this technical problem is as follows:
A method for synchronizing audio and video in a network monitoring system, the method comprising the following steps:
(1) Video is compressed by the hardware encoder built into the TW2835 chip to produce H.264 data, while audio is compressed in software into the G.729 format; the resulting streams are then handed to the RTP library for encapsulation and transmission.
The RTP header contains a sequence number and a timestamp; during transmission the sequence number of each outgoing RTP packet increases by one, and the timestamp identifies the capture instant of the audio or video data.
(2) After the IP and UDP headers are stripped from each packet received from the network, the payload type in the RTP header first determines whether the packet is placed in the audio or the video buffer; the payload is then inserted at the correct position in that buffer according to the RTP sequence number.
Once network reception starts, both the audio and the video buffer pre-store an appropriate amount of packets; when the pre-store areas of both streams are full, playback of both starts simultaneously. Synchronization uses the audio stream as the time master and is achieved by adjusting the video stream.
Further, in step (2) the audio timestamp is used as the relative reference time: after playback starts, audio data is taken from the buffer at a constant rate and fed to the decoder, and the timestamp A_T of the first block of data in the audio buffer is recorded; A_T is then compared with the timestamp V_T of the first block of data in the video buffer, and the difference A_T - V_T determines the push rate of the video data and the playback rate of the video, specifically:
2.1) When -100 ms ≤ A_T - V_T ≤ 100 ms, the audio and video buffers push data at the normal rate and the playback rate remains unchanged;
2.2) When 100 ms ≤ |A_T - V_T| ≤ 160 ms, a synchronization adjustment is needed:
① If 100 ms ≤ A_T - V_T ≤ 160 ms, i.e. the audio leads the video, the data in the video buffer is pushed faster and the video playback rate is increased, so that the audio and video timestamps converge;
② If -160 ms ≤ A_T - V_T ≤ -100 ms, i.e. the audio lags the video, the push rate of the video buffer is slowed down and the video playback rate is reduced, so that the audio and video timestamps converge;
2.3) When |A_T - V_T| ≥ 160 ms, resynchronization is required:
① If A_T - V_T ≥ 160 ms, i.e. the audio is seriously ahead of the video, the oldest video packets in the video buffer are discarded until A_T = V_T, and playback then resumes at the normal rate;
② If A_T - V_T ≤ -160 ms, i.e. the audio seriously lags the video, the oldest audio packets in the audio buffer are discarded until A_T = V_T, and playback then resumes at the normal rate.
Still further, the buffer is a data linked list composed of data nodes. Each media stream has two kinds of data node: an idle node, FreeDatanode, and an in-use node, BusyDatanode. When a new RTP packet is received, a FreeDatanode is requested and used as a BusyDatanode; the media payload of the RTP packet and the packet's sequence number are written into it, and the BusyDatanode is inserted at the correct position in the buffer according to that sequence number, so as to restore the original time relationship of the media data in the buffer. After the data in a BusyDatanode has been fed to the decoder and played, the BusyDatanode becomes a FreeDatanode again. When all FreeDatanodes are in use, i.e. the buffer is full, the data in the oldest BusyDatanode is deleted and that node automatically turns back into a FreeDatanode.
The technical concept of the present invention is as follows. A complete network monitoring system covers the capture and compression of audio and video data, packetization and transmission, network transport, network reception, and real-time synchronized playback. According to where these functions run, the system can be divided into three parts: the device end (capture, compression, packetization and transmission), the network server (network transport), and the receiving end (network reception and synchronization), as shown in Fig. 1. The basic procedure is as follows: the receiving end sends a control command instructing the server to forward the audio and video data of the device end; on receiving the instruction, the transmitting end captures audio and video data through its capture chips, compresses them separately (H.264 for video, G.729 for audio), packetizes them into RTP packets according to the RTP protocol and sends them to the network server; the network server forwards the data to the receiving end, where dynamic buffering, DirectShow and related techniques are used to achieve real-time synchronized playback of the audio and video from the monitored site.
The beneficial effects of the present invention are mainly as follows: 1) audio and video are compressed with G.729 and H.264 respectively, two widely used high-compression-ratio algorithms, which saves bandwidth and improves the transmission efficiency of the network; 2) the RTP transport protocol gives a strong live effect, so the remote scene can be reproduced in real time over the network; 3) the synchronization adjustment time is short and the synchronization accuracy is high, with a maximum out-of-sync interval of 160 ms; 4) the implementation complexity of the synchronization is low, which greatly simplifies the audio-video synchronization algorithm and improves efficiency.
Brief description of the drawings
Fig. 1 is the architecture diagram of the network monitoring system.
Fig. 2 is the schematic diagram of the audio and video data encapsulation process.
Fig. 3 is the schematic diagram of the dynamic buffer.
Fig. 4 is the schematic diagram of the audio playback thread.
Fig. 5 is the schematic diagram of the video playback thread.
Embodiment
The invention is further described below with reference to the accompanying drawings.
With reference to Figs. 1 to 5, a method for synchronizing audio and video in a network monitoring system comprises the following steps:
(1) Video is compressed by the hardware encoder built into the TW2835 chip to produce H.264 data, while audio is compressed in software into the G.729 format; the resulting streams are then handed to the RTP library for encapsulation and transmission.
The RTP header contains a sequence number and a timestamp; during transmission the sequence number of each outgoing RTP packet increases by one, and the timestamp identifies the capture instant of the audio or video data.
(2) After the IP and UDP headers are stripped from each packet received from the network, the payload type in the RTP header first determines whether the packet is placed in the audio or the video buffer; the payload is then inserted at the correct position in that buffer according to the RTP sequence number.
Once network reception starts, both the audio and the video buffer pre-store an appropriate amount of packets; when the pre-store areas of both streams are full, playback of both starts simultaneously. Synchronization uses the audio stream as the time master and is achieved by adjusting the video stream.
At the monitored site, the audio and video data of the device end are captured by the WM8731 chip and the TW2835 chip, respectively. The video is then compressed by the hardware encoder built into the TW2835 chip to produce H.264 data, while the audio is compressed in software into the G.729 format. The resulting streams are handed to the RTP library for encapsulation and transmission; the encapsulation process is shown in Fig. 2.
In the synchronization procedure, two fields of the RTP header are particularly important: the sequence number and the timestamp. During transmission the sequence number of each outgoing RTP packet increases by one, which allows the receiving end to sort the packets and restore their original temporal order, thus overcoming the reordering caused by network congestion, server delay and similar effects. The timestamp is even more important: it identifies the capture instant of the audio or video data and is the most important quantity in synchronization control. All of this information is written while the audio and video data are being encapsulated into RTP packets.
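For concreteness, the sequence number, timestamp and payload type discussed above sit in the fixed 12-byte RTP header defined in RFC 3550. The following C sketch is purely illustrative (it is not part of the patent) and extracts the three fields the receiver needs, ignoring CSRC lists and header extensions:

    #include <stddef.h>
    #include <stdint.h>

    /* Fields of the fixed RTP header (RFC 3550) used by the receiving end. */
    typedef struct {
        uint8_t  payload_type;  /* PT: selects the audio or the video buffer */
        uint16_t seq;           /* sequence number: restores packet order    */
        uint32_t timestamp;     /* capture instant of the audio/video data   */
    } RtpInfo;

    /* Parse the 12-byte fixed header; returns 0 on success, -1 if invalid.
     * CSRC entries and header extensions are ignored in this sketch. */
    static int rtp_parse(const uint8_t *pkt, size_t len, RtpInfo *out)
    {
        if (len < 12 || (pkt[0] >> 6) != 2)          /* RTP version must be 2 */
            return -1;
        out->payload_type = pkt[1] & 0x7F;
        out->seq          = (uint16_t)((pkt[2] << 8) | pkt[3]);
        out->timestamp    = ((uint32_t)pkt[4] << 24) | ((uint32_t)pkt[5] << 16)
                          | ((uint32_t)pkt[6] << 8)  |  (uint32_t)pkt[7];
        return 0;
    }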
In the synchronization procedure, the main role of the network server is to relay signaling and data.
Under the influence of network delay, packet loss and other factors, RTP packets may arrive at the receiving end out of order; a video packet may even arrive while the corresponding audio packet is still in the network. For this reason, the receiving end maintains a dynamic buffer for audio and another for video, as shown in Fig. 3. After the IP and UDP headers are stripped from a packet received from the network, the payload type (PT value) in the RTP header first determines whether the packet goes into the audio or the video buffer, and the payload is then inserted at the correct position in that buffer according to the RTP sequence number. In the actual design the buffer is a data linked list consisting of a number of nodes, implemented as follows: 1) each media stream has two kinds of data node, an idle FreeDatanode and an in-use BusyDatanode; 2) when a new RTP packet is received, a FreeDatanode is requested and used as a BusyDatanode; the media payload of the RTP packet and its sequence number are written into it, and the BusyDatanode is inserted at the correct position in the buffer according to that sequence number so as to restore the original time relationship of the media data; 3) after the data in a BusyDatanode has been fed to the decoder and played, the BusyDatanode becomes a FreeDatanode again; when all FreeDatanodes are in use, i.e. the buffer is full, the data in the oldest BusyDatanode is deleted and that node automatically turns back into a FreeDatanode. Fig. 3 also shows the concrete structure of a Datanode: besides the data field it carries four tags, Len, Key, SequNum and Timestamp, which respectively give the length of the data segment, whether the data belongs to a video key frame, the sequence number of the packet, and the timestamp.
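As a minimal sketch of the linked-list buffer just described, assuming the node tags Len, Key, SequNum and Timestamp shown in Fig. 3 and using invented names (Datanode, MediaBuffer, buffer_insert), the sequence-ordered insertion and the reuse of the oldest node when the buffer is full might look like the following; it is an illustration, not the patented implementation:

    #include <stdint.h>
    #include <string.h>

    /* Illustrative data node; fields mirror Len, Key, SequNum, Timestamp of Fig. 3. */
    typedef struct Datanode {
        struct Datanode *next;
        uint32_t len;            /* length of the payload segment        */
        int      key;            /* nonzero if part of a video key frame */
        uint16_t seq;            /* RTP sequence number                  */
        uint32_t timestamp;      /* RTP timestamp (capture instant)      */
        uint8_t  data[1400];     /* payload copied out of the RTP packet */
    } Datanode;

    typedef struct {
        Datanode *busy_head;     /* BusyDatanode list, ordered by sequence number */
        Datanode *free_head;     /* FreeDatanode pool                             */
    } MediaBuffer;

    /* Take a FreeDatanode, fill it from an RTP payload and insert it in sequence
     * order; if no free node is left, the oldest BusyDatanode is reused. */
    static int buffer_insert(MediaBuffer *b, const uint8_t *payload, uint32_t len,
                             uint16_t seq, uint32_t ts, int key)
    {
        Datanode *n = b->free_head;
        if (n) {
            b->free_head = n->next;
        } else {                         /* buffer full: drop the oldest data */
            n = b->busy_head;
            if (!n) return -1;
            b->busy_head = n->next;
        }
        n->len = len; n->key = key; n->seq = seq; n->timestamp = ts;
        memcpy(n->data, payload, len < sizeof n->data ? len : sizeof n->data);

        /* Ordered insertion restores the original time relationship. */
        Datanode **p = &b->busy_head;
        while (*p && (int16_t)((*p)->seq - seq) < 0)
            p = &(*p)->next;
        n->next = *p;
        *p = n;
        return 0;
    }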
The design of the dynamic buffer is the key step in achieving synchronization at the receiving end: it not only restores the normal playback order within each media stream but also governs the synchronization between the media. Once network reception starts, both the audio and the video buffer pre-store an appropriate amount of packets. The length of the pre-store area must be large enough to absorb jitter in the media stream, yet short enough to meet the real-time requirement and avoid an overly long waiting delay, so it is generally kept within 500 ms. When the pre-store areas of both streams are full, playback of both starts simultaneously. For synchronized playback a suitable reference stream must be chosen. Human hearing is more sensitive than vision: pauses or speed changes in sound played at a fixed rate are hard to tolerate, and the audio stream also consumes far less bandwidth than the video stream. Synchronization therefore uses the audio stream as the time master and is achieved by adjusting the video stream. Here the audio timestamp serves as the relative reference time. After playback starts, audio data is taken from the buffer at a constant rate and fed to the decoder, and the timestamp A_T of the first block of data in the audio buffer is recorded. A_T is then compared with the timestamp V_T of the first block of data in the video buffer, and the difference A_T - V_T determines the push rate of the video data and the playback rate of the video, as follows (see the sketch after this list):
1) When -100 ms ≤ A_T - V_T ≤ 100 ms, viewers cannot perceive any asynchrony between audio and video; this is the in-sync region. In this case the audio and video buffers push data at the normal rate and the playback rate remains unchanged.
2) When 100 ms ≤ |A_T - V_T| ≤ 160 ms, the streams are in the critical region and a synchronization adjustment is needed.
① If 100 ms ≤ A_T - V_T ≤ 160 ms, i.e. the audio leads the video, the data in the video buffer is pushed faster and the video playback rate is increased, so that the audio and video timestamps converge.
② If -160 ms ≤ A_T - V_T ≤ -100 ms, i.e. the audio lags the video, the push rate of the video buffer is slowed down and the video playback rate is reduced, so that the audio and video timestamps converge.
3) When |A_T - V_T| ≥ 160 ms, viewers clearly perceive that audio and video are out of sync; this is the out-of-sync region and resynchronization is required.
① If A_T - V_T ≥ 160 ms, i.e. the audio is seriously ahead of the video, the oldest video packets in the video buffer are discarded until A_T = V_T, and playback then resumes at the normal rate.
② If A_T - V_T ≤ -160 ms, i.e. the audio seriously lags the video, the oldest audio packets in the audio buffer are discarded until A_T = V_T, and playback then resumes at the normal rate.
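The three regions above reduce to a small decision routine. The C sketch below is only an illustration (the SyncAction enum and the sync_action name are invented) and maps the millisecond difference A_T - V_T onto the actions of cases 1) to 3):

    /* diff_ms = A_T - V_T in milliseconds. Thresholds follow cases 1) to 3). */
    typedef enum {
        SYNC_NORMAL,          /* |A_T - V_T| <= 100 ms: push and play normally     */
        SYNC_SPEED_UP_VIDEO,  /* audio leads: push and play video faster           */
        SYNC_SLOW_DOWN_VIDEO, /* audio lags: push and play video slower            */
        SYNC_DROP_VIDEO,      /* audio seriously ahead: drop oldest video packets  */
        SYNC_DROP_AUDIO       /* audio seriously behind: drop oldest audio packets */
    } SyncAction;

    static SyncAction sync_action(int diff_ms)
    {
        if (diff_ms >= 160)  return SYNC_DROP_VIDEO;   /* resync until A_T = V_T */
        if (diff_ms <= -160) return SYNC_DROP_AUDIO;   /* resync until A_T = V_T */
        if (diff_ms > 100)   return SYNC_SPEED_UP_VIDEO;
        if (diff_ms < -100)  return SYNC_SLOW_DOWN_VIDEO;
        return SYNC_NORMAL;
    }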
During operation this synchronization control scheme maintains the time relationship between the audio and the video stream well: even after long periods of continuous playback the audio neither leads nor lags, and neither stream exhibits discontinuous playback.
The audio playback thread is shown in Fig. 4. Once the audio buffer reaches its pre-store length, the audio playback flag is set to true and the program starts reading data from the buffer. Since the transmitting end of this system samples audio at 8 kHz with 16-bit quantization, audio data is read at a rate of 16000 bytes per second; the timestamp of each block of audio data is assigned to the variable A_T, which is compared against the video timestamp before the block is fed into the decoder for playback.
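The 16000 bytes-per-second read rate follows from the capture parameters: 8000 samples/s × 2 bytes per 16-bit sample for a single channel. A minimal pacing calculation, assuming a 20 ms timer tick (the tick length is not specified in the patent), is sketched below:

    /* Illustrative pacing arithmetic for the audio playback thread. */
    enum {
        SAMPLE_RATE_HZ   = 8000,
        BYTES_PER_SAMPLE = 2,                                 /* 16-bit quantization  */
        BYTES_PER_SEC    = SAMPLE_RATE_HZ * BYTES_PER_SAMPLE, /* = 16000 bytes/s      */
        TICK_MS          = 20,                                /* assumed timer period */
        BYTES_PER_TICK   = BYTES_PER_SEC * TICK_MS / 1000     /* = 320 bytes per tick */
    };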
The flow of the video playback thread is similar to that of the audio playback thread. The difference is that audio playback only needs adjustment during resynchronization, whereas the adjustment of video playback runs through the whole synchronized playback process: whenever the time difference between the audio and the video stream changes, the playback rate of the video stream is adjusted to restore audio-video synchronization.
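One possible way to realize this continuous adjustment, assuming a nominal 25 frames/s and a 10 % speed step (neither figure is given in the patent), is to scale the interval at which video frames are pushed to the decoder according to the current difference A_T - V_T:

    /* Illustrative adjustment of the video push interval (nominal 25 fps = 40 ms). */
    static int video_push_interval_ms(int diff_ms)
    {
        const int nominal_ms = 40;
        if (diff_ms >= 160 || diff_ms <= -160)
            return nominal_ms;            /* resynchronization is handled by dropping packets */
        if (diff_ms > 100)
            return nominal_ms * 9 / 10;   /* audio leads: push and play video faster */
        if (diff_ms < -100)
            return nominal_ms * 11 / 10;  /* audio lags: push and play video slower  */
        return nominal_ms;                /* within +/-100 ms: keep the normal rate  */
    }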

Claims (3)

1. A method for synchronizing audio and video in a network monitoring system, characterized in that the method comprises the following steps:
(1) Video is compressed by the hardware encoder built into the TW2835 chip to produce H.264 data, while audio is compressed in software into the G.729 format; the resulting streams are then handed to the RTP library for encapsulation and transmission.
The RTP header contains a sequence number and a timestamp; during transmission the sequence number of each outgoing RTP packet increases by one, and the timestamp identifies the capture instant of the audio or video data.
(2) After the IP and UDP headers are stripped from each packet received from the network, the payload type in the RTP header first determines whether the packet is placed in the audio or the video buffer; the payload is then inserted at the correct position in that buffer according to the RTP sequence number.
Once network reception starts, both the audio and the video buffer pre-store an appropriate amount of packets; when the pre-store areas of both streams are full, playback of both starts simultaneously. Synchronization uses the audio stream as the time master and is achieved by adjusting the video stream.

2. The method for synchronizing audio and video in a network monitoring system according to claim 1, characterized in that in step (2) the audio timestamp is used as the relative reference time: after playback starts, audio data is taken from the buffer at a constant rate and fed to the decoder, and the timestamp A_T of the first block of data in the audio buffer is recorded; A_T is then compared with the timestamp V_T of the first block of data in the video buffer, and the difference A_T - V_T determines the push rate of the video data and the playback rate of the video, specifically:
2.1) When -100 ms ≤ A_T - V_T ≤ 100 ms, the audio and video buffers push data at the normal rate and the playback rate remains unchanged;
2.2) When 100 ms ≤ |A_T - V_T| ≤ 160 ms, a synchronization adjustment is needed:
① If 100 ms ≤ A_T - V_T ≤ 160 ms, i.e. the audio leads the video, the data in the video buffer is pushed faster and the video playback rate is increased, so that the audio and video timestamps converge;
② If -160 ms ≤ A_T - V_T ≤ -100 ms, i.e. the audio lags the video, the push rate of the video buffer is slowed down and the video playback rate is reduced, so that the audio and video timestamps converge;
2.3) When |A_T - V_T| ≥ 160 ms, resynchronization is required:
① If A_T - V_T ≥ 160 ms, i.e. the audio is seriously ahead of the video, the oldest video packets in the video buffer are discarded until A_T = V_T, and playback then resumes at the normal rate;
② If A_T - V_T ≤ -160 ms, i.e. the audio seriously lags the video, the oldest audio packets in the audio buffer are discarded until A_T = V_T, and playback then resumes at the normal rate.

3. The method for synchronizing audio and video in a network monitoring system according to claim 1 or 2, characterized in that the buffer is a data linked list composed of data nodes, each media stream having two kinds of data node: an idle node, FreeDatanode, and an in-use node, BusyDatanode. When a new RTP packet is received, a FreeDatanode is requested and used as a BusyDatanode; the media payload of the RTP packet and the packet's sequence number are written into it, and the BusyDatanode is inserted at the correct position in the buffer according to that sequence number, so as to restore the original time relationship of the media data in the buffer. After the data in a BusyDatanode has been fed to the decoder and played, the BusyDatanode becomes a FreeDatanode again. When all FreeDatanodes are in use, i.e. the buffer is full, the data in the oldest BusyDatanode is deleted and that node automatically turns back into a FreeDatanode.
CN201310437082.1A 2013-09-23 2013-09-23 A method for synchronizing audio and video in a network monitoring system Pending CN103546662A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201310437082.1A CN103546662A (en) 2013-09-23 2013-09-23 A method for synchronizing audio and video in a network monitoring system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201310437082.1A CN103546662A (en) 2013-09-23 2013-09-23 A method for synchronizing audio and video in a network monitoring system

Publications (1)

Publication Number Publication Date
CN103546662A true CN103546662A (en) 2014-01-29

Family

ID=49969689

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201310437082.1A Pending CN103546662A (en) 2013-09-23 2013-09-23 A method for synchronizing audio and video in a network monitoring system

Country Status (1)

Country Link
CN (1) CN103546662A (en)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104581422A (en) * 2015-02-05 2015-04-29 成都金本华科技股份有限公司 Method and device for processing network data transmission
CN104597456A (en) * 2015-02-27 2015-05-06 南通航大电子科技有限公司 Multi-board-card synchronous control method of GNSS signal simulation system
CN104618786A (en) * 2014-12-22 2015-05-13 深圳市腾讯计算机系统有限公司 Audio/video synchronization method and device
CN104869461A (en) * 2015-05-22 2015-08-26 南京创维信息技术研究院有限公司 Video data processing system and method
CN105228028A (en) * 2015-09-18 2016-01-06 南京大学镇江高新技术研究院 A kind of video stream media Data dissemination based on udp broadcast and pre-cache method
CN105245976A (en) * 2015-09-30 2016-01-13 合一网络技术(北京)有限公司 Method and system for synchronously playing audio and video
CN105744334A (en) * 2016-02-18 2016-07-06 海信集团有限公司 Method and equipment for audio and video synchronization and synchronous playing
CN106162293A (en) * 2015-04-22 2016-11-23 无锡天脉聚源传媒科技有限公司 A kind of video sound and the method and device of image synchronization
CN108200481A (en) * 2017-12-07 2018-06-22 北京佳讯飞鸿电气股份有限公司 A kind of RTP-PS method for stream processing, device, equipment and storage medium
CN108282685A (en) * 2018-01-04 2018-07-13 华南师范大学 A kind of method and monitoring system of audio-visual synchronization
CN108337230A (en) * 2017-12-26 2018-07-27 武汉烽火众智数字技术有限责任公司 A kind of real-time retransmission method of audio and video based on smart mobile phone and system
CN108599774A (en) * 2018-04-26 2018-09-28 郑州云海信息技术有限公司 a kind of compression method, system, device and computer readable storage medium
CN108616767A (en) * 2018-04-28 2018-10-02 青岛海信电器股份有限公司 A kind of audio data transmission method and device
WO2019153960A1 (en) * 2018-02-11 2019-08-15 Zhejiang Dahua Technology Co., Ltd. Systems and methods for synchronizing audio and video
CN111988674A (en) * 2020-08-18 2020-11-24 广州极飞科技有限公司 Multimedia data transmission method, device, equipment and storage medium
CN112511885A (en) * 2020-11-20 2021-03-16 深圳乐播科技有限公司 Audio and video synchronization method and device and storage medium
CN112511886A (en) * 2020-11-25 2021-03-16 杭州当虹科技股份有限公司 Audio and video synchronous playing method based on audio expansion and contraction
CN113099310A (en) * 2021-04-08 2021-07-09 李蕊男 Real-time media internal video and audio coordination method based on android platform
CN114285513A (en) * 2021-11-22 2022-04-05 杭州当虹科技股份有限公司 Time delay device and method supporting lossless long-time delay of IP signals
CN115297337A (en) * 2022-08-05 2022-11-04 深圳市野草声学有限公司 Audio transmission method and system during video live broadcasting based on data receiving and transmitting cache

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101902649A (en) * 2010-07-15 2010-12-01 浙江工业大学 A method of audio and video synchronization control based on H.264 standard
CN102547482A (en) * 2011-12-30 2012-07-04 北京锐安科技有限公司 Synchronous playing method of multi-path IP (Internet Protocol) audio-video stream
CN103137191A (en) * 2011-11-28 2013-06-05 国际商业机器公司 Programming of phase-change memory cells
CN203137191U (en) * 2013-03-27 2013-08-21 谢静琦 Anti-trip vibration prompting shoe

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101902649A (en) * 2010-07-15 2010-12-01 浙江工业大学 A method of audio and video synchronization control based on H.264 standard
CN103137191A (en) * 2011-11-28 2013-06-05 国际商业机器公司 Programming of phase-change memory cells
CN102547482A (en) * 2011-12-30 2012-07-04 北京锐安科技有限公司 Synchronous playing method of multi-path IP (Internet Protocol) audio-video stream
CN203137191U (en) * 2013-03-27 2013-08-21 谢静琦 Anti-trip vibration prompting shoe

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
司徒涨勇, 孟利民, 黄成君: "Research and Implementation of Multimedia Synchronization Control in a Network Monitoring System", 《电声技术》 *
方立华, 骆似骏: "A Method for Synchronizing Audio and Video in a Network Monitoring System", 《电声技术》 *
方立华: "Research and Design of Real-Time Audio and Video Stream Synchronization Technology in a Network Monitoring System", 《中国优秀硕士论文全文数据库》 *

Cited By (31)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104618786A (en) * 2014-12-22 2015-05-13 深圳市腾讯计算机系统有限公司 Audio/video synchronization method and device
CN104618786B (en) * 2014-12-22 2018-01-05 深圳市腾讯计算机系统有限公司 Audio and video synchronization method and device
CN104581422A (en) * 2015-02-05 2015-04-29 成都金本华科技股份有限公司 Method and device for processing network data transmission
CN104581422B (en) * 2015-02-05 2017-09-15 成都金本华科技股份有限公司 A kind of method and apparatus transmitted for network data
CN104597456A (en) * 2015-02-27 2015-05-06 南通航大电子科技有限公司 Multi-board-card synchronous control method of GNSS signal simulation system
CN104597456B (en) * 2015-02-27 2018-03-30 南通航大电子科技有限公司 A kind of more board synchronisation control means of GNSS signal analogue system
CN106162293A (en) * 2015-04-22 2016-11-23 无锡天脉聚源传媒科技有限公司 A kind of video sound and the method and device of image synchronization
CN106162293B (en) * 2015-04-22 2019-11-08 无锡天脉聚源传媒科技有限公司 A kind of method and device of video sound and image synchronization
CN104869461A (en) * 2015-05-22 2015-08-26 南京创维信息技术研究院有限公司 Video data processing system and method
CN105228028B (en) * 2015-09-18 2018-05-11 南京大学镇江高新技术研究院 A kind of video stream media data distribution based on udp broadcast and pre-cache method
CN105228028A (en) * 2015-09-18 2016-01-06 南京大学镇江高新技术研究院 A kind of video stream media Data dissemination based on udp broadcast and pre-cache method
CN105245976B (en) * 2015-09-30 2016-11-23 合一网络技术(北京)有限公司 Voice & Video synchronizes the method and system play
CN105245976A (en) * 2015-09-30 2016-01-13 合一网络技术(北京)有限公司 Method and system for synchronously playing audio and video
CN105744334A (en) * 2016-02-18 2016-07-06 海信集团有限公司 Method and equipment for audio and video synchronization and synchronous playing
CN108200481B (en) * 2017-12-07 2020-12-15 北京佳讯飞鸿电气股份有限公司 RTP-PS stream processing method, device, equipment and storage medium
CN108200481A (en) * 2017-12-07 2018-06-22 北京佳讯飞鸿电气股份有限公司 A kind of RTP-PS method for stream processing, device, equipment and storage medium
CN108337230A (en) * 2017-12-26 2018-07-27 武汉烽火众智数字技术有限责任公司 A kind of real-time retransmission method of audio and video based on smart mobile phone and system
CN108282685A (en) * 2018-01-04 2018-07-13 华南师范大学 A kind of method and monitoring system of audio-visual synchronization
US11343560B2 (en) 2018-02-11 2022-05-24 Zhejiang Xinsheng Electronic Technology Co., Ltd. Systems and methods for synchronizing audio and video
WO2019153960A1 (en) * 2018-02-11 2019-08-15 Zhejiang Dahua Technology Co., Ltd. Systems and methods for synchronizing audio and video
CN108599774A (en) * 2018-04-26 2018-09-28 郑州云海信息技术有限公司 a kind of compression method, system, device and computer readable storage medium
CN108616767A (en) * 2018-04-28 2018-10-02 青岛海信电器股份有限公司 A kind of audio data transmission method and device
CN108616767B (en) * 2018-04-28 2020-12-29 海信视像科技股份有限公司 Audio data transmission method and device
CN111988674A (en) * 2020-08-18 2020-11-24 广州极飞科技有限公司 Multimedia data transmission method, device, equipment and storage medium
CN112511885A (en) * 2020-11-20 2021-03-16 深圳乐播科技有限公司 Audio and video synchronization method and device and storage medium
CN112511886A (en) * 2020-11-25 2021-03-16 杭州当虹科技股份有限公司 Audio and video synchronous playing method based on audio expansion and contraction
CN113099310A (en) * 2021-04-08 2021-07-09 李蕊男 Real-time media internal video and audio coordination method based on android platform
CN114285513A (en) * 2021-11-22 2022-04-05 杭州当虹科技股份有限公司 Time delay device and method supporting lossless long-time delay of IP signals
CN114285513B (en) * 2021-11-22 2023-10-27 杭州当虹科技股份有限公司 Delay device and method for supporting long-time delay of lossless IP signal
CN115297337A (en) * 2022-08-05 2022-11-04 深圳市野草声学有限公司 Audio transmission method and system during video live broadcasting based on data receiving and transmitting cache
CN115297337B (en) * 2022-08-05 2024-05-28 深圳市野草声学有限公司 Audio transmission method and system for live video broadcast based on data transceiver cache

Similar Documents

Publication Publication Date Title
CN103546662A (en) A method for synchronizing audio and video in a network monitoring system
WO2023024834A9 (en) Game data processing method and apparatus, and storage medium
CN103237191B (en) The method of synchronized push audio frequency and video in video conference
CN100579238C (en) Synchronous playing method for audio and video buffer
EP2670157B1 (en) Fingerprint-based inter-destination media synchronization
CN103338386A (en) Audio and video synchronization method based on simplified timestamps
CN101902649A (en) A method of audio and video synchronization control based on H.264 standard
CN109168059B (en) Lip sound synchronization method for respectively playing audio and video on different devices
CN103414957A (en) Method and device for synchronization of audio data and video data
CN103856787B (en) Commentary video passing-back live system based on public network and live method of commentary video passing-back live system based on public network
CN109218794B (en) Remote work instruction method and system
CN102404650A (en) Audio and video synchronization control method for online video
CN101202613B (en) Terminal for clock synchronization
CN103607664B (en) A kind of audio and video synchronization method of embedded multimedia playing system
CN101938606A (en) Multimedia data push method, system and device
WO2011113315A1 (en) Stream media live service system and implementation method thereof
CN105791939A (en) Audio and video synchronization method and apparatus
CN105656616A (en) Inter-multi-device data synchronization method and device, transmitting end and receiving end
CN110381350A (en) Multichannel playing back videos synchronization system and its processing method based on webrtc
WO2017071670A1 (en) Audio and video synchronization method, device and system
CN103269448A (en) Realization of Audio and Video Synchronization Method Based on RTP/RTCP Feedback Early Warning Algorithm
CN103596033A (en) Method for solving problem of audio and video non-synchronization in multimedia system terminal playback
CN111669605B (en) Method and device for synchronizing multimedia data and associated interactive data thereof
CN117255236A (en) Audio and video synchronization method for digital visual intercom
KR20070008069A (en) Audio / Video Signal Synchronization Device and Method

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20140129