WO2005101346A1 - Incident recording and analysis system - Google Patents
Incident recording and analysis system
- Publication number
- WO2005101346A1 (PCT/JP2004/004739)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- accident
- data
- recording
- sound
- incident
- Prior art date
- 2004-03-31
Classifications
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/01—Detecting movement of traffic to be counted or controlled
- G08G1/04—Detecting movement of traffic to be counted or controlled using optical or ultrasonic detectors
-
- G—PHYSICS
- G08—SIGNALLING
- G08G—TRAFFIC CONTROL SYSTEMS
- G08G1/00—Traffic control systems for road vehicles
- G08G1/16—Anti-collision systems
Definitions
- The present invention relates to an incident recording and analysis system, provided for example at an intersection, for recording an incident such as a traffic accident and analyzing its content.
- A conventional automatic traffic accident recording device of this type captures the intersection with a camera device and automatically saves the images before and after the detection of a collision sound, sudden braking sound, or the like caused by a traffic accident.
- In such a device, the type of sound source is identified using a neural network (see, for example, Japanese Patent Application Laid-Open No. 2003-202260).
- An object of the present invention is to provide an incident recording and analysis system that eliminates the need to go and collect the recorded data of incidents occurring at each intersection and that makes the recorded incidents easy to analyze.
- Disclosure of the invention
- The incident recording and analysis system according to the present invention comprises:
- an incident recording means having a data recording means for temporarily recording audio data from a sound detection means and video data from an event photographing means;
- an incident classification means for classifying the incident from the audio data and the video data recorded by the incident recording means; and
- an incident analysis means for analyzing the content of the incident based on the classification data produced by the incident classification means,
- wherein the incident recording means is placed at the location where incidents occur, the incident classification means and incident analysis means are installed in a management center,
- and the audio data and the video data recorded by the incident recording means are sent to the management center via a communication line.
- Another incident recording and analysis system may be configured such that:
- the classification data comprises at least the type of vehicle involved in the accident and the sound of the accident.
- Yet another incident recording and analysis system may be configured such that:
- the incident analysis means analyzes, using the video data and audio data, at least the positional relationship of the vehicles involved in the accident and the presence or absence of a collision.
- Another incident recording and analysis system includes the above-mentioned recording and analysis system, wherein:
- a microphone is used as the sound detection means provided in the sound source determination means, and microphones are installed on both sides of the center of the intersection.
- In these systems, the sound source determination means determines at the site whether or not an event is an accident, and when the event is judged to be an accident, the video data and audio data of the incident are recorded in the data recording means.
- The data recorded in the data recording means are transmitted to the management center via a communication line, where the incident is classified by the incident classification means and analyzed by the incident analysis means.
- Fig. 1 is a block diagram showing the schematic configuration of a system for recording and analyzing traffic accidents (incidents) according to an embodiment of the present invention.
- Fig. 2 is a plan view showing the layout of the accident recording device at an intersection in the system.
- Fig. 3 is a block diagram showing the schematic configuration of the accident recording device in the system.
- Fig. 4 is a graph showing the detection signals in the sound source determination device of the accident recording device.
- Fig. 5 is a graph showing the spectrum calculation result of an acoustic signal in the first classification work in the sound source identification unit of the sound source determination device.
- Fig. 6 is a graph showing the spectrum distribution resulting from the second classification work in the sound source identification unit of the sound source determination device.
- Fig. 7 is a graph showing the spectrum distribution resulting from the third classification work in the sound source identification unit of the sound source determination device.
- Fig. 8 is a graph showing the spectrum distribution resulting from the fourth classification work in the sound source identification unit of the sound source determination device.
- Fig. 9 is a conceptual diagram of the classification work by the neural network in the sound source identification unit of the sound source determination device.
- Fig. 10 is a block diagram showing the schematic configuration of the accident content classification device in the system.
- Fig. 11 is a block diagram showing the schematic configuration of the accident situation analysis device in the system.
- Fig. 12 shows the accident situation data obtained by the accident situation analysis device, in which:
- (a) is a vehicle trace diagram,
- (b) is a graph showing acoustic analysis results
- (c) is a vehicle.
- Fig. 13 shows the contents of the analysis report obtained by the accident situation analysis device.
- Fig. 14 is a flowchart showing the evaluation method of the sound source judgment evaluation device in the system.
- Fig. 15 is a plan view showing an arrangement state at an intersection according to a modification of the accident recording device in the system.
- Hereinafter, a recording and analysis system for traffic accidents is explained, in which the clues used to record and analyze the traffic accidents are sound sources and images.
- Incidentally, the term accident in the explanation includes events other than traffic accidents.
- This traffic accident recording and analysis system is installed at a location where traffic accidents are to be monitored, for example at an intersection K where traffic accidents occur frequently (a location where sudden incidents occur). The system comprises:
- a camera device 1 (an example of event photographing means; for example, a CCD camera is used) for photographing traffic accidents (accidents and the like);
- a microphone 2 (an example of sound detection means) for collecting the sound at the intersection;
- a sound source determination device 3 (an example of sound source determination means) that determines from the acoustic signal (hereinafter, sound data) from the microphone 2 whether or not a traffic accident has occurred; and
- a data recording device 4 (an example of data recording means; for example, a hard disk device is used) that, when the sound source determination device 3 determines that a traffic accident has occurred, records the video signal (hereinafter, video data) captured by the camera device 1 together with the sound data from the microphone 2.
- These devices 1 to 4 constitute an accident recording device 5 (an example of accident recording means).
- The management center side is provided with: a data storage unit 6 (a hard disk or the like is used; it may also be referred to as a data recording unit) for receiving and temporarily storing the audio data and the video data recorded by the accident recording device 5;
- an accident content classification device 7 (an example of incident classification means) that reads the data stored in the data storage unit 6 and classifies the accident;
- an accident situation analysis device 8 (an example of incident analysis means) that analyzes the details of the accident based on the classification data; a sound source judgment evaluation device 9 (an example of sound source judgment evaluation means) for updating the judgment data in the sound source determination device 3 of the accident recording device 5;
- a database 10 (a hard disk or the like is used) as a data storage unit that stores the classification data classified by the accident content classification device 7; and a data browsing device 11 that can search the data stored in the database 10.
- the data browsing device 11 also has a function of simply browsing data stored in the database 10.
- The accident recording device 5 is located at the intersection K, and the other devices 6 to 11 are located at the management center 12; the accident recording device 5 and the management center 12 are connected via a communication line 13 (the Internet or an intranet using a public line, a broadband line or the like).
- A server 14 (a computer device) for managing and operating the data storage unit 6 and the database 10 is provided on the management center 12 side; the server 14 and each of the devices 6 to 11 can exchange data with each other via a LAN 15 or the like, and the server 14 is connected to the traffic control system center 16 via a communication line 17.
- The accident recording device 5 disposed at each intersection and the server 14 of the management center 12 are provided with communication devices 5a and 14a, respectively.
- The sound source determination device 3 includes: a signal extraction unit 21 that receives the acoustic signal (acoustic data) collected by the microphone 2 and extracts a signal in a predetermined frequency range;
- a level detection unit 22 that receives the extracted acoustic signal from the signal extraction unit 21, integrates it over a predetermined first integration time to obtain the acoustic energy (an integral value; the same applies hereinafter), determines whether or not the energy exceeds a predetermined first set level value, and outputs a predetermined detection signal when it does;
- a peak detection unit 23 that receives the extracted acoustic signal from the signal extraction unit 21, integrates it over a predetermined second integration time shorter than the first integration time to obtain the acoustic energy, determines whether or not the energy exceeds a predetermined set peak value, and outputs a predetermined detection signal when it does;
- a level continuation detection unit 24 that receives the extracted acoustic signal from the signal extraction unit 21, integrates it to obtain the acoustic energy, and, if the energy exceeds a predetermined set level value, determines again after the lapse of a predetermined time whether the energy still exceeds the set level value, outputting a predetermined detection signal when it does;
- a spectrum calculation unit 25 that, on receiving at least one of the detection signals from the level detection unit 22 and the peak detection unit 23, divides a predetermined frequency region into a predetermined number of banks and calculates the frequency spectrum (hereinafter simply referred to as the spectrum) of the acoustic signal in each bank;
- a sound source identification unit 26 that identifies the type of sound source from the spectrum calculated by the spectrum calculation unit 25; and
- an accident judging section 27 that receives the identification result from the sound source identification unit 26 together with the detection signals from the peak detection section 23 and the level continuation detection section 24 and judges whether or not the sound is caused by an accident or the like.
- The accident recording device 5 is further provided with a data storage instruction section 28 that outputs an instruction to save the video captured by the camera device 1 onto the data recording device 4.
- The signal extracting section 21 extracts a signal having a frequency of, for example, 0 to 2.5 kHz, and then removes the 0 to 500 Hz portion. This narrows the signal down to the range of noises generated during traffic accidents and vehicle running, that is, sounds due to accidents and the like, and removes the unwanted engine sound (0 to 500 Hz).
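- This band extraction step is, in effect, a 500 to 2,500 Hz band-pass filter. The following minimal Python sketch illustrates it; the filter order and the sampling rate are assumptions, as the text does not specify them:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def extract_band(signal: np.ndarray, fs: float) -> np.ndarray:
    """Mirror signal extraction unit 21: keep 0-2.5 kHz, then drop the
    0-500 Hz engine-sound portion, i.e. an effective 500-2500 Hz pass band.
    The 4th-order Butterworth design is an assumption for illustration."""
    sos = butter(4, [500.0, 2500.0], btype="bandpass", fs=fs, output="sos")
    return sosfilt(sos, signal)

if __name__ == "__main__":
    fs = 8000.0                       # assumed sampling rate
    t = np.arange(0, 1.0, 1.0 / fs)
    # 100 Hz "engine" tone plus a 1 kHz "impact" tone
    x = np.sin(2 * np.pi * 100 * t) + 0.5 * np.sin(2 * np.pi * 1000 * t)
    y = extract_band(x, fs)
    print(y.shape)
```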
- The level detection unit 22 includes: a first integrator 31 that receives the extracted acoustic signal from the signal extraction unit 21 and integrates it over a predetermined first integration time (for example, about 500 msec) to obtain the acoustic energy; and a first comparator 32 that compares the acoustic energy obtained by the first integrator 31 with a predetermined first set level value and outputs, as a detection signal (a trigger signal), for example a signal of "1" when the acoustic energy exceeds the first set level value ("0" is output when it is equal to or below the set level value). That is, the level detector 22 integrates the acoustic signal over a fixed time interval to determine whether or not the magnitude of the acoustic signal exceeds a predetermined level.
- The peak detecting section 23 includes: a second integrator 33 that receives the extracted acoustic signal from the signal extracting section 21 and integrates it over a second integration time (for example, about 100 msec) shorter than the first integration time to obtain the acoustic energy; and a second comparator 34 that compares the acoustic energy obtained by the second integrator 33 with a predetermined second set level value (which is the set peak value) and outputs, as a detection signal (a trigger signal), for example a signal of "1" when the peak value of the acoustic energy exceeds the second set level value. That is, the peak detection unit 23 integrates the acoustic signal over a short time to determine whether or not the peak value of the acoustic signal exceeds a predetermined level (peak value).
- The level continuation detecting section 24 includes: a third integrator 35 that receives the extracted acoustic signal from the signal extracting section 21 and integrates it over a predetermined third integration time (for example, the same as the first integration time in the level detection means) to obtain the acoustic energy; and a third comparator 36 that compares the acoustic energy obtained by the third integrator 35 with a predetermined third set level value (for example, the same as the set level value in the level detection means) and, if the acoustic energy exceeds the third set level value, performs the comparison again after a predetermined time (for example, 300 msec); if the set level value is still exceeded, it is determined that the level is continued (maintained), and a signal of "1" is output as a detection signal (a trigger signal) ("0" is output when the set level value is not continued).
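- The three detectors all follow the same integrate-and-compare pattern. Below is a minimal Python sketch of that pattern; the threshold values are placeholders, and only the integration times (about 500 msec, about 100 msec, and a 300 msec recheck) come from the text:

```python
import numpy as np

def energy(x: np.ndarray, fs: float, t0: float, dur: float) -> float:
    """Acoustic energy: integral of the squared signal over [t0, t0 + dur)."""
    i0, i1 = int(t0 * fs), int((t0 + dur) * fs)
    return float(np.sum(x[i0:i1] ** 2) / fs)

def level_detect(x, fs, t, level=1.0, t_int=0.5):
    """Level detection unit 22: ~500 msec integration vs. first set level."""
    return energy(x, fs, t, t_int) > level

def peak_detect(x, fs, t, peak=0.5, t_int=0.1):
    """Peak detection unit 23: short ~100 msec integration vs. set peak value."""
    return energy(x, fs, t, t_int) > peak

def level_continuation_detect(x, fs, t, level=1.0, t_int=0.5, recheck=0.3):
    """Level continuation unit 24: level exceeded now and again 300 msec later."""
    return (energy(x, fs, t, t_int) > level and
            energy(x, fs, t + recheck, t_int) > level)
```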
- Fig. 4 shows the detection signals output from the comparators in the detection units 22 to 24:
- (a) is for the first comparator 32 in the level detector 22,
- (b) is for the second comparator 34 in the peak detector 23,
- (c) shows the output of the third comparator 36 in the level continuation detection unit 24, and (d) shows the reset signal.
- When at least one of the detection signal ("1") from the level detection section 22 and the detection signal ("1") from the peak detection section 23 is output, the spectrum calculation section 25 performs its calculation.
- In the spectrum calculation section 25, the extracted acoustic signal is first converted by an A/D converter;
- a predetermined frequency region (450 to 2,500 Hz) is divided into a predetermined number of banks, for example 105;
- and the frequency spectrum (hereinafter simply referred to as the spectrum) of the acoustic signal in each divided frequency region
- is calculated by FFT (fast Fourier transform).
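- A rough Python sketch of this bank-wise spectrum calculation follows; the text does not specify how the FFT values are aggregated within a bank, so averaging the magnitudes per equal-width bank is assumed here:

```python
import numpy as np

def bank_spectrum(frame: np.ndarray, fs: float, n_banks: int = 105,
                  f_lo: float = 450.0, f_hi: float = 2500.0) -> np.ndarray:
    """Spectrum of one frame as one value per bank over 450-2500 Hz."""
    spec = np.abs(np.fft.rfft(frame))               # FFT magnitude
    freqs = np.fft.rfftfreq(len(frame), 1.0 / fs)   # bin frequencies
    edges = np.linspace(f_lo, f_hi, n_banks + 1)    # 105 equal-width banks
    banks = np.empty(n_banks)
    for i in range(n_banks):
        mask = (freqs >= edges[i]) & (freqs < edges[i + 1])
        banks[i] = spec[mask].mean() if mask.any() else 0.0
    return banks
```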
- In the sound source identification unit 26, the type of the sound source is identified using a neural network.
- The identification proceeds in four stages of classification work (the first to fourth classifications), in which recognition and classification of the frequency spectrum are performed using neural networks; the classification numbers obtained in these classifications are compared with a classification table (the judgment data) determined in advance by experiments and the like,
- whereby the detected sound is identified as one of many kinds of sounds, including collision sound, tire/road friction sound, crushing sound, runaway sound and siren.
- In the classification table, for example, a five-digit number (two digits are assigned to the first classification and one digit to each of the second to fourth classifications) is assigned to each type of sound, and a record flag column indicating whether or not to record the data is provided.
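- The table lookup can be pictured as in the following Python sketch; the entries and key values are invented for illustration, since the real table is determined by experiments:

```python
# Hypothetical classification table: the 5-digit key is two digits from the
# first classification plus one digit from each of the second to fourth;
# the value is (sound type, record flag), 1 = record, 0 = do not record.
CLASSIFICATION_TABLE = {
    "39215": ("collision sound", 1),
    "39217": ("collision sound + tire/road friction sound", 1),
    "04131": ("siren", 1),
    "12044": ("other (background traffic)", 0),
}

def look_up(c1: int, c2: int, c3: int, c4: int) -> tuple:
    """Assemble the 5-digit classification number and consult the table."""
    key = f"{c1:02d}{c2}{c3}{c4}"
    return CLASSIFICATION_TABLE.get(key, ("unknown", 0))

print(look_up(39, 2, 1, 5))   # -> ('collision sound', 1)
```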
- First, the first classification (the first stage) will be explained.
- In the first classification, the classification number is obtained using two types of division patterns with different classification criteria.
- In the first pattern, the total area of the spectrum over the 105 banks is obtained, the range of total area values is divided into, for example, five classes based on a division pattern predetermined according to the total area, and classification numbers #0 to #4 are assigned.
- The division used here is based on sample data (500 samples) of actual traffic sounds: the total area values of the 500 samples are arranged as a frequency distribution (the horizontal axis is the total area value and the vertical axis is the number of cases), and the range is divided so that the samples are distributed evenly among the classes.
- Consequently, where the frequency distribution of the total area values is dense, the division width is narrow,
- and where the frequency distribution of the total area values is sparse, the division width is wider (the divisions are therefore not necessarily equal, and the number of divisions is not necessarily five).
- The classification number of the class to which the total area value (for example, 2401) belongs is then determined; here, for example, #3 is assigned.
- In the second pattern, the 105 banks are divided into, for example, ten groups, and classification numbers #0 to #9 are assigned.
- For this purpose, the frequency spectrum of the acoustic signal in each of the 105 divided frequency regions (banks) is normalized by its maximum value, and the bank with the largest peak among the 105 normalized banks is obtained.
- The divided frequency domain of 105 banks is divided into, for example, ten according to a division pattern based on the bank position where the maximum peak exists, and the classification numbers #0 to #9 are assigned;
- thus it is determined to which of the parts #0 to #9 the highest-level spectrum in the spectrum series related to the extracted acoustic signal (shown by the bar graph in Fig. 5) belongs.
- The method of dividing the 105 banks is predetermined according to the bank position at the maximum level:
- the maximum-level bank positions of each of the 500 sample cases are arranged as a frequency distribution (the horizontal axis is the bank position and the vertical axis is the number of cases), and the range is divided so that the cases are distributed evenly.
- Where the frequency distribution of the maximum-level bank positions is dense, the division width is narrow, and where it is sparse, the division width is large (the divisions are therefore not necessarily equal, and the number of divisions is not necessarily ten).
- Classification numbers are assigned from #0 to #9, from the smallest bank number to the largest. In Fig. 5, the vicinity of bank 88 has the highest level, and the classification number of the class to which bank 88 belongs is assigned, for example, as #8.
- The classification number in the first classification is determined by combining the above two numbers. For example, if the sound is assigned to #3 in the five divisions by total area value and to #9 in the ten divisions by maximum-level bank position, the classification number according to the first classification is #39.
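- Putting the two division patterns together, the first classification can be sketched as below; the division edges are invented placeholders standing in for the edges derived from the 500 traffic-sound samples:

```python
import numpy as np

# Class edges derived in advance from the 500 traffic-sound samples so that
# samples fall evenly into the classes; the values here are placeholders.
AREA_EDGES = np.array([800.0, 1500.0, 2100.0, 3000.0])          # 5 classes #0-#4
PEAK_BANK_EDGES = np.array([8, 18, 27, 35, 44, 55, 67, 78, 90]) # 10 classes #0-#9

def first_classification(banks: np.ndarray) -> int:
    """Two-digit first-classification number, e.g. 39 for (#3, #9)."""
    total_area = float(banks.sum())          # e.g. 2401 in the text's example
    area_class = int(np.searchsorted(AREA_EDGES, total_area))           # 0..4
    peak_class = int(np.searchsorted(PEAK_BANK_EDGES, banks.argmax()))  # 0..9
    return area_class * 10 + peak_class
```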
- In the second and subsequent classifications, the characteristic part of the acoustic signal is extracted based on the spectrum, and pattern matching (pattern recognition) between the extracted spectrum series and spectrum series obtained by experiments performed in advance to identify sound sources is carried out using neural networks;
- based on the classification numbers obtained in each classification work, the detected sound is, as described above, identified as one of many types including collision sound, tire/road friction sound, crushing sound, runaway sound, siren and so on.
- In each classification, a group of a predetermined number (for example, five) of signal identification patterns prepared in the database is selected and used for the pattern recognition.
- Next, the second to fourth classification tasks will be described.
- In the second classification work, five patterns are first extracted from the database based on the classification number (for example, #39) obtained in the first classification, and pattern matching is performed against the spectrum series of the 105 banks of the acoustic signal concerned.
- In the third classification, five patterns are likewise extracted from the database based on the classification number obtained in the second classification, and at the same time a new spectrum series is derived from the 105-bank spectrum series of the acoustic signal concerned, as follows.
- The five banks consisting of the maximum-spectrum bank and the two banks before and after it are set to zero (zero reset), and a new 105-bank spectrum series is created.
- In this series, components that are less than a certain threshold, for example less than 25% of the maximum spectrum, are set to zero, and the series is normalized (shown in Fig. 7).
- A classification number is then assigned by performing pattern matching, using a neural network, against a total of seven patterns: the above five patterns, a pattern representing anything other than those five, and a below-threshold pattern (thus even patterns below the threshold are treated as one pattern). In other words, in this classification work, classification is performed on the remaining spectrum series from which the strongest spectrum part has been removed.
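- The zero reset and renormalization step can be sketched as follows; whether the 25% threshold is taken against the original or the remaining maximum is not fully clear from the text, so the remaining maximum is assumed here:

```python
import numpy as np

def zero_reset(banks: np.ndarray, rel_threshold: float = 0.25) -> np.ndarray:
    """Remove the strongest spectrum part and renormalize the remainder."""
    out = banks.astype(float).copy()
    m = int(out.argmax())
    out[max(0, m - 2):m + 3] = 0.0          # zero reset: max bank and +/- 2
    peak = out.max()
    if peak > 0.0:
        out[out < rel_threshold * peak] = 0.0   # drop sub-threshold parts
        out /= peak                             # normalize to the new maximum
    return out
```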
- The fourth classification is performed in the same way: based on the classification number obtained in the third classification work, five patterns used for matching are extracted from the database.
- The spectrum series from which the maximum spectrum part has again been removed and which has been normalized (shown in Fig. 8) is pattern-matched by the neural network
- against the extracted five patterns plus a pattern indicating anything other than the five patterns and a below-threshold pattern, and a classification number is assigned. In this classification work, classification is performed on the spectrum series from which the spectrum part having the second highest strength has been removed.
- Fig. 9 shows a conceptual diagram of the above-mentioned classification work using neural networks.
- In the accident judging section 27, the identification signal (NT) from the sound source identification unit 26, the detection signal (PD) from the level continuation detection unit 24 and the detection signal (PT) from the peak detection unit 23 are input, and the logical operation {(NT and PD) or PT} is performed to judge whether or not the sound is caused by an accident or the like.
- Here, the identification signal (NT) is "1" if the identified sound is one generated by an accident or the like,
- the detection signal (PD) is "1" if the sound is continuous,
- and the detection signal (PT) is "1" when the peak value is equal to or higher than a predetermined intensity.
- The logical product (and) in the above expression reflects the fact that a sound caused by an accident or the like is not instantaneous but continues for a short period of time, hence the combination with (PD); on the other hand, if the sound is caused by an accident or the like, its peak value is considered to have considerable strength, so the logical sum with the detection signal (PT) is taken in order that the sound can be judged to be due to an accident or the like whenever the peak value exceeds the set level value (this value, of course, being determined by experiments).
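- The decision itself reduces to a small Boolean function, sketched below with the three signals as flags:

```python
def is_accident(nt: bool, pd: bool, pt: bool) -> bool:
    """Accident judging section 27: {(NT and PD) or PT}.

    NT: the neural network identified an accident-like sound source.
    PD: the sound level continued for the set duration (not instantaneous).
    PT: the peak value exceeded the set peak level.
    """
    return (nt and pd) or pt

# An accident-like sound that lasts, or any sufficiently strong peak:
assert is_accident(True, True, False)
assert is_accident(False, False, True)
assert not is_accident(True, False, False)
```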
- When the accident judging section 27 judges that the sound is due to an accident, an instruction to that effect is output to the data storage instruction section 28, and the video before and after the sound is recorded and stored in the data recording device 4 together with the sound.
- At this time, the accident determination section 27 records, together with the video, the accident content (for example, the coded sound source type) as an index of the video image.
- Specifically, collision sound, collision sound + tire/road friction sound, collision sound + crushing sound, tire/road friction sound, crushing sound, runaway sound, siren, and other sounds are distinguished.
- Each of the above units, integrators, comparators and the like is configured by an electric signal circuit, while the sound source identification unit 26, which performs operations using a neural network, is provided with, for example, a CPU as an arithmetic processing unit.
- In the configuration described above, the acoustic signal detected by the microphone 2 has a predetermined frequency band extracted by the signal extracting unit 21; the extracted acoustic signal is input to the level detector 22, the peak detector 23 and the level continuation detector 24, and a preliminary judgment is made as to whether or not an accident may have occurred.
- When at least one of the level detecting section 22 and the peak detecting section 23 outputs a detection signal, the calculation of the spectrum is performed in the spectrum calculation unit 25.
- The spectrum series obtained by this calculation is input to the sound source identification unit 26.
- There the sound source is identified by the above-described classification method using the neural network, and if the identified sound is of a kind likely to result from an accident or the like (collision sound, collision sound + tire/road friction sound, collision sound + crushing sound, tire/road friction sound, crushing sound, runaway sound, siren, etc.), a detection signal (NT) indicating an accident or the like is output.
- This detection signal (NT) for accidents and the like, the detection signal (PD) indicating continuation from the level continuation detection section 24, and the peak detection signal (PT) from the peak detection section 23
- are input to the accident determination unit 27, and the logical operation is performed to determine whether or not the sound is caused by an accident or the like.
- When the accident judging section 27 judges that an accident or the like has occurred, an instruction signal to that effect is output to the data storage instructing section 28, and the images taken before and after the sound was generated
- are recorded and stored in the data recording device 4.
- At the same time, the code data of the sound source type specified by the sound source identification unit 26 is recorded as an index to the video data, which facilitates later retrieval of the video data.
- The one-time identification time in the sound source identification unit 26 is, for example, 3 seconds.
- When a detection signal (trigger signal) is obtained in any of the detection units 22 to 24, the output of the detection signal is maintained until 3 seconds have elapsed, and a reset signal is output after the 3 seconds.
- As described above, from the extracted acoustic signal, from which the low frequencies that vehicles normally emit (such as engine sound) and the high frequencies that are difficult for humans to hear have been removed, the level detection unit 22 detects whether the level value of the acoustic signal exceeds the set level value, and the peak detection unit 23 detects whether its peak value exceeds the set peak value; if at least one exceeds its set value, the frequency spectrum of the acoustic signal is obtained and the type of sound source is identified using the neural network, so the sound source can be identified accurately.
- In addition, for the sound source identified by the neural network, the level continuation detection unit 24 adds a determination as to whether or not the level continuation time of the acoustic signal exceeds the set duration, so it can be judged accurately whether or not an accident or the like has occurred.
- The accident content classification device 7 is provided with: an image processing unit 41 that receives video data from the database 10 and performs image processing, for example extracting the contour of objects in the intersection for each image frame (for example, at predetermined time intervals);
- an object recognition unit 42 that receives the image-processed data from the image processing unit 41, recognizes the objects (target objects) judged possibly to be involved in a traffic accident, for example whether each is a car, a motorcycle, a pedestrian or the like, and also determines the vehicle type (for a car, for example, a large vehicle, a passenger car, an emergency vehicle, etc.; otherwise a motorcycle, a bicycle, etc.);
- and a sound source identification unit 43 that receives the sound data from the database 10 and identifies the sound data based on the recognition of the image data, that is, identifies the accident sound.
- The object recognizing unit 42 detects an object moving in the video by the difference method, for example as a rectangular parallelepiped.
- The determination of the detected object, that is, of the vehicle type, is performed by pattern matching on the rectangular parallelepiped.
- The evaluation parameters used for this pattern matching are the width, depth, height, volume and so on of the rectangular parallelepiped. Specifically, looking at the magnitude relationship between width, depth and height, the relationship is generally depth > height > width for motorcycles, and height > width, depth for pedestrians. In the case of four-wheeled vehicles, the width, depth and height vary widely, but the volume makes it possible to distinguish large vehicles from passenger vehicles.
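- These pattern-matching rules can be sketched roughly as follows; the 10 cubic-meter volume cut-off separating large vehicles from passenger cars is an invented placeholder:

```python
def classify_object(width: float, depth: float, height: float) -> str:
    """Rough object typing from the detected bounding cuboid (meters).

    Follows the relationships in the text: height dominating suggests a
    pedestrian, depth > height > width suggests a motorcycle, and for
    four-wheeled vehicles the volume separates large vehicles from
    passenger cars. The volume threshold is an assumption.
    """
    volume = width * depth * height
    if height > width and height > depth:
        return "pedestrian"
    if depth > height > width:
        return "motorcycle"
    return "large vehicle" if volume > 10.0 else "passenger car"

print(classify_object(1.7, 4.5, 1.5))   # -> passenger car
print(classify_object(0.8, 2.1, 1.4))   # -> motorcycle
```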
- The accident situation analysis device 8 is provided with: an image processing unit 51 that receives video data from the database 10 and performs image processing, for example extracting the contour of objects in the intersection for each image frame (for example, at predetermined time intervals);
- a trajectory calculation unit 52 that receives the image-processed data from the image processing unit 51 and obtains the trajectory of each object judged possibly to be involved in a traffic accident;
- an accident situation detection section 53 that detects, from the trajectories calculated by the trajectory calculation unit 52, the accident situation such as the object entry positions, object speeds, accident position, brake estimation and whether or not a signal was ignored;
- and an accident situation output section 54 that outputs the accident situation obtained by the accident situation detection section 53 in a report format.
- The specific analysis contents obtained by this accident situation analysis device 8 are as shown in Fig. 12.
- Fig. 13 shows a specific example of an accident analysis report summarizing these results.
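- As a rough illustration of what the accident situation detection section 53 computes, the following Python sketch derives entry positions, speeds and a closest-approach point from two traced trajectories; the function, its inputs and the 1 m collision threshold are assumptions, and brake or signal-violation estimation, which would need the signal timing data, is omitted:

```python
import numpy as np

def analyze_trajectories(tr_a: np.ndarray, tr_b: np.ndarray, dt: float) -> dict:
    """Derive basic accident-situation values from two traced objects.

    tr_a and tr_b are (N, 2) arrays of per-frame (x, y) positions in meters,
    sampled every dt seconds (both objects traced over the same N frames).
    """
    speeds_a = np.linalg.norm(np.diff(tr_a, axis=0), axis=1) / dt
    speeds_b = np.linalg.norm(np.diff(tr_b, axis=0), axis=1) / dt
    gap = np.linalg.norm(tr_a - tr_b, axis=1)       # distance between objects
    k = int(gap.argmin())                           # frame of closest approach
    return {
        "entry_a": tr_a[0], "entry_b": tr_b[0],     # object entry positions
        "mean_speed_a_mps": float(speeds_a.mean()),
        "mean_speed_b_mps": float(speeds_b.mean()),
        "accident_position": (tr_a[k] + tr_b[k]) / 2.0,
        "collision_suspected": bool(gap[k] < 1.0),  # assumed 1 m threshold
    }
```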
- The data browsing device 11 is a computer terminal equipped with search and browsing software (hereinafter, browsing software; for example, a web browser is used) that can search and browse the video data and the various data related to the operation logs stored in the database 10, and it is of course connected to the server 14 via the LAN 15.
- With the above browsing software, the video data can be searched or narrowed down by "intersection name, date, recording factor [the type of sound judged to require recording (specifically, the sound source identification result)], accident type, accident vehicle type" and so on, and a list of the search results (e.g., intersection name, date, recording factor, accident type, accident vehicle type, video data file name) can be displayed.
- In addition, the video data can be reproduced from the list.
- Similarly, the operation logs can be searched or narrowed down by "intersection name, date" and so on,
- and a list of the search results (e.g., intersection name, date, log file name) can be displayed.
- By selecting a log file name displayed in the list and pressing the display button, the contents of the operation log can be displayed.
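- The kind of narrowing-down search described above can be illustrated with a small query sketch; the table layout and field names below are invented for illustration and are not the actual schema of the database 10:

```python
import sqlite3

# Hypothetical index table for the video data; field names mirror the
# search keys listed above but are otherwise invented.
con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE video_index (
    intersection TEXT, date TEXT, recording_factor TEXT,
    accident_type TEXT, accident_vehicle_type TEXT, video_file TEXT)""")
con.execute("INSERT INTO video_index VALUES (?,?,?,?,?,?)",
            ("K", "2004-03-31", "collision sound",
             "vehicle-to-vehicle", "passenger car", "k_20040331_001.avi"))

# Narrow down by intersection name and date, as the browsing software does:
rows = con.execute(
    """SELECT intersection, date, recording_factor, accident_type,
              accident_vehicle_type, video_file
       FROM video_index
       WHERE intersection = ? AND date LIKE ?""",
    ("K", "2004-03%")).fetchall()
for row in rows:
    print(row)
```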
- The sound source judgment and evaluation device 9 is likewise a computer terminal equipped with search and browsing software (for example, a web browser is used) capable of searching and browsing the video data and the various data related to the operation logs stored in the database 10, and it is connected to the server 14 via the LAN 15.
- Here too, the video data can be reproduced.
- This device has a re-learning function: when recorded data is recognized, on review, not to relate to a traffic accident (that is, a misrecognition), the neural network is made to learn again using that data.
- Specifically, for each accident recording device 5, the classification number used to identify (judge) the acoustic data relating to the erroneously recognized video data is obtained, and the classification table is revised for the misrecognized data:
- the record flag of the misrecognized classification number in the classification table is changed from "1 (record)" to "0 (zero: do not record)", and the revised classification table is transmitted to the accident recording device 5 concerned, where the classification table is updated.
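- A minimal sketch of this record-flag correction, reusing the table layout assumed in the earlier classification-table example:

```python
def correct_misrecognition(table: dict, classification_number: str) -> dict:
    """Flip the record flag of a misrecognized classification number.

    table maps 5-digit classification numbers to (sound type, record flag);
    on a confirmed misrecognition the flag goes from 1 (record) to 0
    (do not record). The revised table would then be sent back to the
    accident recording device 5, which replaces its local copy.
    """
    sound_type, _ = table[classification_number]
    revised = dict(table)
    revised[classification_number] = (sound_type, 0)
    return revised

table = {"12044": ("other (background traffic)", 1)}
print(correct_misrecognition(table, "12044"))  # flag now 0
```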
- Fig. 14 is a flowchart explaining the re-learning function.
- Next, the transfer of data between the accident recording device 5 and the management center 12 will be described. This data transfer is performed by virtually establishing a one-to-one communication path over the communication line 13, for example an Internet connection network (intranet).
- As the transmission protocol, for example, the FTP protocol over TCP/IP is used.
- Accordingly, the server 14 of the management center 12 and the accident recording device 5 are each provided with an FTP program.
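- A minimal sketch of such a transfer using Python's standard ftplib; the host name, credentials and file names are placeholders, not values from the patent:

```python
from ftplib import FTP
from os.path import basename

def send_daily_data(host: str, user: str, password: str, paths: list[str]) -> None:
    """Upload the day's recorded files to the management center over FTP."""
    with FTP(host) as ftp:
        ftp.login(user=user, passwd=password)
        for path in paths:
            with open(path, "rb") as f:
                ftp.storbinary(f"STOR {basename(path)}", f)  # binary upload

# Example (hypothetical):
# send_daily_data("center.example", "recorder05", "secret",
#                 ["k_20040331_001.avi", "k_20040331_001.wav"])
```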
- The video data recorded by the accident recording device 5 and the sound data used for identifying the sound source are sent to the management center 12 at a predetermined time, about once a day (for example, when the date changes).
- Along with these data, the operation log of the accident recording device 5 (for example, power-on time, power failure time, occurrence of unit abnormalities and recovery from abnormalities)
- and the recording factors [the sound source identification result and the spectrum calculation result (or sound pressure data)] are also sent.
- Conversely, a file describing the operation setting data of the accident recording device 5 (for example, the number of video channels, video size, image quality, recording time, recording time before the trigger, video frame interval, intersection name, signal path)
- is sent to the accident recording device 5 as necessary.
- The classification table corrected by the sound source judgment evaluation device 9 is also sent to the accident recording device 5 side.
- Further, the necessary traffic accident data (e.g., intersection name, time of occurrence, accident situation) are sent from the management center 12 to the traffic control system center 16.
- As described above, the sound source determination device 3 determines, at the location where the traffic accident occurred, whether or not a traffic accident has occurred; if it is determined that an accident has occurred, the data is recorded in the data recording device 4, and the data recorded in the data recording device 4 is sent via the communication line 13 to the management center 12, where the traffic accidents are classified by the accident content classification device 7 and their details are analyzed by the accident situation analysis device 8. There is therefore no need, as in conventional systems, to go to each intersection to collect the traffic accident data recorded in a data recorder installed there.
- Moreover, the main data analysis (for example, contact sound detection, vehicle tracing, etc.) is performed automatically by the accident situation analysis device 8, which enables the observer to analyze the details of the accident easily and to carry out the analysis work quickly.
- In the above embodiment, the classification has been described as being performed in four stages, the first to fourth classifications;
- however, only the first to third classifications may be used (in this case, the classification number may be four digits)
- to identify the sound source.
- The sound source can be identified accurately in this case as well, as in the embodiment described above.
- In the above embodiment, the recording of traffic accidents at an intersection has been described;
- however, the present invention may be applied to places other than intersections, for example to the monitoring of work performed in an office or to the monitoring of convenience stores.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Traffic Control Systems (AREA)
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006512179A JP4242422B2 (ja) | 2004-03-31 | 2004-03-31 | Incident recording and analysis system |
PCT/JP2004/004739 WO2005101346A1 (ja) | 2004-03-31 | 2004-03-31 | Incident recording and analysis system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2004/004739 WO2005101346A1 (ja) | 2004-03-31 | 2004-03-31 | Incident recording and analysis system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2005101346A1 (ja) | 2005-10-27 |
Family
ID=35150207
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2004/004739 WO2005101346A1 (ja) | 2004-03-31 | 2004-03-31 | Incident recording and analysis system |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP4242422B2 (ja) |
WO (1) | WO2005101346A1 (ja) |
Families Citing this family (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE102017200153A1 (de) | 2017-01-09 | 2018-07-12 | Ford Global Technologies, Llc | Method for detecting traffic situations |
CN107945512B (zh) * | 2017-11-27 | 2020-11-10 | 海尔优家智能科技(北京)有限公司 | Traffic accident handling method and system |
CN108764042B (zh) * | 2018-04-25 | 2021-05-28 | 深圳市科思创动科技有限公司 | Abnormal road condition information identification method, device and terminal equipment |
CN110942629A (zh) * | 2019-11-29 | 2020-03-31 | 中核第四研究设计工程有限公司 | Road traffic accident management method, device and terminal equipment |
2004
- 2004-03-31: WO PCT/JP2004/004739 patent WO2005101346A1 (ja), active, Application Filing
- 2004-03-31: JP JP2006512179A patent JP4242422B2 (ja), not active, Expired - Fee Related
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH1011694A (ja) * | 1996-06-24 | 1998-01-16 | Mitsubishi Heavy Ind Ltd | Automobile accident monitoring device |
JP2000207676A (ja) * | 1999-01-08 | 2000-07-28 | Nec Corp | Traffic accident detection device |
JP2002230679A (ja) * | 2001-01-30 | 2002-08-16 | Natl Inst For Land & Infrastructure Management Mlit | Road monitoring system and road monitoring method |
JP2002342882A (ja) * | 2001-05-11 | 2002-11-29 | Fujitsu Ltd | Moving object identification device and automatic warning method and device for moving objects |
JP2003061074A (ja) * | 2001-08-09 | 2003-02-28 | Mitsubishi Electric Corp | Image recognition processing system |
JP2003202260A (ja) * | 2001-10-25 | 2003-07-18 | Hitachi Zosen Corp | Sound source identification device, sudden event detection device, and automatic sudden event recording device |
JP2003157487A (ja) * | 2001-11-22 | 2003-05-30 | Mitsubishi Electric Corp | Traffic condition monitoring device |
Cited By (53)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007286857A (ja) * | 2006-04-17 | 2007-11-01 | Sekisui Jushi Co Ltd | Accident monitoring system |
US11790328B2 (en) | 2008-10-02 | 2023-10-17 | Ecoatm, Llc | Secondary market and vending system for devices |
US9881284B2 (en) | 2008-10-02 | 2018-01-30 | ecoATM, Inc. | Mini-kiosk for recycling electronic devices |
US10032140B2 (en) | 2008-10-02 | 2018-07-24 | ecoATM, LLC. | Systems for recycling consumer electronic devices |
US9818160B2 (en) | 2008-10-02 | 2017-11-14 | ecoATM, Inc. | Kiosk for recycling electronic devices |
US10055798B2 (en) | 2008-10-02 | 2018-08-21 | Ecoatm, Llc | Kiosk for recycling electronic devices |
US11907915B2 (en) | 2008-10-02 | 2024-02-20 | Ecoatm, Llc | Secondary market and vending system for devices |
US9904911B2 (en) | 2008-10-02 | 2018-02-27 | ecoATM, Inc. | Secondary market and vending system for devices |
US11526932B2 (en) | 2008-10-02 | 2022-12-13 | Ecoatm, Llc | Kiosks for evaluating and purchasing used electronic devices and related technology |
US11935138B2 (en) | 2008-10-02 | 2024-03-19 | ecoATM, Inc. | Kiosk for recycling electronic devices |
US10853873B2 (en) | 2008-10-02 | 2020-12-01 | Ecoatm, Llc | Kiosks for evaluating and purchasing used electronic devices and related technology |
US10825082B2 (en) | 2008-10-02 | 2020-11-03 | Ecoatm, Llc | Apparatus and method for recycling mobile phones |
US10157427B2 (en) | 2008-10-02 | 2018-12-18 | Ecoatm, Llc | Kiosk for recycling electronic devices |
US11443289B2 (en) | 2008-10-02 | 2022-09-13 | Ecoatm, Llc | Secondary market and vending system for devices |
JP2012504832A (ja) * | 2008-10-02 | 2012-02-23 | Bowles, Mark | Secondary market and vending system for devices |
US11080662B2 (en) | 2008-10-02 | 2021-08-03 | Ecoatm, Llc | Secondary market and vending system for devices |
US11010841B2 (en) | 2008-10-02 | 2021-05-18 | Ecoatm, Llc | Kiosk for recycling electronic devices |
ITBZ20130054A1 (it) * | 2013-11-04 | 2015-05-05 | Tarasconi Traffic Tecnologies Srl | Road traffic video surveillance system with signalling of dangerous situations |
US10401411B2 (en) | 2014-09-29 | 2019-09-03 | Ecoatm, Llc | Maintaining sets of cable components used for wired analysis, charging, or other interaction with portable electronic devices |
US11126973B2 (en) | 2014-10-02 | 2021-09-21 | Ecoatm, Llc | Wireless-enabled kiosk for recycling consumer devices |
US11790327B2 (en) | 2014-10-02 | 2023-10-17 | Ecoatm, Llc | Application for device evaluation and other processes associated with device recycling |
US10475002B2 (en) | 2014-10-02 | 2019-11-12 | Ecoatm, Llc | Wireless-enabled kiosk for recycling consumer devices |
US11734654B2 (en) | 2014-10-02 | 2023-08-22 | Ecoatm, Llc | Wireless-enabled kiosk for recycling consumer devices |
US9911102B2 (en) | 2014-10-02 | 2018-03-06 | ecoATM, Inc. | Application for device evaluation and other processes associated with device recycling |
US10496963B2 (en) | 2014-10-02 | 2019-12-03 | Ecoatm, Llc | Wireless-enabled kiosk for recycling consumer devices |
US10438174B2 (en) | 2014-10-02 | 2019-10-08 | Ecoatm, Llc | Application for device evaluation and other processes associated with device recycling |
US11232412B2 (en) | 2014-10-03 | 2022-01-25 | Ecoatm, Llc | System for electrically testing mobile devices at a consumer-operated kiosk, and associated devices and methods |
US11989701B2 (en) | 2014-10-03 | 2024-05-21 | Ecoatm, Llc | System for electrically testing mobile devices at a consumer-operated kiosk, and associated devices and methods |
US10445708B2 (en) | 2014-10-03 | 2019-10-15 | Ecoatm, Llc | System for electrically testing mobile devices at a consumer-operated kiosk, and associated devices and methods |
US10572946B2 (en) | 2014-10-31 | 2020-02-25 | Ecoatm, Llc | Methods and systems for facilitating processes associated with insurance services and/or other services for electronic devices |
US10417615B2 (en) | 2014-10-31 | 2019-09-17 | Ecoatm, Llc | Systems and methods for recycling consumer electronic devices |
US11436570B2 (en) | 2014-10-31 | 2022-09-06 | Ecoatm, Llc | Systems and methods for recycling consumer electronic devices |
US10860990B2 (en) | 2014-11-06 | 2020-12-08 | Ecoatm, Llc | Methods and systems for evaluating and recycling electronic devices |
US11315093B2 (en) | 2014-12-12 | 2022-04-26 | Ecoatm, Llc | Systems and methods for recycling consumer electronic devices |
US12008520B2 (en) | 2014-12-12 | 2024-06-11 | Ecoatm, Llc | Systems and methods for recycling consumer electronic devices |
US11080672B2 (en) | 2014-12-12 | 2021-08-03 | Ecoatm, Llc | Systems and methods for recycling consumer electronic devices |
JP2017010290A (ja) * | 2015-06-23 | 2017-01-12 | Toshiba Corp | Information processing device and event detection method |
US10127647B2 (en) | 2016-04-15 | 2018-11-13 | Ecoatm, Llc | Methods and systems for detecting cracks in electronic devices |
US9885672B2 (en) | 2016-06-08 | 2018-02-06 | ecoATM, Inc. | Methods and systems for detecting screen covers on electronic devices |
US10269110B2 (en) | 2016-06-28 | 2019-04-23 | Ecoatm, Llc | Methods and systems for detecting cracks in illuminated electronic device screens |
US11803954B2 (en) | 2016-06-28 | 2023-10-31 | Ecoatm, Llc | Methods and systems for detecting cracks in illuminated electronic device screens |
US10909673B2 (en) | 2016-06-28 | 2021-02-02 | Ecoatm, Llc | Methods and systems for detecting cracks in illuminated electronic device screens |
JP7560228B2 (ja) | 2018-07-05 | 2024-10-02 | Movidius Ltd | Video surveillance using neural networks |
US12131536B2 (en) | 2018-07-05 | 2024-10-29 | Movidius Ltd. | Video surveillance with neural networks |
US11989710B2 (en) | 2018-12-19 | 2024-05-21 | Ecoatm, Llc | Systems and methods for vending and/or purchasing mobile phones and other electronic devices |
US11843206B2 (en) | 2019-02-12 | 2023-12-12 | Ecoatm, Llc | Connector carrier for electronic device kiosk |
US11482067B2 (en) | 2019-02-12 | 2022-10-25 | Ecoatm, Llc | Kiosk for evaluating and purchasing used electronic devices |
US11462868B2 (en) | 2019-02-12 | 2022-10-04 | Ecoatm, Llc | Connector carrier for electronic device kiosk |
US11798250B2 (en) | 2019-02-18 | 2023-10-24 | Ecoatm, Llc | Neural network based physical condition evaluation of electronic devices, and associated systems and methods |
US11922467B2 (en) | 2020-08-17 | 2024-03-05 | ecoATM, Inc. | Evaluating an electronic device using optical character recognition |
US12033454B2 (en) | 2020-08-17 | 2024-07-09 | Ecoatm, Llc | Kiosk for evaluating and purchasing used electronic devices |
WO2022050086A1 (ja) * | 2020-09-03 | 2022-03-10 | Sony Group Corp | Information processing device, information processing method, detection device, and information processing system |
WO2022208586A1 (ja) | 2021-03-29 | 2022-10-06 | NEC Corp | Notification device, notification system, notification method, and non-transitory computer-readable medium |
Also Published As
Publication number | Publication date |
---|---|
JPWO2005101346A1 (ja) | 2008-03-06 |
JP4242422B2 (ja) | 2009-03-25 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP4242422B2 (ja) | Incident recording and analysis system | |
CN109616140A (zh) | Abnormal sound analysis system | |
US6442474B1 (en) | Vision-based method and apparatus for monitoring vehicular traffic events | |
CN111986228A (zh) | Pedestrian tracking method, device and medium in an escalator scene based on an LSTM model | |
CN109816987A (zh) | Electronic-police capture system for vehicle horn enforcement and its capture method | |
CN110895662A (zh) | Vehicle overload alarm method and apparatus, electronic device and storage medium | |
Conte et al. | An ensemble of rejecting classifiers for anomaly detection of audio events | |
Zinemanas et al. | MAVD: a dataset for sound event detection in urban environments | |
CN111444843A (zh) | Multimodal driver and vehicle violation monitoring method and system | |
Rovetta et al. | Detection of hazardous road events from audio streams: An ensemble outlier detection approach | |
CN110620760A (zh) | FlexRay bus fusion intrusion detection method and device based on SVM and Bayesian network | |
KR102066718B1 (ko) | Sound-based tunnel accident detection system | |
KR102518615B1 (ko) | Composite monitoring apparatus and method for judging abnormal sound sources | |
CN116302809A (zh) | Edge-side data analysis and computing device | |
CN113362851A (zh) | Method and system for traffic scene sound classification based on deep learning | |
JP4046592B2 (ja) | Sound source identification device, sudden event detection device, and automatic sudden event recording device | |
CN114926824A (zh) | Bad driving behavior discrimination method | |
CN118569617A (zh) | Smart power plant management and control system and method based on computer vision object detection | |
WO2008055306A1 (en) | Machine learning system for graffiti deterrence | |
JP3164100B2 (ja) | Traffic sound source type identification device | |
JP3248522B2 (ja) | Sound source type identification device | |
CN113247730B (zh) | Elevator passenger scream detection method and system based on multidimensional features | |
Dedeoglu et al. | Surveillance using both video and audio | |
CN115221924A (zh) | Intelligent recognition algorithm framework for industrial equipment anomaly detection based on time-series multimodality | |
CN111818356A (zh) | Intelligent method for interrupting live streaming of high-risk operations based on scene recognition |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
AK | Designated states | Kind code of ref document: A1. Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW |
AL | Designated countries for regional patents | Kind code of ref document: A1. Designated state(s): BW GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG |
121 | Ep: the epo has been informed by wipo that ep was designated in this application | |
WWE | Wipo information: entry into national phase | Ref document number: 2006512179. Country of ref document: JP |
NENP | Non-entry into the national phase | Ref country code: DE |
WWW | Wipo information: withdrawn in national office | Country of ref document: DE |
122 | Ep: pct application non-entry in european phase | |