
CN113095559B - Method, device, equipment and storage medium for predicting hatching time - Google Patents


Info

Publication number
CN113095559B
CN113095559B
Authority
CN
China
Prior art keywords
sound frequency
sound
preset time
time period
frequency characteristics
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110362045.3A
Other languages
Chinese (zh)
Other versions
CN113095559A (en)
Inventor
苏睿 (Su Rui)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202110362045.3A priority Critical patent/CN113095559B/en
Publication of CN113095559A publication Critical patent/CN113095559A/en
Application granted granted Critical
Publication of CN113095559B publication Critical patent/CN113095559B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00 Administration; Management
    • G06Q10/04 Forecasting or optimisation specially adapted for administrative or management purposes, e.g. linear programming or "cutting stock problem"
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Information and communication technology [ICT] specially adapted for implementation of business processes of specific business sectors, e.g. utilities or tourism
    • G06Q50/02 Agriculture; Fishing; Forestry; Mining
    • G PHYSICS
    • G10 MUSICAL INSTRUMENTS; ACOUSTICS
    • G10L SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
    • G10L25/00 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00
    • G10L25/48 Speech or voice analysis techniques not restricted to a single one of groups G10L15/00 - G10L21/00 specially adapted for particular use

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Physics & Mathematics (AREA)
  • Economics (AREA)
  • Human Resources & Organizations (AREA)
  • Strategic Management (AREA)
  • Health & Medical Sciences (AREA)
  • Theoretical Computer Science (AREA)
  • Marketing (AREA)
  • General Physics & Mathematics (AREA)
  • General Business, Economics & Management (AREA)
  • Tourism & Hospitality (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Development Economics (AREA)
  • Operations Research (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Game Theory and Decision Science (AREA)
  • Agronomy & Crop Science (AREA)
  • Animal Husbandry (AREA)
  • Marine Sciences & Fisheries (AREA)
  • Mining & Mineral Resources (AREA)
  • Quality & Reliability (AREA)
  • General Health & Medical Sciences (AREA)
  • Primary Health Care (AREA)
  • Computational Linguistics (AREA)
  • Signal Processing (AREA)
  • Audiology, Speech & Language Pathology (AREA)
  • Human Computer Interaction (AREA)
  • Acoustics & Sound (AREA)
  • Multimedia (AREA)
  • Measurement Of The Respiration, Hearing Ability, Form, And Blood Characteristics Of Living Organisms (AREA)
  • Telephonic Communication Services (AREA)

Abstract

The application relates to a method, a device, equipment and a storage medium for predicting the hatching time, in the field of intelligent farming. The method for predicting the hatching time comprises the following steps: acquiring sound data from inside the hatching device within the current preset time period and within the N-1 preset time periods before it, wherein N is greater than 1; extracting the sound frequency characteristic from the sound data of each preset time period to obtain N sound frequency characteristics, wherein a sound frequency characteristic represents the number of chirps inside the hatching device during the corresponding preset time period; and obtaining the predicted hatching time according to the N sound frequency characteristics. The method and device are used to solve the problem that the hatching time cannot be predicted accurately enough to improve the hatching rate.

Description

Method, device, equipment and storage medium for predicting hatching time
Technical Field
The application relates to the field of intelligent farming, and in particular to a method, a device, equipment and a storage medium for predicting the hatching time.
Background
The incubation period of poultry eggs is 21 days: the first 18 days are spent incubating in the setter environment, after which the eggs enter the hatching stage. The hatching stage poses the problem of choosing the hatching time, and that choice involves a trade-off: if the chicks are taken out too early, the change in ambient temperature and other conditions means that eggs which have not yet hatched will almost never hatch; if they are taken out too late, more eggs will have hatched, but chicks that have already broken the shell may die or be disabled because of the ambient temperature and an excessively high CO2 concentration.
At present, chicks are taken out only at a fixed time node based on past experience, and other factors cannot be taken into account. Because egg sources differ in breed and in energy between batches, and because of factors such as season and sunny or rainy weather, a fixed hatching time can be too early or too late, so the hatching rate cannot be optimized. Since the hatching time of poultry eggs is still predicted on a fixed schedule, the hatching rate is only about 83%, and that figure includes healthy chicks together with grade-B and grade-C chicks, that is, chicks of progressively poorer constitution.
Disclosure of Invention
The application provides a method, a device, equipment and a storage medium for predicting the hatching time, which are used to solve the problem that the hatching time cannot be predicted accurately enough to improve the hatching rate.
In a first aspect, an embodiment of the present application provides a method for predicting the hatching time, including:
acquiring sound data from inside the hatching device within the current preset time period and within the N-1 preset time periods before it, wherein N is greater than 1;
extracting the sound frequency characteristic from the sound data of each preset time period to obtain N sound frequency characteristics, wherein a sound frequency characteristic represents the number of chirps inside the hatching device during the corresponding preset time period;
and obtaining the predicted hatching time according to the N sound frequency characteristics.
Optionally, extracting the sound frequency characteristic from the sound data of each preset time period to obtain N sound frequency characteristics includes:
generating an envelope of the sound data within each preset time period;
and obtaining the number of peaks of each envelope, and taking the numbers of peaks as the N sound frequency characteristics.
Optionally, extracting the sound frequency characteristic from the sound data of each preset time period to obtain N sound frequency characteristics includes:
extracting the sound frequency characteristic from the sound data of each preset time period by using a voice activity detection algorithm to obtain N sound frequency characteristics.
Optionally, extracting the sound frequency characteristic from the sound data of each preset time period to obtain N sound frequency characteristics includes:
generating an envelope of the sound data within each preset time period;
obtaining the number of peaks of each envelope, and taking the numbers of peaks as N first sound frequency sub-features;
extracting a second sound frequency sub-feature from the sound data of each preset time period by using a voice activity detection algorithm to obtain N second sound frequency sub-features;
and generating the N sound frequency characteristics from the N first sound frequency sub-features and the N second sound frequency sub-features.
Optionally, obtaining the predicted hatching time according to the N sound frequency characteristics includes:
fitting a first curve according to the N sound frequency characteristics and their corresponding times;
and obtaining the predicted hatching time when the trend of the first curve flattens.
Optionally, obtaining the predicted hatching time according to the N sound frequency characteristics includes:
inputting the sound frequency characteristics of the current preset time period and the N-1 preset time periods before it into a pre-trained hatching time prediction model to obtain a first sound frequency prediction feature;
inputting the sound frequency characteristics of the current preset time period and the N-2 preset time periods before it, together with the first sound frequency prediction feature, into the hatching time prediction model to obtain a second sound frequency prediction feature;
inputting the sound frequency characteristics of the current preset time period and the N-i preset time periods before it, together with the i-1 sound frequency prediction features, into the hatching time prediction model to obtain the i-th sound frequency prediction feature, wherein i is greater than 2 and less than N;
fitting a second curve according to the N sound frequency characteristics and their corresponding times and the i sound frequency prediction features and their corresponding times;
and obtaining the predicted hatching time when the trend of the second curve flattens.
Optionally, obtaining the predicted hatching time when the trend of the second curve flattens includes:
calculating the difference between the i-th sound frequency prediction feature and the (i-1)-th sound frequency prediction feature;
and if the difference is smaller than a preset value, taking the time corresponding to the i-th sound frequency prediction feature as the predicted hatching time.
In a second aspect, an embodiment of the present application provides a device for predicting the hatching time, including:
an acquisition module, used for acquiring sound data from inside the hatching device within the current preset time period and within the N-1 preset time periods before it, wherein N is greater than 1;
an extracting module, used for extracting the sound frequency characteristic from the sound data of each preset time period to obtain N sound frequency characteristics, wherein a sound frequency characteristic represents the number of chirps inside the hatching device during the corresponding preset time period;
and a processing module, used for obtaining the predicted hatching time according to the N sound frequency characteristics.
Optionally, the extracting module includes:
a first generation unit, configured to generate an envelope of the sound data within each preset time period;
and a first acquisition unit, configured to obtain the number of peaks of each envelope and take the numbers of peaks as the N sound frequency characteristics.
Optionally, the extracting module is configured to extract the sound frequency characteristic from the sound data of each preset time period by using a voice activity detection algorithm, so as to obtain N sound frequency characteristics.
Optionally, the extracting module includes:
a second generation unit, configured to generate an envelope of the sound data within each preset time period;
a second acquisition unit, configured to obtain the number of peaks of each envelope and take the numbers of peaks as N first sound frequency sub-features;
an extraction unit, configured to extract a second sound frequency sub-feature from the sound data of each preset time period by using a voice activity detection algorithm to obtain N second sound frequency sub-features;
and a third generation unit, configured to generate the N sound frequency characteristics from the N first sound frequency sub-features and the N second sound frequency sub-features.
Optionally, the processing module includes:
a first fitting unit, configured to fit a first curve according to the N sound frequency characteristics and their corresponding times;
and a first processing unit, configured to obtain the predicted hatching time when the trend of the first curve flattens.
Optionally, the processing module includes:
a second processing unit, configured to input the sound frequency characteristics of the current preset time period and the N-1 preset time periods before it into a pre-trained hatching time prediction model to obtain a first sound frequency prediction feature;
a third processing unit, configured to input the sound frequency characteristics of the current preset time period and the N-2 preset time periods before it, together with the first sound frequency prediction feature, into the hatching time prediction model to obtain a second sound frequency prediction feature;
a fourth processing unit, configured to input the sound frequency characteristics of the current preset time period and the N-i preset time periods before it, together with the i-1 sound frequency prediction features, into the hatching time prediction model to obtain the i-th sound frequency prediction feature, wherein i is greater than 2 and less than N;
a second fitting unit, configured to fit a second curve according to the N sound frequency characteristics and their corresponding times and the i sound frequency prediction features and their corresponding times;
and a fifth processing unit, configured to obtain the predicted hatching time when the trend of the second curve flattens.
Optionally, the fifth processing unit includes:
a difference calculating subunit, configured to calculate the difference between the i-th sound frequency prediction feature and the (i-1)-th sound frequency prediction feature;
and a processing subunit, configured to take the time corresponding to the i-th sound frequency prediction feature as the predicted hatching time if the difference is smaller than a preset value.
In a third aspect, an embodiment of the present application provides an electronic device, including: a processor, a memory and a communication bus, wherein the processor and the memory communicate with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to execute the program stored in the memory to implement the method for predicting the hatching time according to the first aspect.
In a fourth aspect, an embodiment of the present application provides a computer-readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for predicting the hatching time according to the first aspect.
Compared with the prior art, the technical solution provided by the embodiments of the application has the following advantage: sound data from inside the hatching device are acquired for the current preset time period and the N-1 preset time periods before it, with N greater than 1; the sound frequency characteristic, which represents the number of chirps inside the hatching device during each preset time period, is extracted from the sound data of each period to obtain N sound frequency characteristics; and the predicted hatching time is obtained from these N characteristics. Because the sound frequency characteristics are extracted from sound data actually collected inside the hatching device, and the predicted hatching time is derived from them, the solution addresses the problem that the hatching rate cannot be improved for lack of an accurate prediction of the hatching time.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the invention or the technical solutions of the prior art, the drawings which are used in the description of the embodiments or the prior art will be briefly described, and it will be obvious to a person skilled in the art that other drawings can be obtained from these drawings without inventive effort.
FIG. 1 is a flow chart of a method for predicting a hatching time according to an embodiment of the present application;
FIG. 2 is a schematic illustration of an envelope of sound data generated in one embodiment of the present application;
FIG. 3 is a schematic diagram of unfiltered sound data in one embodiment of the present application;
FIG. 4 is a schematic diagram of a frequency domain waveform of sound data according to an embodiment of the present application;
FIG. 5 is a schematic diagram of band-pass filtered sound data in one embodiment of the present application;
FIG. 6 is a diagram of sound data after wiener filtering in an embodiment of the present application;
FIG. 7 is a schematic representation of a second curve fit in one embodiment of the present application;
fig. 8 is a schematic structural diagram of a device for predicting a hatching time in an embodiment of the present application;
fig. 9 is a schematic structural diagram of an electronic device in an embodiment of the present application.
Detailed Description
For the purposes of making the objects, technical solutions and advantages of the embodiments of the present application more clear, the technical solutions of the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is apparent that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be made by one of ordinary skill in the art without undue burden from the present disclosure, are within the scope of the present application based on the embodiments herein.
The embodiment of the application provides a method for predicting the hatching time. The method can be applied to a server, or to other electronic equipment such as a terminal (a mobile phone, a tablet computer and the like). In the embodiments below, application to a server is taken as the example.
In this embodiment of the present application, as shown in fig. 1, the method flow for predicting the hatching time mainly includes:
Step 101, acquiring sound data from inside the hatching device within the current preset time period and within the N-1 preset time periods before it, wherein N is greater than 1.
For example: the preset time period may be 1 second, so the sound data inside the hatching device in the current preset time period is the sound data of the most recently collected second, say the second ending at 3:10:20. With N = 10, acquiring the sound data of the current preset time period and the N-1 = 9 preset time periods before it means obtaining the sound data of each of the ten seconds from 3:10:11 to 3:10:20.
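The acquisition step above amounts to keeping a sliding window of the most recent N one-second buffers. A minimal sketch, assuming a 1-second window, N = 10 as in the example, and a 16 kHz sample rate (the patent does not specify a rate; all names here are illustrative):

```python
from collections import deque

import numpy as np

N = 10               # number of 1-second windows kept, as in the example
SAMPLE_RATE = 16000  # assumed microphone sample rate

# deque(maxlen=N) keeps only the most recent N buffers; older ones fall off.
window = deque(maxlen=N)

def on_new_second(samples: np.ndarray) -> list:
    """Append the newest 1-second buffer and return the current batch."""
    window.append(samples)
    return list(window)

# Simulate 12 seconds of audio: after 12 pushes only the last 10 remain.
for second in range(12):
    batch = on_new_second(np.full(SAMPLE_RATE, second, dtype=np.float32))

assert len(batch) == 10
assert batch[0][0] == 2  # seconds 0 and 1 have been evicted
```

The same structure works for any window length; only `SAMPLE_RATE` and `N` change.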
Step 102, extracting the sound frequency characteristic from the sound data of each preset time period to obtain N sound frequency characteristics; a sound frequency characteristic represents the number of chirps inside the hatching device during the corresponding preset time period.
In one embodiment, there are many ways to extract the sound frequency characteristic, including but not limited to the following:
Mode one
Generating an envelope of the sound data within each preset time period; obtaining the number of peaks of each envelope, and taking the numbers of peaks as the N sound frequency characteristics.
For example: the number of peaks of the envelope of the sound data within 10 seconds, i.e., 1, 3,6, 8, 12, 14, 17, 20, 24, 25, which is 3 minutes, 11 seconds, 12 seconds, 13 seconds, 14 seconds, 15 seconds, 16 seconds, 17 seconds, 18 seconds, 19 seconds, 20 seconds, is 1, 3,6, 8, 12, 14, 17, 20, 24, 25, respectively, and is 1, 3,6, 8, 12, 14, 17, 20, 24, 25, respectively, as 10 sound frequency characteristics.
Here the envelope is the curve of amplitude over time. A common way to generate it is to take the Hilbert transform of the time-domain signal and then take the square root of the sum of the squares of the transformed signal and the original signal; the result is the envelope of the time-domain signal.
One peak of the envelope can be taken to represent one chirp, so the number of peaks of the envelope represents the number of chirps.
In one embodiment, the envelope of the generated sound data is shown in fig. 2. The upper and lower waveforms of fig. 2 are the envelopes of the left-channel and right-channel sound data respectively; the abscissa is time and the ordinate is sound intensity. Obvious peaks can be seen in fig. 2, each representing one chick chirp.
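Mode one can be sketched with the Hilbert-transform envelope described above plus peak detection. The minimum peak height and the 100 ms minimum spacing between peaks below are assumed tuning values, not taken from the patent:

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def count_calls(samples: np.ndarray, sr: int, min_height: float = 0.3) -> int:
    """Count chirp peaks in one window: |x + j*H(x)| gives the amplitude
    envelope, then find_peaks counts its peaks. min_height and the 100 ms
    spacing are assumed tuning values."""
    envelope = np.abs(hilbert(samples))          # Hilbert-transform envelope
    peaks, _ = find_peaks(envelope,
                          height=min_height,
                          distance=int(0.1 * sr))  # >= 100 ms between calls
    return len(peaks)

# Synthetic 1-second window with 3 short chirps on a silent background.
sr = 8000
t = np.arange(sr) / sr
x = np.zeros(sr)
for start in (0.1, 0.4, 0.7):
    idx = (t >= start) & (t < start + 0.05)
    x[idx] = np.sin(2 * np.pi * 3000 * t[idx])

print(count_calls(x, sr))  # 3 chirps detected
```

On real recordings, `min_height` would be set relative to the noise floor after the filtering described later in this document.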
Mode two
Extracting the sound frequency characteristic from the sound data of each preset time period by using a voice activity detection algorithm to obtain N sound frequency characteristics.
Here, voice activity detection (Voice Activity Detection, VAD) determines whether a piece of sound data contains voice; it is a frequency-domain detection algorithm. The spectrum is divided into 6 sub-bands (80 Hz-250 Hz, 250 Hz-500 Hz, 500 Hz-1 kHz, 1 kHz-2 kHz, 2 kHz-3 kHz, 3 kHz-4 kHz) and the energy of each sub-band is calculated. Using these energies, the probabilities of noise and non-noise are computed with a Gaussian mixture model following a clustering approach, and the log-likelihood ratio of the two probabilities is calculated for each sub-band; if the per-band test does not pass, the log-likelihood ratio of the signal as a whole is calculated as well, and voice is judged to be present when more than one of these tests passes.
The voice activity detection algorithm can also be used to extract the number of chirps in each preset time period.
For example: for the ten seconds from 3:10:11 to 3:10:20, the voice activity detection algorithm extracts per-second sound frequency characteristics of 1, 4, 6, 9, 11, 14, 18, 20, 25 and 26, and these ten values are taken as the 10 sound frequency characteristics.
Mode three
Generating an envelope of the sound data within each preset time period; obtaining the number of peaks of each envelope, and taking the numbers of peaks as N first sound frequency sub-features; extracting a second sound frequency sub-feature from the sound data of each preset time period by using a voice activity detection algorithm to obtain N second sound frequency sub-features; and generating the N sound frequency characteristics from the N first sound frequency sub-features and the N second sound frequency sub-features.
Generating the N sound frequency characteristics from both kinds of sub-features makes the sound frequency characteristics more accurate.
In a specific embodiment, the N sound frequency characteristics may be generated by averaging: for each time, the mean of the first sound frequency sub-feature and the second sound frequency sub-feature at that time is taken as the sound frequency characteristic of that time.
For example: for the ten seconds from 3:10:11 to 3:10:20, the numbers of envelope peaks are 1, 3, 6, 8, 12, 14, 17, 20, 24 and 25, taken as the 10 first sound frequency sub-features; the voice activity detection algorithm yields 1, 4, 6, 9, 11, 14, 18, 20, 25 and 26, taken as the 10 second sound frequency sub-features; averaging the two at each time gives the 10 sound frequency characteristics 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5 and 25.5.
In one embodiment, before the sound frequency characteristics are extracted, band-pass filtering and Wiener filtering are applied to the sound data collected inside the hatching device in the current preset time period and the N-1 preset time periods before it.
Sound data are collected continuously inside the hatching device by a sound collection device, but they usually contain various environmental noises, such as air conditioning and alarms, and no clear amplitude features can be seen in the unfiltered raw data in the time domain. In one embodiment, unfiltered sound data are shown in fig. 3. The upper and lower waveforms of fig. 3 are the time-domain waveforms of the unfiltered left-channel and right-channel data respectively; the abscissa is time and the ordinate is sound intensity. Fig. 3 shows many spikes and no obvious amplitude features, so the sound frequency characteristics cannot be extracted accurately.
The time-domain signal is first converted into the frequency domain by a Fourier transform, as shown in fig. 4. The two curves in fig. 4 are the frequency-domain waveforms of the unfiltered left-channel and right-channel data respectively; the abscissa is frequency and the ordinate is sound intensity. Both curves show an obvious peak between 2500 Hz and 7000 Hz, so keeping only the 2500 Hz-7000 Hz band filters out part of the background noise. Accordingly, band-pass filtering is applied to the sound data collected inside the hatching device in the current preset time period and the N-1 preset time periods before it: the data are passed through a 2500 Hz-7000 Hz band-pass filter, and removing the other frequency ranges is found to improve background-noise suppression. The band-pass-filtered sound data are shown in fig. 5. The upper and lower waveforms of fig. 5 are the time-domain waveforms of the band-pass-filtered left-channel and right-channel data; compared with fig. 3, many spikes are eliminated and the amplitude features are obvious, but background noise of stable amplitude remains.
To filter out this stable-amplitude background noise, Wiener filtering is then applied to the band-pass-filtered data. The Wiener filter combines several voice/noise classification features into one model through likelihood-ratio functions, forming a multi-feature integrated probability density function; the features are the LRT (likelihood ratio test) feature, spectral flatness and spectral difference. Because voice contains more harmonics than noise, voice peaks appear at the fundamental frequency and its harmonics, while the noise spectrum is more stationary than the voice spectrum; Wiener filtering of the band-pass-filtered data therefore removes further background noise. The Wiener-filtered sound data are shown in fig. 6. The upper and lower waveforms of fig. 6 are the time-domain waveforms of the Wiener-filtered left-channel and right-channel data; compared with fig. 5, the stable-amplitude background noise is eliminated and clear amplitude features can be seen, so the sound frequency characteristics can be extracted accurately.
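The two-stage cleanup above can be sketched with a 2500-7000 Hz band-pass followed by a Wiener filter. The filter order, the 16 kHz sample rate, and the use of scipy's adaptive `wiener` (a simplified stand-in for the likelihood-ratio Wiener suppressor described in the text) are all assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, wiener

SR = 16000  # assumed sample rate; the patent does not state one

# Stage 1: keep only the 2500-7000 Hz band where the calls concentrate.
SOS = butter(4, [2500, 7000], btype="bandpass", fs=SR, output="sos")

def bandpass(samples: np.ndarray) -> np.ndarray:
    return sosfiltfilt(SOS, samples)   # zero-phase band-pass

def denoise(samples: np.ndarray) -> np.ndarray:
    """Stage 1 band-pass, then stage 2: scipy's adaptive Wiener filter as
    a simplified stand-in for the LRT-based suppressor in the text."""
    return wiener(bandpass(samples), mysize=29)  # window size is assumed

t = np.arange(SR) / SR
call = np.sin(2 * np.pi * 4000 * t)   # in-band component (a call)
hum = np.sin(2 * np.pi * 100 * t)     # out-of-band machinery hum
clean = denoise(call + hum)

# The hum is almost fully removed; the in-band call passes nearly unchanged.
print(np.std(bandpass(hum)) < 0.05, np.std(bandpass(call)) > 0.6)
```

The band edges come straight from the spectrum analysis in fig. 4; only the implementation details around them are assumed here.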
Step 103, obtaining the predicted hatching time according to the N sound frequency characteristics.
In one embodiment, there are many ways to obtain the predicted hatching time, including but not limited to the following:
mode one
Fitting a first curve according to the N sound frequency characteristics and their corresponding times; the predicted hatching time is obtained when the trend of the first curve flattens.
For example: the sound frequency characteristics corresponding to the ten times from 3:10:11 to 3:10:20 are 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5 and 25.5 respectively. With time as the abscissa and the sound frequency characteristic as the ordinate, a first curve is fitted through the 10 points (11, 1), (12, 3.5), (13, 6), (14, 8.5), (15, 11.5), (16, 14), (17, 17.5), (18, 20), (19, 24.5) and (20, 25.5) to obtain its functional expression; the target point where the curve's trend flattens is then found, and the abscissa of the target point is the predicted hatching time.
Mode two
The sound frequency features within the current preset time period and the N-1 preset time periods before it are input into a pre-trained hatching time prediction model to obtain a first sound frequency prediction feature. The sound frequency features within the current preset time period and the N-2 preset time periods before it, together with the first sound frequency prediction feature, are input into the hatching time prediction model to obtain a second sound frequency prediction feature. The sound frequency features within the current preset time period and the N-i preset time periods before it, together with i-1 sound frequency prediction features, are input into the hatching time prediction model to obtain an i-th sound frequency prediction feature, where i is greater than 2 and less than N. A second curve is fitted according to the N sound frequency features and their corresponding moments, and the i sound frequency prediction features and their corresponding moments; when the trend of the second curve flattens, the predicted hatching time is obtained.
The pre-trained hatching time prediction model may be, for example, a recurrent neural network (Recurrent Neural Network, RNN).
For example, the preset time period may be 1 second; the sound frequency feature within the current preset time period may be the feature within the 1 second ending at 3 h 10 min 20 s; N may be 10; and the sound frequency features within the current preset time period and the N-1 preset time periods before it are the features corresponding to the 10 moments 3 h 10 min 11 s, 12 s, 13 s, 14 s, 15 s, 16 s, 17 s, 18 s, 19 s, and 20 s, namely 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5, and 25.5, respectively.
The sound frequency features 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5, and 25.5 corresponding to the 10 moments 3 h 10 min 11 s through 20 s are input into the pre-trained hatching time prediction model to obtain the first sound frequency prediction feature, corresponding to 3 h 10 min 21 s. The sound frequency features 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5, and 25.5 corresponding to the 9 moments 3 h 10 min 12 s through 20 s, together with the first sound frequency prediction feature corresponding to 3 h 10 min 21 s, are input into the hatching time prediction model to obtain the second sound frequency prediction feature, corresponding to 3 h 10 min 22 s. Proceeding in this way, the sound frequency features 24.5 and 25.5 corresponding to the 2 moments 3 h 10 min 19 s and 20 s, together with the 8 sound frequency prediction features corresponding to 3 h 10 min 21 s through 28 s, are input into the hatching time prediction model to obtain the ninth sound frequency prediction feature, corresponding to 3 h 10 min 29 s.
According to the sound frequency features 1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5, and 25.5 corresponding to the 10 moments 3 h 10 min 11 s through 20 s, and the first through ninth sound frequency prediction features corresponding to the 9 moments 3 h 10 min 21 s through 29 s, a second curve is fitted with the moment as the abscissa and the sound frequency feature or sound frequency prediction feature corresponding to that moment as the ordinate, and the functional expression of the second curve is obtained. The target point at which the trend of the second curve flattens is then found, and the abscissa of the target point is the predicted hatching time.
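The rollout in mode two can be sketched as follows. The trained RNN is replaced here by a hypothetical stand-in that extrapolates from recent increments, since only the data flow is being illustrated: a sliding window holds the latest N values, and each prediction is fed back into the window in place of the oldest observation.

```python
# Sketch of mode two's autoregressive rollout. `model` is a stand-in for
# the trained hatching time prediction model (an RNN in the patent); it
# simply extends the last value by the mean recent increment.
from collections import deque

def model(window):
    # hypothetical predictor: last value plus mean increment over the window
    steps = [b - a for a, b in zip(window, window[1:])]
    return window[-1] + sum(steps) / len(steps)

observed = [1, 3.5, 6, 8.5, 11.5, 14, 17.5, 20, 24.5, 25.5]  # moments 11 s..20 s
N = len(observed)
window = deque(observed, maxlen=N)

predictions = []
for _ in range(N - 1):          # first through ninth prediction features
    nxt = model(list(window))
    predictions.append(nxt)
    window.append(nxt)          # predicted value displaces the oldest input

print(len(predictions))  # 9 prediction features, for moments 21 s..29 s
```

The observed and predicted points together are then fitted to the second curve exactly as in mode one.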
In one embodiment, whether the trend of the second curve has flattened may be determined in various ways, including but not limited to the following:
Mode one
The difference between the i-th sound frequency prediction feature and the (i-1)-th sound frequency prediction feature is calculated; if the difference is smaller than a preset value, the moment corresponding to the i-th sound frequency prediction feature is taken as the predicted hatching time.
Mode two
The slope between the point of the i-th sound frequency prediction feature on the second curve and the point of the (i-1)-th sound frequency prediction feature on the second curve is calculated; if the slope is smaller than a preset slope, the moment corresponding to the i-th sound frequency prediction feature is taken as the predicted hatching time.
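Both flatness tests reduce to comparing consecutive prediction features against a preset threshold; when the points are spaced one preset time period (here 1 second) apart, the difference and the slope coincide. A minimal sketch with hypothetical feature values and an assumed threshold of 0.5:

```python
# Sketch of the flatness check: scan consecutive prediction features and
# take the first moment whose increment falls below the preset value.
# The feature values and the 0.5 threshold are illustrative assumptions.
preds = [28.0, 30.5, 32.4, 33.6, 34.2, 34.5, 34.6]   # hypothetical features, 1 s apart
moments = list(range(21, 21 + len(preds)))            # seconds within 3 h 10 min

preset = 0.5
predicted_moment = None
for i in range(1, len(preds)):
    diff = preds[i] - preds[i - 1]   # equals the slope when points are 1 s apart
    if diff < preset:
        predicted_moment = moments[i]
        break

print(predicted_moment)
```

Here the increments are 2.5, 1.9, 1.2, 0.6, 0.3, so the first sub-threshold step occurs at second 26, which would be reported as the predicted hatching time.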
In a specific embodiment, as shown in fig. 7, the fitted second curve is shown by the dashed line, and the solid line is a curve drawn from the sound frequency features extracted from sound data actually collected inside the hatching device; the abscissa is time and the ordinate is the number of sounds. It can be observed that the abscissa of the inflection point at which the fitted second curve's growth slows is smaller than that of the corresponding inflection point of the solid line; that is, the fitted second curve captures the slow-growth inflection point in advance, so the predicted hatching time can be obtained accurately and the hatching rate improved.
In summary, in the embodiments of the present application, sound data inside the hatching device within the current preset time period and within the N-1 preset time periods before it are acquired, where N is greater than 1; the sound frequency features in the sound data within each preset time period are extracted to obtain N sound frequency features, where the sound frequency features are used to represent the number of sounds inside the hatching device within each preset time period; and the predicted hatching time is obtained according to the N sound frequency features. Because the sound frequency features are extracted from sound data actually collected inside the hatching device and the predicted hatching time is derived from those features, the problem that the hatching time cannot be accurately predicted, which limits the hatching rate, is solved.
Based on the same concept, an embodiment of the present application provides a device for predicting a hatching time. For the specific implementation of the device, reference may be made to the description of the method embodiments, and repeated details are omitted. As shown in fig. 8, the device mainly includes:
an obtaining module 801, configured to obtain sound data inside a hatching device within a current preset time period and within the N-1 preset time periods before the current preset time period, where N is greater than 1;
an extracting module 802, configured to extract sound frequency features from the sound data within each preset time period to obtain N sound frequency features, where the sound frequency features are used to represent the number of sounds inside the hatching device within each preset time period;
and a processing module 803, configured to obtain the predicted hatching time according to the N sound frequency features.
Based on the same concept, an embodiment of the present application further provides an electronic device. As shown in fig. 9, the electronic device mainly includes: a processor 901, a memory 902, and a communication bus 903, where the processor 901 and the memory 902 communicate with each other via the communication bus 903. The memory 902 stores a program executable by the processor 901, and the processor 901 executes the program stored in the memory 902 to implement the following steps:
acquiring sound data inside the hatching device within a current preset time period and within the N-1 preset time periods before the current preset time period, where N is greater than 1; extracting sound frequency features from the sound data within each preset time period to obtain N sound frequency features, where the sound frequency features are used to represent the number of sounds inside the hatching device within each preset time period; and obtaining the predicted hatching time according to the N sound frequency features.
The communication bus 903 of the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 903 may be divided into an address bus, a data bus, a control bus, and so on. For ease of illustration, only one thick line is shown in fig. 9, but this does not mean there is only one bus or only one type of bus.
The memory 902 may include a random access memory (Random Access Memory, RAM), or may include a non-volatile memory, such as at least one disk memory. Optionally, the memory may also be at least one storage device located remotely from the aforementioned processor 901.
The processor 901 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a digital signal processor (Digital Signal Processing, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a Field programmable gate array (Field-Programmable Gate Array, FPGA), or other programmable logic device, discrete gate or transistor logic device, or discrete hardware components.
In yet another embodiment of the present application, a computer-readable storage medium is further provided, in which a computer program is stored; when the computer program runs on a computer, the computer is caused to perform the method for predicting a hatching time described in the above embodiments.
In the above embodiments, it may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer instructions are loaded and executed on a computer, the processes or functions described in accordance with the embodiments of the present application are produced in whole or in part. The computer may be a general purpose computer, a special purpose computer, a computer network, or other programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another computer-readable storage medium, for example, by a wired (e.g., coaxial cable, optical fiber, digital Subscriber Line (DSL)), or wireless (e.g., infrared, microwave, etc.) means from one website, computer, server, or data center to another. The computer readable storage medium may be any available medium that can be accessed by a computer or a data storage device such as a server, data center, etc. that contains an integration of one or more available media. The usable medium may be a magnetic medium (e.g., floppy disk, hard disk, magnetic tape, etc.), an optical medium (e.g., DVD), or a semiconductor medium (e.g., solid state disk), etc.
It should be noted that in this document, relational terms such as "first" and "second" and the like are used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising one … …" does not exclude the presence of other like elements in a process, method, article, or apparatus that comprises the element.
The foregoing is only a specific embodiment of the invention to enable those skilled in the art to understand or practice the invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method for predicting a hatching time, comprising:
acquiring sound data inside a hatching device within a current preset time period and within N-1 preset time periods before the current preset time period, wherein N is greater than 1;
extracting sound frequency characteristics in the sound data in each preset time period to obtain N sound frequency characteristics, wherein the sound frequency characteristics are used for representing the number of sounds in the hatching device in each preset time period;
and fitting a curve of time and sound frequency characteristics according to the N sound frequency characteristics, and obtaining a hatching prediction time when the trend of the curve shows a gentle trend.
2. The method for predicting a hatching time according to claim 1, wherein the extracting the sound frequency characteristics in the sound data in each preset time period to obtain N sound frequency characteristics comprises:
generating an envelope of sound data within each preset time period;
and obtaining the wave crest number of each envelope, and taking the wave crest number as the N sound frequency characteristics.
3. The method for predicting a hatching time according to claim 1, wherein the extracting the sound frequency characteristics in the sound data in each preset time period to obtain N sound frequency characteristics comprises:
and extracting the sound frequency characteristics in the sound data in each preset time period by using a voice activity detection algorithm to obtain N sound frequency characteristics.
4. The method for predicting a hatching time according to claim 1, wherein the extracting the sound frequency characteristics in the sound data in each preset time period to obtain N sound frequency characteristics comprises:
generating an envelope of sound data within each preset time period;
obtaining the number of wave crests of each envelope, and taking the number of wave crests as N first sound frequency sub-features;
extracting second sound frequency sub-features from the sound data in each preset time period by using a voice activity detection algorithm to obtain N second sound frequency sub-features;
and generating the N sound frequency characteristics according to the N first sound frequency sub-features and the N second sound frequency sub-features.
5. The method according to any one of claims 1 to 4, wherein the fitting a curve of time and sound frequency characteristics according to the N sound frequency characteristics and obtaining the hatching prediction time when the trend of the curve shows a gentle trend comprises:
fitting a first curve according to the N sound frequency characteristics and the moments corresponding to the N sound frequency characteristics;
and when the trend of the first curve shows a gentle trend, obtaining the hatching prediction time.
6. The method according to any one of claims 1 to 4, wherein the fitting a curve of time and sound frequency characteristics according to the N sound frequency characteristics and obtaining the hatching prediction time when the trend of the curve shows a gentle trend comprises:
inputting the sound frequency characteristics in the current preset time period and the N-1 preset time periods before the current preset time period into a pre-trained hatching time prediction model to obtain a first sound frequency prediction characteristic;
inputting the sound frequency characteristics in the current preset time period and the N-2 preset time periods before the current preset time period, and the first sound frequency prediction characteristic, into the hatching time prediction model to obtain a second sound frequency prediction characteristic;
inputting the sound frequency characteristics in the current preset time period and the N-i preset time periods before the current preset time period, and i-1 sound frequency prediction characteristics, into the hatching time prediction model to obtain an i-th sound frequency prediction characteristic, wherein i successively takes all values greater than 2 and less than N, and the i-1 sound frequency prediction characteristics refer to the first sound frequency prediction characteristic through the (i-1)-th sound frequency prediction characteristic;
fitting a second curve according to the N sound frequency characteristics, the time corresponding to the N sound frequency characteristics, the N-1 sound frequency prediction characteristics and the time corresponding to the N-1 sound frequency prediction characteristics;
and when the trend of the second curve shows a gentle trend, obtaining the hatching prediction time.
7. The method according to claim 6, wherein the obtaining the hatching prediction time when the trend of the second curve shows a gentle trend comprises:
calculating a difference value between the i-th sound frequency prediction characteristic and the (i-1)-th sound frequency prediction characteristic;
and if the difference value is smaller than a preset value, taking the moment corresponding to the i-th sound frequency prediction characteristic as the hatching prediction time.
8. A device for predicting a hatching time, comprising:
the acquisition module is used for acquiring sound data in the hatching device within a current preset time period and within N-1 preset time periods before the current preset time period, wherein N is greater than 1;
the extracting module is used for extracting sound frequency characteristics in the sound data in each preset time period to obtain N sound frequency characteristics, wherein the sound frequency characteristics are used for representing the number of sounds in the hatching device in each preset time period;
and the processing module is used for fitting a curve of time and sound frequency characteristics according to the N sound frequency characteristics, and obtaining the hatching prediction time when the trend of the curve shows a gentle trend.
9. An electronic device, comprising: the device comprises a processor, a memory and a communication bus, wherein the processor and the memory are communicated with each other through the communication bus;
the memory is used for storing a computer program;
the processor is configured to execute the program stored in the memory to implement the method for predicting a hatching time according to any one of claims 1 to 7.
10. A computer readable storage medium storing a computer program, wherein the computer program, when executed by a processor, implements the method for predicting a hatching time according to any one of claims 1 to 7.
CN202110362045.3A 2021-04-02 2021-04-02 Method, device, equipment and storage medium for predicting hatching time Active CN113095559B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110362045.3A CN113095559B (en) 2021-04-02 2021-04-02 Method, device, equipment and storage medium for predicting hatching time


Publications (2)

Publication Number Publication Date
CN113095559A CN113095559A (en) 2021-07-09
CN113095559B true CN113095559B (en) 2024-04-09

Family

ID=76673579

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110362045.3A Active CN113095559B (en) 2021-04-02 2021-04-02 Method, device, equipment and storage medium for predicting hatching time

Country Status (1)

Country Link
CN (1) CN113095559B (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005311534A (en) * 2004-04-19 2005-11-04 Ntt Docomo Inc Server, information communication terminal, and alarm system
JP2014187625A (en) * 2013-03-25 2014-10-02 Pioneer Electronic Corp Audio signal processing device, acoustic device, method of controlling audio signal processing device, and program
CN107578344A (en) * 2017-07-28 2018-01-12 深圳市盛路物联通讯技术有限公司 A kind of monitoring method of biological information, and monitoring device
KR20180038833A (en) * 2016-10-07 2018-04-17 건국대학교 글로컬산학협력단 Method of estimating environment of layer chicken based on chickens sound and apparatus for the same
WO2018150616A1 (en) * 2017-02-15 2018-08-23 日本電信電話株式会社 Abnormal sound detection device, abnormality degree calculation device, abnormal sound generation device, abnormal sound detection learning device, abnormal signal detection device, abnormal signal detection learning device, and methods and programs therefor
US10062378B1 (en) * 2017-02-24 2018-08-28 International Business Machines Corporation Sound identification utilizing periodic indications
WO2019019667A1 (en) * 2017-07-28 2019-01-31 深圳光启合众科技有限公司 Speech processing method and apparatus, storage medium and processor
CN110111815A (en) * 2019-04-16 2019-08-09 平安科技(深圳)有限公司 Animal anomaly sound monitoring method and device, storage medium, electronic equipment
CN110738351A (en) * 2019-09-10 2020-01-31 北京海益同展信息科技有限公司 intelligent monitoring device, system and control method
CN110955286A (en) * 2019-10-18 2020-04-03 北京海益同展信息科技有限公司 Poultry egg monitoring method and device
CN111583962A (en) * 2020-05-12 2020-08-25 南京农业大学 Sheep rumination behavior monitoring method based on acoustic analysis


Also Published As

Publication number Publication date
CN113095559A (en) 2021-07-09

Similar Documents

Publication Publication Date Title
CN112700786B (en) Speech enhancement method, device, electronic equipment and storage medium
Boashash et al. A review of time–frequency matched filter design with application to seizure detection in multichannel newborn EEG
US11282514B2 (en) Method and apparatus for recognizing voice
CN115048984A (en) Sow oestrus recognition method based on deep learning
CN113095559B (en) Method, device, equipment and storage medium for predicting hatching time
CN108903914A (en) A kind of heart sound kind identification method of the MFCC decomposed based on EMD
CN111317467A (en) Electroencephalogram signal analysis method and device, terminal device and storage medium
CN117457017B (en) Voice data cleaning method and electronic equipment
CN103323853A (en) Fish identification method and system based on wavelet packets and bispectrum
CN116570239A (en) Snore detection method and device
CN112989106B (en) Audio classification method, electronic device and storage medium
CN110322894B (en) Sound-based oscillogram generation and panda detection method
CN112237433B (en) Electroencephalogram signal abnormity monitoring system and method
CN111916107A (en) Training method of audio classification model, and audio classification method and device
CN114692693A (en) Distributed optical fiber signal identification method, device and storage medium based on fractal theory
CN113238206B (en) Signal detection method and system based on decision statistic design
CN110689875A (en) Language identification method and device and readable storage medium
CN114325722B (en) High-gain detection method and system based on underwater acoustic beacon signal multi-pulse accumulation
CN114863640B (en) Feature enhancement and data amplification method and motion detection device thereof
CN118609586B (en) Sound data processing method and system
CN116665701B (en) Method, system and equipment for classifying fish swarm ingestion intensity
CN113707159B (en) Power grid bird-involved fault bird species identification method based on Mel language graph and deep learning
CN112382302B (en) Baby crying recognition method and terminal equipment
CN115659225A (en) Laying hen house noise stress source identification and classification method based on deep learning
CN113392771A (en) Plant growth state diagnosis method, system and device based on deep learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
CB02 Change of applicant information

Address after: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant after: Jingdong Technology Information Technology Co.,Ltd.

Address before: 601, 6 / F, building 2, No. 18, Kechuang 11th Street, Daxing District, Beijing, 100176

Applicant before: Jingdong Shuke Haiyi Information Technology Co.,Ltd.

GR01 Patent grant