Spectrograms are used extensively in the fields of music, linguistics, sonar, radar, speech processing,[1] seismology, ornithology, and others. Spectrograms of audio can be used to identify spoken words phonetically, and to analyse the various calls of animals.
A spectrogram can be generated by an optical spectrometer, by a bank of band-pass filters, by the Fourier transform, or by a wavelet transform (in which case it is also known as a scaleogram or scalogram).[2]
A common format is a graph with two geometric dimensions: one axis represents time, and the other axis represents frequency; a third dimension indicating the amplitude of a particular frequency at a particular time is represented by the intensity or color of each point in the image.
There are many variations of format: sometimes the vertical and horizontal axes are switched, so time runs up and down; sometimes as a waterfall plot where the amplitude is represented by height of a 3D surface instead of color or intensity. The frequency and amplitude axes can be either linear or logarithmic, depending on what the graph is being used for. Audio would usually be represented with a logarithmic amplitude axis (probably in decibels, or dB), and frequency would be linear to emphasize harmonic relationships, or logarithmic to emphasize musical, tonal relationships.
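As a minimal sketch of such a graph's underlying data (using Python's SciPy, which is not named in the text above), the two geometric axes and the amplitude values can be computed and converted to a logarithmic (decibel) amplitude scale:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000                                  # sample rate, Hz
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)            # 440 Hz test tone

# f: frequency axis, tt: time axis, Sxx: power at each (frequency, time) point
f, tt, Sxx = spectrogram(x, fs=fs, nperseg=256)

# logarithmic amplitude axis, as is usual for audio
Sxx_db = 10 * np.log10(Sxx + 1e-12)

# the frequency bin nearest 440 Hz should dominate every time slice
peak_hz = f[Sxx.argmax(axis=0)]
print(peak_hz.min(), peak_hz.max())
```

The resulting `Sxx_db` array is exactly the third dimension described above: one would typically render it with an image or pseudocolor plot, with `tt` and `f` as the two geometric axes.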
The bandpass filters method usually uses analog processing to divide the input signal into frequency bands; the magnitude of each filter's output controls a transducer that records the spectrogram as an image on paper.[3]
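For the Fourier method, the quantity plotted is the squared magnitude of the short-time Fourier transform (STFT) of the signal; this is the standard definition (restated here, since the original formula did not survive extraction), for a signal x(t) and window function w:

```latex
\operatorname{spectrogram}(t, \omega)
  = \left| \operatorname{STFT}(t, \omega) \right|^{2}
  = \left| \int_{-\infty}^{\infty} x(\tau)\, w(\tau - t)\, e^{-i\omega\tau}\, d\tau \right|^{2}
```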
From the formula above, it appears that a spectrogram contains no information about the exact, or even approximate, phase of the signal that it represents. For this reason, it is not possible to reverse the process and generate a copy of the original signal from a spectrogram, though in situations where the exact initial phase is unimportant it may be possible to generate a useful approximation of the original signal. The Analysis & Resynthesis Sound Spectrograph[6] is an example of a computer program that attempts to do this. The Pattern Playback was an early speech synthesizer, designed at Haskins Laboratories in the late 1940s, that converted pictures of the acoustic patterns of speech (spectrograms) back into sound.
The size and shape of the analysis window can be varied. A smaller (shorter) window will produce more accurate results in timing, at the expense of precision of frequency representation. A larger (longer) window will provide a more precise frequency representation, at the expense of precision in timing representation. This is an instance of the Heisenberg uncertainty principle, that the product of the precision in two conjugate variables is greater than or equal to a constant (B*T>=1 in the usual notation).[8]
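The tradeoff can be demonstrated numerically. This sketch (Python/SciPy, not part of the original text) analyzes the same signal with a short and a long window and compares the resulting bin spacings on each axis:

```python
import numpy as np
from scipy.signal import spectrogram

fs = 8000
t = np.arange(0, 2.0, 1 / fs)
x = np.sin(2 * np.pi * 440 * t)

# short window: fine time resolution, coarse frequency resolution
f_s, t_s, _ = spectrogram(x, fs=fs, nperseg=64)
# long window: coarse time resolution, fine frequency resolution
f_l, t_l, _ = spectrogram(x, fs=fs, nperseg=1024)

df_short = f_s[1] - f_s[0]     # fs / 64  = 125 Hz per frequency bin
df_long = f_l[1] - f_l[0]      # fs / 1024 = 7.8125 Hz per frequency bin
dt_short = t_s[1] - t_s[0]     # small hop between time slices
dt_long = t_l[1] - t_l[0]      # large hop between time slices
print(df_short, df_long, dt_short, dt_long)
```

Shrinking one bin spacing necessarily grows the other, which is the B·T ≥ 1 constraint in discrete form.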
I am looking to plot a spectrogram of a signal, but I am running into some issues. I have created a minimal working example and will walk through it. What follows is a bit messy, but I think that a clear question emerges at the end, and that showing the messiness is a good way of showing my confusion, which is the problem:
Can you see the skinny line down along the x-axis? That is the only content of the plot. So initially, the axis scaling is all messed up. This has been the case for all 3 times I have tried, and every time, the data is squished up against the x-axis. But let's zoom in on the output we got:
[image: zoomed-in view of the spectrogram plot]
Now, I can make out that we have a frequency that rises with time, but the plot does look quite bad. The y-ticks are gone, and I can not hover to see the y-values. I also would prefer it to be continuous in colour instead of contour lines. I have tried calling the last plot command with heatmap instead of plot, which produces the following:
[image: heatmap output]
I can zoom into this as well, but the resolution seems horrible, and I can not see what in my example should reduce it:
[image: zoomed-in heatmap]
In addition there is no colorbar, which I have not been able to add.
That helps so much! Playing around with the number of samples per FFT of the spectrogram made things much clearer. The frequency axis also extends from 0 to fs/2, so setting the y-limits to what I could tell were the interesting parts, based on an FFT of the whole signal, turned out to be very helpful as well. It took some tinkering, but the result is great
Here, data is a Morse signal with some high-frequency noise that fades in and out on top. The resulting spectrogram became the following:
[image: spectrogram of the Morse signal]
One can see the Morse signal and the noise very clearly.
NB:
The Wigner-Ville transform should achieve better time-frequency resolution. See this reference for a nice summary and comparison of different Time-Frequency transform methods.
For an old Julia implementation of the W-V transform, you may want to look here.
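The linked Julia implementation is not reproduced here, but a crude discrete Wigner-Ville sketch (in Python rather than Julia, with all names my own) looks like this: for each time index, form the instantaneous autocorrelation x[n+k]·conj(x[n−k]) and take an FFT over the lag k.

```python
import numpy as np

def wigner_ville(x, lag_window=64):
    """Crude discrete Wigner-Ville distribution of a complex signal.

    Returns an array of shape (lag_window, len(x)): frequency x time.
    """
    n = len(x)
    half = lag_window // 2
    wvd = np.zeros((lag_window, n))
    for m in range(n):
        # instantaneous autocorrelation over the lags that stay in range
        kmax = min(m, n - 1 - m, half - 1)
        r = np.zeros(lag_window, dtype=complex)
        for k in range(-kmax, kmax + 1):
            r[k % lag_window] = x[m + k] * np.conj(x[m - k])
        # FFT over lag gives the frequency axis; the WVD is real-valued
        wvd[:, m] = np.fft.fft(r).real
    return wvd

# pure tone at 1/8 of the sample rate
f0 = 0.125
sig = np.exp(2j * np.pi * f0 * np.arange(256))
wvd = wigner_ville(sig, lag_window=64)
# for a pure tone, the WVD concentrates at bin 2*f0*lag_window = 16
print(wvd[:, 128].argmax())
```

Note the doubled frequency index (2·f0), a standard quirk of the Wigner-Ville definition; practical implementations also add smoothing to suppress cross-terms, which this sketch omits.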
You are getting this error because spectrogram requires an AbstractVector. The mono function you are using from SampledSignals returns an N×1 matrix. You can fix this by using the vec function to return it as a vector instead.
I am doing a project where I will be implementing a trained neural network (trained with Keras) onto a STM32F746-DISCOVERY board with X-Cube AI. The goal is to train the network to recognize audio samples converted into spectrograms. This would mean that on the microcontroller, I would need to convert the audio input into spectrogram images, and then input that into the neural network for recognition.
In FP-AI-SENSING1 v3.0.0, there is an STM32_AI_AudioPreprocessing_Library middleware library that can be used for exactly this purpose. The library provides the building blocks for spectral analysis and feature extraction.
Recalculating common tables is only required if you change some preprocessing parameters and want to avoid going through MelFilterbank_Init and Window_Init. The lookup tables stored in common_tables.h are for a given configuration. If you use different preprocessing parameters, these lookup tables can be created at runtime in RAM using the _Init() functions. This is not the case in FP-AI-SENSING1, where the preprocessing lookup tables have been generated offline and stored in ROM (Flash) in common_tables.c
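As a rough illustration of what such an init function computes (sketched in Python/NumPy, not the ST library's actual C API), a mel filterbank table can be built once from the configuration and then reused for every audio frame:

```python
import numpy as np

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_mels, n_fft, fs, fmin=0.0, fmax=None):
    """Triangular mel filters as a (n_mels, n_fft//2 + 1) weight matrix."""
    fmax = fmax or fs / 2
    # filter edges equally spaced on the mel scale, then mapped back to Hz
    mels = np.linspace(hz_to_mel(fmin), hz_to_mel(fmax), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mels) / fs).astype(int)
    fb = np.zeros((n_mels, n_fft // 2 + 1))
    for i in range(n_mels):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):          # rising slope
            fb[i, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):         # falling slope
            fb[i, k] = (right - k) / max(right - center, 1)
    return fb

# built once at startup (the "_Init" step); applied per frame afterwards
fb = mel_filterbank(n_mels=20, n_fft=512, fs=16000)
print(fb.shape)
```

Precomputing `fb` offline and storing it in flash, as FP-AI-SENSING1 does, trades RAM and startup time for a fixed configuration.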
Not knowing your use case, you might also check out FluidBufSTFT from the FluCoMa package www.flucoma.org. Maybe the data formatting will be useful there. You can see the spectrogram using the FluidWaveform object.
I repeatedly get the error message No module named spectrogram when importing on my Windows machine, which is running Python version 2.7.12. In contrast, when importing the module on my Mac, which runs version 2.7.11, I have no problems. I can't find anything that suggests that this is a version-specific problem and was hoping someone might be able to help me fix this.
The Merlin app is remarkably good at ID by cruddy wave file - better, in fact, than by photo ID. The reason, I think, is that it picks out specific frequencies in a 1D FFT and maps them to a relatively small set of signatures. Thus, a lot of the noise is irrelevant.
So in my mind, the ideal interface would allow the user to specify the min and max frequency (with default 0 to 20000 Hz) and the type of scaling (i.e. linear, logarithmic, etc.). This kind of interface would be dynamic, showing you the detailed spectrogram for a given window of audio, and that window would move as the audio was played. Something like this:
Perhaps creating a module that would read the audio data from audio files on the fly and show this interface would be interesting. Also, providing interactive elements to change frequency and zoom would make visualization of the audio files much more useful than static spectrogram images.
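The core of that dynamic view can be sketched quite simply (Python, all function and variable names hypothetical): slice a precomputed spectrogram to the requested frequency band and to the time window around the playback position.

```python
import numpy as np
from scipy.signal import spectrogram

def spectrogram_view(Sxx, f, t, fmin, fmax, t_start, t_stop):
    """Return the sub-array of Sxx restricted to [fmin, fmax] x [t_start, t_stop]."""
    fi = (f >= fmin) & (f <= fmax)
    ti = (t >= t_start) & (t <= t_stop)
    return Sxx[np.ix_(fi, ti)], f[fi], t[ti]

fs = 44100
x = np.random.default_rng(0).standard_normal(fs * 2)   # 2 s of noise as stand-in audio
f, t, Sxx = spectrogram(x, fs=fs, nperseg=1024)

# the moving window: show 0-20000 Hz for the half second around t = 1.0 s
view, fv, tv = spectrogram_view(Sxx, f, t, 0, 20000, 0.75, 1.25)
print(view.shape)
```

A UI would re-call `spectrogram_view` with an advancing `t_start`/`t_stop` as the audio plays, redrawing only the visible slice.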
Having seen the posts from 2010 regarding plotting a spectrogram, I downloaded the relevant files for the workaround, only to find this doesn't work on the 64-bit version. The recommended solution: to operate the 32-bit version in parallel. Please can someone tell me how to do a spectrogram sensibly: it's a pretty normal thing to want to do with noise or vibration data...
If you'd like to try this approach, and if you're using DIAdem 2015 or later, then I'll have to post a version of the tool that works with modern DIAdems. It's on my short list of things to update, but I keep not getting around to it (Grrr).
Yes, Fig. 1 in that link is the spectrogram type result I wish to plot, showing how the spectra evolve over time (usually with multiple short period FFTs, each covering a second or so of a longer data record).
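The "multiple short-period FFTs" idea can be sketched directly (in Python/NumPy here, not DIAdem): split the record into one-second blocks and FFT each block, yielding one spectrum per second of data.

```python
import numpy as np

fs = 1000                       # 1 kHz sample rate
record = np.sin(2 * np.pi * 50 * np.arange(10 * fs) / fs)   # 10 s of a 50 Hz tone

block = fs                      # one second of samples per FFT
n_blocks = len(record) // block
segments = record[: n_blocks * block].reshape(n_blocks, block)

# one magnitude spectrum per second; rows are time, columns are frequency bins
spectra = np.abs(np.fft.rfft(segments, axis=1))
freqs = np.fft.rfftfreq(block, d=1 / fs)
print(spectra.shape, freqs[spectra[0].argmax()])
```

Stacked as an image with time on one axis and frequency on the other, `spectra` is exactly the spectrogram-type result described.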
Please download the edited script application in the next few days from the below ftp URL. All the instructions and screenshots on the existing web pages you saw should still be valid, but this version will run in DIAdem 2015 and later:
Hey, so, a long time ago I once thought: hey, how nice would it be if we could switch from the waveform view of audio clips to a spectrogram view! But I thought maybe in the future, because that sounds really hard to implement and probably CPU-heavy. But one year ago Reaper did it. I don't know if any other DAW did it too.
The importance of this is that the waveform view only tells us amplitude and phase. With some audio material it gets pretty hard to spot the exact places where some artifact or event is happening. More importantly, for editing music and especially for time correction, it can get pretty hard to tell from just the waveform where to make the cuts and where to align to the grid with some audio material, such as guitars for example. On the other hand, with a spectrogram it is really easy to spot with precision the exact time of the acoustic guitar strums and picks, or exactly where in a certain note some noise occurred, so we can try to fix that within the DAW before resorting to external editors such as RX.