I have observed that embedded album art increases the kbps rate of FLAC files. The file size (in bits) divided by the length of the audio gives kilobits per second. The effect is more noticeable the shorter the track is. Keep in mind that when a track is not playing, opening its info/tags box displays the correct (?) bitrate, which then increases once it is played. It seems a bit odd to me. Am I missing something when I say that kbps is supposed to indicate the audio data rate and thereby (partially) demonstrate the audio quality, or is everything working as intended? I have only noticed this behavior in Poweramp so far. It kind of reminds me of that old problem where MP3 album art would increase the reported length of tracks. Thanks for any advice!
FLAC should contain this info in the file header. If it doesn't, and there is something weird with the tags as well, the estimation (which happens in that case) may be off. There is all kinds of weirdness in files in the wild, so I can't say for sure; I may be able to add a workaround if you share the file for testing at gpma...@gmail.com
Does that mean that when Poweramp displays the increased kbps rate (with the embedded image) it is actually the overall bitrate and is therefore correct? Or is it supposed to show the audio bitrate only? I mean what is the point of seeing the overall bitrate? It's not like the cover art is partially loaded every second, right? I thought that the bitrate is supposed to indicate sound quality... maybe a setting could be implemented that lets the user decide which bitrate is being displayed.
@soundpimp for Poweramp and FLAC, the bitrate estimation during tag scan differs from the playback estimation. The first one excludes tag size, but the playback estimation (as implemented in ffmpeg) uses the whole file size for the estimation, including metadata. This is usually OK for moderate image sizes, but if you embed a very large image for some reason, it does indeed offset the displayed bitrate while playing.
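To make the difference between the two estimates concrete, here is a minimal sketch of the arithmetic. The file and cover-art sizes are hypothetical numbers, not taken from any real file; the point is only that dividing the whole file size by the duration inflates the result when a large image is embedded.

```java
public class BitrateEstimate {
    // Playback-style estimate (as described for ffmpeg): whole file size,
    // metadata included. Sizes in bytes, duration in seconds, result in kbps.
    static long playbackEstimate(long fileSizeBytes, double durationSec) {
        return Math.round(fileSizeBytes * 8 / durationSec / 1000);
    }

    // Tag-scan-style estimate: exclude embedded art and other metadata.
    static long tagScanEstimate(long fileSizeBytes, long metadataBytes, double durationSec) {
        return Math.round((fileSizeBytes - metadataBytes) * 8 / durationSec / 1000);
    }

    public static void main(String[] args) {
        long fileSize = 30_000_000L; // 30 MB FLAC (hypothetical)
        long coverArt = 5_000_000L;  // 5 MB embedded image (hypothetical)
        double duration = 180.0;     // 3-minute track

        System.out.println(tagScanEstimate(fileSize, coverArt, duration)); // audio-only rate
        System.out.println(playbackEstimate(fileSize, duration));          // inflated by the image
    }
}
```

With these numbers the tag-scan estimate comes out at 1111 kbps and the playback estimate at 1333 kbps, which is exactly the kind of jump the original poster describes.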
So if I want to avoid this issue AND keep artwork embedded, I'd have to switch to ALAC... It doesn't care if the image is 10 MB, the kbps rate stays the same! It even removes the padded file space after you remove the artwork!
Now the tag scan bitrate estimation for all ALAC files is at 1536/1411 kbps somehow... it is probably the AIFF/WAV file bitrate it was converted from. I think I can live with that tradeoff, since the playback estimation is correct now.
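Those particular numbers are not arbitrary: 1411 and 1536 kbps are exactly the uncompressed PCM rates of 16-bit stereo audio at 44.1 kHz (CD) and 48 kHz, which supports the guess that the estimate reflects the source WAV/AIFF. A quick check of the arithmetic:

```java
public class PcmBitrate {
    // Uncompressed PCM bitrate in kbps: sample rate x bit depth x channels.
    static int pcmKbps(int sampleRate, int bitsPerSample, int channels) {
        return sampleRate * bitsPerSample * channels / 1000;
    }

    public static void main(String[] args) {
        System.out.println(pcmKbps(44100, 16, 2)); // CD audio -> 1411 kbps
        System.out.println(pcmKbps(48000, 16, 2)); // 48 kHz   -> 1536 kbps
    }
}
```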
I have seen articles about Java being much faster than it used to be, coming close to C/C++, and now having real-time extensions as well. Is this reality? Does it require hardcore coding and tuning to achieve the 50-100% of C performance that some are claiming?
In Java, you can always use JNI (Java Native Interface) and move your computationally heavy code into a C module (or assembly using SSE if you really need the power). So I'd say use Java and get your code working. If it turns out that you don't meet your performance goals, use JNI.
90% of the code will most likely be glue code and application stuff anyway. But keep in mind that you lose some of the cross-platform features that way. If you can live with that, JNI will always leave the door open for native code performance.
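A common way to structure this is to declare the hot path as a `native` method and keep a pure-Java fallback, so the application still runs while the C module is being written (or on platforms where it isn't built). The library name `dsp` and the gain function here are hypothetical, just to show the pattern:

```java
public class Dsp {
    static final boolean NATIVE_AVAILABLE;
    static {
        boolean ok;
        try {
            System.loadLibrary("dsp"); // hypothetical native library name
            ok = true;
        } catch (UnsatisfiedLinkError e) {
            ok = false; // no native build present: fall back to pure Java
        }
        NATIVE_AVAILABLE = ok;
    }

    // Implemented in C via JNI when libdsp is present.
    private static native void scaleNative(float[] buf, float gain);

    // Pure-Java fallback with identical semantics.
    static void scale(float[] buf, float gain) {
        if (NATIVE_AVAILABLE) { scaleNative(buf, gain); return; }
        for (int i = 0; i < buf.length; i++) buf[i] *= gain;
    }

    public static void main(String[] args) {
        float[] b = {1f, 2f};
        scale(b, 0.5f);
        System.out.println(b[0] + " " + b[1]); // -> 0.5 1.0
    }
}
```

This keeps the JNI boundary at one well-defined function, so the 90% glue code stays portable and only the inner loop ever needs a native build.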
Java is fine for many audio applications. Contrary to some of the other posters, I find Java audio a joy to work with. Compare the API and resources available to you to the horrendous, barely documented mindf*k that is CoreAudio and you'll be a believer. Java audio suffers from some latency issues, though for many apps this is irrelevant, and a lack of codecs. There are also plenty of people who've never bothered to take the time to write good audio playback engines (hint: never close a SourceDataLine, instead write zeros to it), and subsequently blame Java for their problems. From an API point of view, Java audio is very straightforward, very easy to use, and there is lots and lots of guidance over at jsresources.org.
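The "write zeros instead of closing" hint can be sketched as follows. A zero-filled buffer is silence in signed PCM, so feeding it during playback gaps keeps the line open and avoids the cost (and occasional failure) of reopening it later. The device-dependent part is guarded, since no audio hardware may be available:

```java
import javax.sound.sampled.*;

public class KeepLineAlive {
    public static void main(String[] args) {
        AudioFormat fmt = new AudioFormat(44100f, 16, 2, true, false);
        // For signed PCM, a zero-filled buffer is pure silence.
        byte[] silence = new byte[fmt.getFrameSize() * 1024];
        System.out.println("silence chunk: " + silence.length + " bytes");
        try {
            SourceDataLine line = AudioSystem.getSourceDataLine(fmt);
            line.open(fmt, silence.length * 4);
            line.start();
            // During playback gaps, keep feeding silence rather than closing
            // the line; close only on real application shutdown.
            line.write(silence, 0, silence.length);
            line.drain();
            line.close();
        } catch (Exception e) {
            System.out.println("no audio device available; pattern shown above");
        }
    }
}
```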
If your program can keep up with the throughput on average, and you have enough room for latency, then you should be able to use queues for inputs and outputs, and the only parts of the program that are critical for timing are the pieces that put the data into the input queue and take it out of the output queue and send it to a DAC/speaker/whatever.
Delay lines have low computational load, you just need enough memory (+ memory bandwidth)... in fact you should probably just use the input/output queues for it, i.e. start putting data into the input queue immediately, and start taking data out of the output queue 30s later. If it's not there, your program is too slow.
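The queue-as-delay-line idea can be sketched in a few lines: pre-fill the queue with zeros so the consumer starts a fixed number of samples behind the producer. The 4-sample delay here stands in for the 30-second one in the text, just to keep the demo small:

```java
import java.util.ArrayDeque;

public class QueueDelay {
    public static void main(String[] args) {
        int delaySamples = 4; // output lags input by this many samples
        ArrayDeque<Double> line = new ArrayDeque<>();
        // Pre-fill with zeros: reading starts delaySamples behind writing.
        for (int i = 0; i < delaySamples; i++) line.add(0.0);

        double[] input = {1, 2, 3, 4, 5, 6, 7, 8};
        StringBuilder out = new StringBuilder();
        for (double s : input) {
            line.add(s);                                          // producer side
            out.append((int) (double) line.remove()).append(' '); // consumer side
        }
        System.out.println(out.toString().trim()); // -> 0 0 0 0 1 2 3 4
    }
}
```

In a real application the producer and consumer would be separate threads on a bounded blocking queue, and the queue depth itself is the delay.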
I think latency will be your major problem - it is hard enough to keep latency low in C/C++ on modern OSes, and Java surely adds to the problem (garbage collector). The general design for "real-time" audio processing is to have your processing threads running with real-time scheduling (SCHED_FIFO on Linux kernels, the equivalent on other OSes), and those threads should never block. This means no system calls, no malloc, no IO of course, etc... Even paging is a problem (getting a page from disk to memory can easily take several ms), so you should lock some pages to be sure they are never swapped out.
You may be able to do those things in Java, but Java makes it more complicated, not easier. I would look into a mixed design, where the core would be in C, and the rest (GUI, etc.) would be in Java if you want.
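Real-time scheduling and page locking are not reachable from pure Java, but the "never allocate in the hot path" part of this discipline is. The usual pattern is to allocate every buffer up front and only reuse them in the processing loop, so the garbage collector has nothing to do at steady state. A minimal sketch:

```java
public class NoAllocLoop {
    static final int BLOCK = 256;
    // All buffers allocated once, up front.
    static final float[] in = new float[BLOCK];
    static final float[] out = new float[BLOCK];

    // Hot path: reads and writes preallocated arrays, creates no objects.
    static void process(float[] src, float[] dst) {
        for (int i = 0; i < src.length; i++) dst[i] = src[i] * 0.5f;
    }

    public static void main(String[] args) {
        for (int i = 0; i < BLOCK; i++) in[i] = 1.0f;
        // Steady state: many blocks processed with zero allocation.
        for (int block = 0; block < 1000; block++) process(in, out);
        System.out.println(out[0]); // -> 0.5
    }
}
```

This doesn't remove GC pauses caused by the rest of the program, which is one reason the mixed C-core/Java-GUI design below remains attractive.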
One thing I didn't see in your question is whether you need to play out these processed samples or if you're doing something else with them (encoding them into a file, for example). I'd be more worried about the state of Java's sound engine than in how fast the JVM can crunch samples.
I pushed pretty hard on javax.sound.sampled a few years back and came away deeply unimpressed -- it doesn't compare with equivalent frameworks like OpenAL or Mac/iPhone's Core Audio (both of which I've used at a similar level of intensity). javax.sound.sampled requires you to push your samples into an opaque buffer of unknown duration, which makes synchronization nigh impossible. It's also poorly documented (very hard to find examples of streaming indeterminate-length audio over a Line as opposed to the trivial examples of in-memory Clips), has unimplemented methods (DataLine.getLevel()... whose non-implementation isn't even documented), and to top it off, I believe Sun laid off the last JavaSound engineer years ago.
If I had to use a Java engine for sound mixing and output, I'd probably try to use the JOAL bindings to OpenAL as a first choice, since I'd at least know the engine was currently supported and capable of very low-latency. Though I suspect in the long run that Nils is correct and you'll end up using JNI to call the native sound API.
Yes, Java is great for audio applications. You can use Java and access the audio layer via ASIO and get really low latency on the Windows platform (64-sample latency, which is next to nothing). It means you will have lip-sync on video/movies. There is more latency on Mac, as there is no ASIO to "shortcut" the combination of OS X and "Java on top", but it is still OK. Linux too, but I know it less well. See soundpimp.com for a practical (and world-first) example of Java and ASIO working in perfect harmony. Also see the NRK Radio&tv Android app, which contains a software MP3 decoder written in Java. You can do most audio things with Java, and then use a native layer if something is extra time-critical.