has anyone benchmarked qoa to see roughly how many instructions per sample it needs? all i see here is that it's more than adpcm and less than mp3, but those differ by orders of magnitude
like, can you reasonably qoa-compress real-time 16ksps audio on a 16 megahertz atmega328?
hmm, https://phoboslab.org/log/2023/04/qoa-specification has some benchmark results, let's see... seems like he encoded 9807 seconds of 44.1ksps stereo in 25.8 seconds and decoded it in 3.00 seconds on an i7-6700k running singlethreaded. what does that imply for other machines?
it seems to be integer code (because bit-exact reproducibility between the encoder's and decoder's predictors is important), and a significant part of it is 16-bit. https://ark.intel.com/content/www/xl/es/ark/products/88195/i... says it's a 4.2 gigahertz skylake. agner says skylake can do 4–6 ipc (well, μops/cycle) https://www.agner.org/optimize/blog/read.php?i=628, coincidentally testing on an i7-6700k himself, but let's assume 3 ipc, because it's usually hard to reach even that level of ilp in useful code
so that's about 380 μops per sample if i'm doing my math right; that might be on the order of 400 32-bit integer instructions per sample on an in-order processor. if (handwaving wildly now!) that's 600 8-bit instructions, the 16 megahertz atmega328 should be able to encode somewhere in the range of 16–32 kilosamples per second (16 megahertz ÷ 600 ≈ 27 kilosamples/second)
so, quite plausibly
for decoding the same math gives 43 μops per sample rather than 380
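here's the whole back-of-envelope as code, so anyone can poke at it (the 3 μops/cycle and the 600-instruction avr fudge factor are my assumptions from above, not measurements):

    #include <stdio.h>

    int main(void) {
        /* measured figures from the qoa-specification post */
        double audio_s  = 9807.0;                  /* test corpus length, seconds */
        double samples  = audio_s * 44100.0 * 2.0; /* 44.1ksps stereo             */
        double clock_hz = 4.2e9;                   /* i7-6700k single-core clock  */
        double ipc      = 3.0;                     /* assumed sustained uops/cycle */

        /* uops per sample = wall time * clock * ipc / samples */
        printf("encode: ~%.0f uops/sample\n", 25.8 * clock_hz * ipc / samples);
        printf("decode: ~%.0f uops/sample\n", 3.00 * clock_hz * ipc / samples);

        /* wild handwave: ~600 avr instructions/sample at ~1 per cycle */
        printf("atmega328 @ 16 MHz: ~%.1f ksps encode\n", 16.0e6 / 600.0 / 1e3);
        return 0;
    }

prints roughly 376, 44, and 26.7 respectively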
i'm very interested to hear anyone else's benchmarks or calculations
Compared against 4-bit ADPCM, which already gives quite good quality as long as your sample rates are reasonably modern, this only improves the rate to 3.2 bits per sample. It is fast, but ADPCM is also fast.
Would be nice to see joint stereo support. If you take ADPCM or this Quite OK format and try to encode stereo music with it, you need two full channels. But there is an extremely advantageous optimization to be made here: most music is largely center-panned, so both channels are almost the same. With joint stereo you record one channel (either by picking one or mixing to an average) and then store the difference for the other, which occupies far fewer bits, assuming you can quantize away the increased entropy.
For example, instead of using two 4-bit ADPCM channels for stereo, which is already a 75% savings over 16-bit PCM, you could probably get down to an average of 5 bits per stereo sample.
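A minimal sketch of the idea, using the lossless mid/side transform (the same decorrelation FLAC offers; nothing here is specific to ADPCM or QOA):

    #include <stdint.h>
    #include <stdio.h>

    /* mid = average, side = difference; for center-panned material the
       side channel hovers near zero and codes down to very few bits */
    static void ms_encode(int16_t l, int16_t r, int32_t *mid, int32_t *side) {
        *side = (int32_t)l - r;
        *mid  = ((int32_t)l + r) >> 1; /* drops one bit, recoverable below */
    }

    static void ms_decode(int32_t mid, int32_t side, int16_t *l, int16_t *r) {
        int32_t sum = mid * 2 + (side & 1); /* side's lsb restores the bit */
        *l = (int16_t)((sum + side) >> 1);
        *r = (int16_t)((sum - side) >> 1);
    }

    int main(void) {
        int16_t l, r; int32_t mid, side;
        ms_encode(-12345, -12340, &mid, &side);
        ms_decode(mid, side, &l, &r);
        printf("side=%d, round trip: %d %d\n", side, l, r);
        return 0;
    }

(Right shift of negative values is implementation-defined in C, but arithmetic on every compiler you're likely to meet.)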
I like the philosophy of QOA (and of similar projects like QOI and TinyVG), but unlike those, it doesn't seem ready to use yet; see https://github.com/phoboslab/qoa/issues/25
> I have just pushed a workaround to master. [...]
> This still introduces audible artifacts when the weights reset. It prevents the LMS from exploding, but is far from perfect :/
This, combined with the fact that the issue is still open, means that a breaking change is still to be expected.
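For context, QOA's predictor is a small sign-sign LMS filter; this sketch follows my reading of the spec (tap count and shift amounts may be off, treat it as illustrative) and shows why the weights can run away: nothing in the update bounds them.

    /* 4-tap sign-sign LMS predictor, roughly the shape QOA uses */
    typedef struct { int history[4]; int weights[4]; } lms_t;

    static int lms_predict(const lms_t *lms) {
        int p = 0;
        for (int i = 0; i < 4; i++) p += lms->weights[i] * lms->history[i];
        return p >> 13; /* fixed-point scaling */
    }

    static void lms_update(lms_t *lms, int sample, int residual) {
        int delta = residual >> 4;
        /* unbounded: pathological input can keep growing the weights
           until they overflow, hence the reset workaround in issue #25 */
        for (int i = 0; i < 4; i++)
            lms->weights[i] += lms->history[i] < 0 ? -delta : delta;
        for (int i = 0; i < 3; i++) lms->history[i] = lms->history[i + 1];
        lms->history[3] = sample;
    }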
It's interesting that this works in the time domain (instead of the frequency domain), and I wonder what the resulting quality limitations are, if any. The sound samples on the demo page, at least the dozen I clicked on, didn't seem all that challenging: few instruments, mostly synthesized, and low dynamic range. My ears aren't good enough to evaluate audio codecs anyway, however.
The LFE audio channel is different from subwoofer output.
Subwoofers come with multichannel audio systems in which the directional speakers usually can't cover the lower range of audio frequencies. They are responsible for the bass content from all channels, and get it from a software or hardware crossover filter that is independent of any specific input format. Placement of the low-frequency speaker doesn't matter much, because human hearing is poor at localizing bass.
The LFE track is an additional effects channel for movie theaters and similar amusement rides, where the audio system plays low frequencies from the other channels just fine. The dedicated LFE emitter then adds rattling and other wub-wub effects without overloading the main speakers with all that extra energy. Movies that lack car chases and explosions routinely have completely silent LFE tracks.
Today, that seems to be all it is. The article does mention that historically the limits of transmission and playback systems were the reason for introducing an independent channel for the lowest frequencies (instead of mixing them into the main audio channels).
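A toy illustration of one leg of such a crossover (a single one-pole low-pass; real crossovers use steeper filters such as 4th-order Linkwitz-Riley, and the cutoff here is just an example value):

    #include <math.h>

    /* one-pole low-pass: the signal sent to the subwoofer */
    typedef struct { float a, state; } onepole_t;

    static void onepole_init(onepole_t *f, float cutoff_hz, float rate_hz) {
        f->a = 1.0f - expf(-2.0f * 3.14159265f * cutoff_hz / rate_hz);
        f->state = 0.0f;
    }

    static float onepole_lp(onepole_t *f, float in) {
        f->state += f->a * (in - f->state);
        return f->state; /* bass from all channels goes here */
    }

    /* e.g. onepole_init(&f, 80.0f, 48000.0f) for an 80 Hz crossover point */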
LFE is usually a bass shaker, which is a subwoofer that moves a weight instead of a cone, so you get vibrations in your seat. It imparts a sense of movement to your body. I use two for my sim racing rig: one under my seat for car dynamics and immersion, one under my pedals to tell me when ABS is active and when my tires are spinning.
LFE can mean "bass shaker", but it's an industry-standard term invented by Dolby that effectively means "between 3 and 120 Hz", which usually means "subwoofer".
These days crossover points are very configurable. Most bass shakers are rated for use between 20 Hz and 200 Hz.
I looked around, but didn't see any mention of potential patent issues. I assume that this has been considered? The Ogg Vorbis people spent a lot of time on that back when they were developing their format.
No. The claim that there are no patents in Germany on this stuff is common internet misinformation(*). There are a great many coding patents from Fraunhofer all around the world, including in Europe.
Presumably because it's much easier to get injunctive relief in Germany, I've seen more codec-related litigation there than anywhere else.
(*) Like many pieces of misinformation, it has its roots in a seed of truth: particularly between 1998 (State Street) and 2014 (CLS v Alice), the case law in the US supported software patents.
The real confusion is that "Software patents" is an obscure term of art which refers to patents specifically on software methods without any reference to a physical machine or good.
When non-patent-attorneys say "software patents" they mean something more like "something I could infringe by writing software". But clever drafting lets people write patents that software infringes without the patent technically being a "software patent": the claims language will say something like "A recorded medium containing instructions..." or "A microprocessor programmed to...". And this has been true in both the US and Europe throughout.
Which is why there is an awful lot of patent action impacting software in places where "software patents" don't exist, such as the US (as of right now) and Europe.
>The patents were held to be invalid, because the claims were drawn to an abstract idea, and implementing those claims on a computer was not enough to transform that abstract idea into patentable subject matter.
You can absolutely patent software in Europe. Sorry. It's a common misconception that you can't. There's a stupid dance you have to do so it isn't technically "software" that you're patenting... but really it is.
From my understanding you can patent things supported by software, but not the software itself. A physical digital music player with fancy software audio compression is patentable, but not the algorithm on its own.
> As a result, the widespread belief in the non-patentability of software is simply a misconception, partly as a result of insufficient training of innovators and the lobbying activities of certain interested parties.
> The European Patent Convention states that software is not patentable. But laws are always interpreted by courts, and in this case interpretations of the law differ. So the European Patents Office (EPO) grants software patents by declaring them as "computer implemented inventions".
Yes, I find the EPO a bit shady for accepting software patents, and the fees, when the patents aren't enforceable by law. I'm not a lawyer, but I know how to read, and I would ignore the patent trolls; I consider the risk of losing in court very low. The day something like the VideoLAN association loses a trial, I may reconsider my position.
True, it hasn't stopped hobbyists from using x264, ffmpeg or VLC in the past, but it would probably prevent companies in some markets from using this audio format.
HTML+CSS, converted to PDF via the Save As PDF feature in Firefox. (Or the same could be done with other browsers, but this one apparently comes from FF.)
We have a 3D environment with spatial audio. Audio is encoded server-side, and since it's spatial everyone needs their own mix. We're using Opus, and audio encoding turns out to be the usual limiting factor on small servers.
So this kind of thing is exactly up our alley: an option that uses less CPU than Opus while still consuming less bandwidth than raw audio.
But adding support for FLAC is also on our list. It seems nicely performant compared to Opus.
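To illustrate why every listener needs their own encode (a toy sketch; the names and the distance-rolloff curve are made up, our real pipeline is more involved):

    #include <math.h>
    #include <stddef.h>

    typedef struct { float x, y; const float *pcm; } source_t;

    /* each listener hears every source at a different gain, so the mixed
       stream, and therefore the encoded stream, differs per listener */
    static void mix_for_listener(float lx, float ly,
                                 const source_t *src, size_t nsrc,
                                 size_t frames, float *out) {
        for (size_t i = 0; i < frames; i++) out[i] = 0.0f;
        for (size_t s = 0; s < nsrc; s++) {
            float dx = src[s].x - lx, dy = src[s].y - ly;
            float gain = 1.0f / (1.0f + sqrtf(dx * dx + dy * dy));
            for (size_t i = 0; i < frames; i++)
                out[i] += gain * src[s].pcm[i];
        }
    }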
I'm curious, why encode audio server-side? Other games in this genre I've seen seem to have clients do encoding/decoding, and do the spatial audio clientside, with the server just passing each user's audio and position data along from client to client. Especially in VR where ideally there should be no latency between turning your head and the audio shifting. Are there any reasons to do this on the server, or am I misunderstanding something?
Could be an option, but we take high audio quality as a point of pride and encode in Opus 128k by default. Audio doesn't only include speech but also any sound effects, media present in-world, etc.
But that might be an interesting experiment. Right now the low-CPU-usage / high-quality / fairly-high-bandwidth category is something we're looking to have an option for.
"very fast" could mean many different things that vary by orders of magnitude
in https://phoboslab.org/log/2023/04/qoa-specification he got ffmpeg on one core of an i7-6700k (which is arguably 'modern hardware') to encode a 9807-second file to mp3 in 146.2 seconds, 67× faster than real time. but qoa took 25.75 seconds, 5.7 times faster than that. qoa decoding was 2.5× as fast as dr_mp3
you can imagine situations where reducing the number of audio encoding servers in your audio encoding cluster by a factor of 6 would be a big win, or where you want to encode 100+ audio streams in real time on your laptop (maybe an sdr tuned to every am radio station at once), but i agree with you that battery-constrained devices are a more likely application area: making your audio recorder battery last twice as long is a much bigger win
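the arithmetic, in case anyone wants to fiddle with the numbers:

    #include <stdio.h>

    int main(void) {
        /* figures from the qoa-specification post */
        double audio_s = 9807.0, mp3_s = 146.2, qoa_s = 25.75;
        printf("mp3 encode: %.0fx real time\n", audio_s / mp3_s);  /* ~67x */
        printf("qoa encode: %.0fx real time (%.1fx faster)\n",
               audio_s / qoa_s, mp3_s / qoa_s);                    /* ~381x, 5.7x */
        return 0;
    }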
That in terms of quality at any given bitrate it comes nowhere near ubiquitous formats like AAC or MP3 produced with good encoders. But it's good to have (possibly) patent-free solutions available.
Yeah, some 5-10 years ago. I just don't know if there's anything specific in the applied techniques/methods that some troll can still somehow manage to leverage in, say, a court in Texas.
The author wrote a very simple MPEG[1] decoder, so there's an obvious benchmark for making that even simpler.
I personally wouldn't mind a Quite OK Page Description Language. Something that gets you most of PDF/PS/HPGL without all the effort. Could use the Quite OK Image Format for bitmap images. Not sure whether you'd need a Quite OK Vector Format and/or a Quite OK Font Format as prerequisites…
Quite OK browser. It doesn't have WebGL, WebGPU or other fancy and easy-to-exploit stuff, but it renders 95% of websites, and its source code is simple enough to be maintained by very few people.
Quite OK JS Plotting Library (QOJSPL; nice, sounds like my cat walking on the keyboard). With an intuitive, documented API that doesn't require you to dig through tons of examples on sites that take ages to load. Because, no, a massive stash of non-orthogonal examples does not replace documentation.
AKA last Tuesday morning's frustration: I wanted to make interactive plots on a web page to explain math stuff.