I could adjust to getting cues every half mile instead, but the real deal-breaker is that it doesn't seem like I can get ANY audio cues on my Wear OS device (Pixel Watch GPS+LTE). It only has cues for start/stop/pause... which is pretty much totally useless...
This! The audio cues every half mile have been super important for me, and now that I have a pixel watch (that I bought solely for running), it's sad that there are no more audio cues. I have to either run with my phone or get a different app, neither of which is great.
I agree, this feature is vital. With the launch of the Google Pixel Watch, more of us are using the Strava app for Wear OS, and right now this is the feature that's preventing me from leaving my phone at home and going with my watch only. A few other apps support audio cues, but I really would like to stay with Strava if there's even a small chance that this feature will get added soon.
Yes! Please. In addition to adding the Time option, I would like to be able to mix-and-match what call-outs I want to hear. Some days, I only want distance; I don't want to hear my pace. Other days, I'd rather run just by time. It would be awesome to have a number of different audio cues that we can choose from, using checkboxes. Besides the cues that are currently available to us -- current distance, current split pace/previous mile -- it would be nice to add options such as: `overall average pace`.
I would love it if the audio cues were more flexible, ideally bespoke. I don't like to use a watch, so I rely on the audio half-mile/km and time summaries. When I am doing shorter runs, however, I would really like those cues to come more frequently. For longer runs it would be good to be able to set cues that happen at set times/distances too, e.g. '10km time in 40mins', or '60 mins completed, average pace 8min/mile'.
For my hike activities on a route, I would like to have an audio cue for how much distance is left to the final destination. It could be based on time (e.g. every 10 min) or on each kilometer, and it could also be used for segments.
It doesn't look like Fitbit can do any better. However, it does look like Google Fit can be configured to do a lot more and has pace alerts; I will try it next time, and I can probably configure it to sync to Strava.
I submitted this idea directly to the Strava developer and was redirected here simply to click "kudos". The fans have spoken, and are getting frustrated - Please implement this simple change that already exists on the phone app and by all logic should have already been implemented in the watch app.
These principles are not platform-specific. The ideas in this post apply equally to real-time audio programming on Windows, Mac OS X, iOS, and Linux using any number of APIs including JACK, ASIO, ALSA, CoreAudio AUHAL, RemoteIO, WASAPI, or portable APIs such as SDL, PortAudio and RTAudio.
4. Even if you have a wait-free (lock-free) implementation of a data structure, implying a bounded number of steps, your hardware will have a hard time guaranteeing that each of those steps takes only a bounded amount of time: the worst-case execution time scales with the number of CPUs contending for the same cache line. Therefore you can actually only guarantee *anything* if you can make sure that the number of threads accessing the shared data is bounded (but in that case priority inheritance also provides the same guarantees).
Audio programming is between a rock and a hard place: either use mutexes and pray they properly inherit priority, or use memory barriers and write code that no sane man or woman can determine to be correct or not. Something will have to give: OSes will have to either provide a library of lock-free algorithms for use in audio contexts (a long shot), or provide mutexes which properly inherit priority (except on Mac OS X/iOS, a long shot too).
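For reference, the "memory barriers" option usually boils down to a single-producer/single-consumer ring buffer with explicit acquire/release ordering. A minimal sketch in C++11 atomics (the class and names are mine, not from any particular library; a real implementation would also worry about padding to avoid false sharing between the two indices):

```cpp
#include <atomic>
#include <cstddef>

// Minimal wait-free single-producer/single-consumer ring buffer.
// Exactly one thread may call push() and exactly one may call pop().
// Capacity N must be a power of two so that index wrapping is a cheap mask.
template <typename T, size_t N>
class SpscRing {
    static_assert((N & (N - 1)) == 0, "N must be a power of two");
    T buf_[N];
    std::atomic<size_t> head_{0};  // advanced by the consumer
    std::atomic<size_t> tail_{0};  // advanced by the producer
public:
    bool push(const T& v) {
        size_t t = tail_.load(std::memory_order_relaxed);
        if (t - head_.load(std::memory_order_acquire) == N) return false; // full
        buf_[t & (N - 1)] = v;
        // The release store publishes the element before the new tail is seen.
        tail_.store(t + 1, std::memory_order_release);
        return true;
    }
    bool pop(T& out) {
        size_t h = head_.load(std::memory_order_relaxed);
        if (h == tail_.load(std::memory_order_acquire)) return false; // empty
        out = buf_[h & (N - 1)];
        head_.store(h + 1, std::memory_order_release);
        return true;
    }
};
```

Without those acquire/release pairs (i.e. the "barrierless" code discussed above), the consumer can observe the tail update before the element write on weakly ordered hardware.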
In theory, multi-core ARM is indeed nothing new compared to multi-processor PPC. However, Olivier Guilyardi on the andraudio list said my blog post was the first time he had heard of barrierless FIFO/ringbuffer code actually failing in practice (and this matches my experience preparing that blog post). So I was under the impression that available ringbuffer code was unprepared, but I may have judged too soon.
Thank you for the excellent info, Ross. And sorry for necroing an old post.
Everything you write rings true, but how do you deal with large files that need to be streamed? My scenario is background music in a video game on Android smartphones.
I would like to avoid loading a whole song into RAM, since I already keep sound effects resident in it (due to latency woes).
Instead, I decode songs to a cache file on disk during initialization (from Vorbis to PCM), in order to avoid doing so during the audio callback later. During the callback I then fill an audio double buffer by reading from the cache file. Obviously I have to call read for that, but otherwise I try to avoid all kinds of computations inside the callback. Do you think leaving the audio buffer large enough (a couple of seconds of data, since latency is no issue for background music) would be a good enough approach? Or maybe I should decode the file during the callback, in order to keep the amount of data read small (since the size difference between Vorbis and PCM is quite significant)? Any experiences you can share?
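Not the author, but the pattern usually recommended for this is to keep file I/O out of the callback entirely: a streaming thread fills the idle half of the double buffer, and the callback only flips an atomic flag once it has drained its half. A rough sketch under those assumptions (names, sizes, and the 16-bit PCM format are mine):

```cpp
#include <atomic>
#include <cstdint>
#include <cstdio>

constexpr size_t kHalf = 48000;               // samples per buffer half
static int16_t g_buf[2][kHalf];               // the double buffer
static std::atomic<int>  g_playHalf{0};       // half the callback reads from
static std::atomic<bool> g_needRefill{false}; // set by callback, cleared by loader

// Runs on the streaming (non-real-time) thread: refill the idle half.
// Blocking file I/O is fine here because this thread has no deadline.
void streamingThreadTick(FILE* pcmCache) {
    if (!g_needRefill.load(std::memory_order_acquire)) return;
    int idle = 1 - g_playHalf.load(std::memory_order_relaxed);
    fread(g_buf[idle], sizeof(int16_t), kHalf, pcmCache);
    g_needRefill.store(false, std::memory_order_release);
}

// Called from the audio callback when the current half is exhausted:
// no I/O, no locks -- just swap halves and request a refill.
void onHalfDrained() {
    g_playHalf.store(1 - g_playHalf.load(std::memory_order_relaxed),
                     std::memory_order_release);
    g_needRefill.store(true, std::memory_order_release);
}
```

With a couple of seconds per half, the streaming thread has enormous slack, so even decoding Vorbis on that thread (instead of pre-decoding to a PCM cache file) would likely be fine; the only hard rule is that the callback itself never calls read().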
It is neat to see that there has been such an important real-time application sitting right under my nose on regular desktop systems since before I learned C. Its developers do such a good job implementing it that I have never given it any thought before. Of course the tiniest hiccup is going to be dire, and yet nobody ever talks about bending over backwards to make sure audio threads are not interrupted.
I am frankly amazed many of the stupid things I have done in years past did not royally hose audio playback in my software. This stuff is fragile, yet seemingly idiot proof enough to be used by people who know nothing about scheduling.
I had a question regarding real-time audio processing. I'm trying to create a project that takes a 3.5 mm jack and/or mic input into A0, converts it through the ADC, and outputs it on the Arduino pins to the DAC. I have already created a preamp that takes a mic into A0, as well as a 3.5 mm jack.
Furthermore, I've created an R2R resistor ladder with a low-pass filter, power amplifier, and output. I know the R2R ladder can be low quality. I've inserted an image of my DAC circuit below. It can also be seen in this link -Audio-Output/
However, my question is: is there a way I can write code to take the audio and output it to the R2R resistor ladder? I am fairly new to this and any help is greatly appreciated. Thank you! (Preferably register-programming help rather than libraries.)
Unless you use resistors that are matched to within 0.2% of each other, that circuit will cause all sorts of distortions in your already poor audio signal.
It is poor because you have a limited sample rate.
You will need to use direct register addressing to write to them, and you will need more than one access because you do not have a clear run of 8 free bits in any port register on an Arduino. This will cause additional glitches in your samples.
The AVR microcontrollers used in Arduinos are not suited for audio processing.
They were simply not designed with audio in mind: they lack memory, processing power, ADC resolution and speed, and a DAC.
Use a microcontroller with more memory, I²S support, DMA, etc.
Take a look at ARM microcontrollers like STM32 if you have some experience, or use a Teensy if you want a more beginner-friendly platform.
You could probably also use an ESP32.
With tight coding you can do this audio processing to some extent; you'll need to up the ADC clock speed a bit, I think, and restrict the processing to cheap operations. An external SPI ADC can be a lot quicker to read (compared to waiting for the on-chip ADC), freeing lots of processor cycles.
I think some people have made simple, limited guitar-pedal effects... But the Arduino is not going to do anything high-fidelity or highly complex. (And DSP tends to be mathematically complex unless you're doing something simple like volume manipulation.)
I had a question. I was able to implement the code below with the specification I stated above. However, the static noise is very loud. I already have a low-pass filter, as stated in my specification. Is there anything I can do to reduce the static?
Assuming the typical 16 MHz clock frequency, the interrupt is firing at a rate of 64.25 kHz (every 15.56 microseconds).
Although with the ADC's default prescaler, the effective sampling rate is more like 10 kHz or even less. Successive-approximation ADCs aren't known for fast conversions, so in order to achieve high sampling rates, their clock shouldn't be too slow.
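The arithmetic behind that estimate, as a sketch (assuming a classic AVR: the ADC clock is the CPU clock divided by the prescaler, the default prescaler is 128, and a normal conversion takes 13 ADC clock cycles):

```cpp
// Effective AVR ADC sample rate: (CPU clock / prescaler) gives the ADC clock,
// and one normal conversion takes 13 ADC clock cycles.
double adcSampleRateHz(double cpuHz, int prescaler,
                       double cyclesPerConversion = 13.0) {
    return cpuHz / prescaler / cyclesPerConversion;
}
// adcSampleRateHz(16e6, 128) -> about 9615 Hz, i.e. "10 kHz or even less"
```

Lowering the prescaler (say to 16) raises the rate at some cost in ADC accuracy, which is what "their clock shouldn't be too slow" is getting at.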
P.S.: the map() function is rather slow, too. The CPU is 8 bits "wide" (whereas map() uses 32-bit variables internally), and its ALU lacks integer division support (division is actually done in software, and the function needs it).
Since you're scaling between two power-of-two-wide ranges, the fastest way to accomplish the same result is to move some bits around. Narrowing a 10-bit value into an 8-bit one is possible simply by discarding the two least significant bits (i.e. a right shift by 2 bits). In the end, it's something like this:
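The snippet itself is missing here; reconstructed as a sketch (the helper name is mine), it amounts to:

```cpp
#include <cstdint>

// Equivalent of map(value, 0, 1023, 0, 255) for these power-of-two ranges:
// just drop the two least significant bits.
inline uint8_t narrowTo8(uint16_t tenBitValue) {
    return (uint8_t)(tenBitValue >> 2);
}

// In the Arduino sketch this replaces something like:
//   analogWrite(10, map(val, 0, 1023, 0, 255));
// with:
//   analogWrite(10, narrowTo8(val));
```

A shift compiles to a couple of cycles on the AVR, versus the software 32-bit division hiding inside map().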
Furthermore, you may even want to change the duty cycle more quickly. Instead of analogWrite(), you can set a single register to achieve the same effect.
Pin 10 belongs to channel B of Timer1, so the correct register is called OCR1BL (it ends with 'L' because that's a 16-bit timer and you only need the LSB to set the duty cycle). Improving that line even more:
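The improved line is missing from the post; a reconstruction as a sketch (this assumes pin 10 / Timer1 channel B on a classic AVR Arduino; the off-target stubs below exist only so the fragment compiles and is testable away from the hardware):

```cpp
#include <cstdint>

#ifndef ARDUINO
// Stand-ins so the fragment compiles off-target; on a real AVR these
// come from the hardware headers and the Arduino core.
uint8_t OCR1BL;                               // Timer1 channel B compare, low byte
static uint16_t analogRead(int) { return 512; } // stub; the real one reads the ADC
const int A0 = 0;
#endif

// Replaces analogWrite(10, out): write the 8-bit duty cycle straight
// into Timer1 channel B's compare register.
void writeSample() {
    OCR1BL = (uint8_t)(analogRead(A0) >> 2);
}
```

The register write is a single store, versus the pin-lookup and branching that analogWrite() performs on every call.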
P.S. 2: I've just realized another problem: you're using a Timer1 output, which defaults to a prescaler of 64 and runs in "phase-correct" mode, resulting in a carrier frequency of just 490 Hz. I don't think you'll manage to modulate any meaningful audio signal like that, do you?
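Where that 490 Hz comes from, as a sketch (assuming an 8-bit phase-correct PWM on a 16 MHz AVR: the counter runs up to TOP = 255 and back down, so one PWM period is 2×TOP timer ticks):

```cpp
// Phase-correct PWM frequency on an AVR timer: the counter counts
// 0..TOP..0, so one PWM period spans 2*TOP prescaled clock ticks.
double phaseCorrectPwmHz(double cpuHz, int prescaler, int top = 255) {
    return cpuHz / (prescaler * 2.0 * top);
}
// phaseCorrectPwmHz(16e6, 64) -> about 490 Hz, the figure quoted above
```

Dropping the prescaler to 1 (and/or switching to fast PWM) is the usual fix, pushing the carrier above the audible range so the low-pass filter can do its job.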