There is a lot going on here. I am not sure this is the best way to implement the producer/consumer pattern. The "consumer" you have here is processing the data and writing it to file, but it is also doing a VISA call. One of the basic concepts to keep in mind with parallel loops generally is separation of concerns.
It makes sense that you are doing data acquisition in separate loops, so why are you not doing the VISA communication in its own loop? If there is some sort of synchronization issue, you may be able to use one of the synchronization types (notifiers, queues, semaphores) to handle it.
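Since LabVIEW is graphical, here is a rough text sketch of the "one loop per concern" idea in Python. The thread and queue names (`daq_producer`, `visa_producer`) are placeholders for your real DAQmx and VISA read loops, not actual driver calls:

```python
import queue
import threading

# Each acquisition source runs in its own loop (thread) and pushes
# to its own queue; slow serial I/O can never block the DAQ loop.
daq_queue = queue.Queue()
visa_queue = queue.Queue()

def daq_producer(n):
    # Placeholder for the DAQmx read loop.
    for i in range(n):
        daq_queue.put(("daq", i))

def visa_producer(n):
    # Placeholder for the VISA/serial read loop, kept separate
    # so its timing is independent of the DAQ acquisition.
    for i in range(n):
        visa_queue.put(("visa", i))

t1 = threading.Thread(target=daq_producer, args=(3,))
t2 = threading.Thread(target=visa_producer, args=(3,))
t1.start(); t2.start()
t1.join(); t2.join()

print(daq_queue.qsize(), visa_queue.qsize())  # 3 3
```

The consumer(s) would then dequeue from these independently, which is the same separation you get in LabVIEW by giving VISA its own while loop and queue.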
As for the stopping issue, there are a number of reasons for that behavior as well. By reading the stop value from a local variable, each loop stops whenever it finishes executing its current iteration. This is something close to a race condition. In most cases, the "consumer" loop is probably finishing its iteration first when the stop button is pressed, which then closes the queues before the "producer" loops are done. Once the queues are closed, the "producer" loops no longer have a valid reference to them and will error out. In the current setup of the software, you need to at least check in the consumer whether all the producers have stopped (with a flag, maybe?) before ending. As for the "unpressing" of the stop button, its mechanical action needs to be set to some form of latching behavior.
Is there a reason you need 3 different producer loops? What happens when one loop produces data on its queue faster than the other two? I don't know how likely this is, but I could see it happening if your running system experiences a slowdown. If you actually want to implement the "stop when queue empty" feature, you may run into issues. Since you have 3 dequeue operations with no timeout in the same loop, each iteration of the consumer loop depends on data being available in all 3 queues. If one of those queues has fewer elements than the others, the consumer loop will never end. I think you would be better off combining the read functions into a single loop, clustering the data from each instrument together, and adding it to a single queue.
The stop button is all wrong. Again, think dataflow. As soon as you start this program, the code is going to read the value of the stop button and update the local variable. And then it will not be checked when you actually want to stop the program. If you are not going to use an event structure, you need to come up with something different for stopping the loops. One suggestion is to add an empty value to the queue. In the consumer, check to see if the dequeued value is empty and if so, exit the loop and release the queue.
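The "empty value in the queue" suggestion is the classic sentinel (poison-pill) shutdown. A minimal Python sketch of the same idea, with `None` standing in for the empty waveform/string sentinel:

```python
import queue
import threading

SENTINEL = None  # the "empty" value used as the stop signal

q = queue.Queue()

def producer():
    for sample in [1.0, 2.0, 3.0]:
        q.put(sample)
    q.put(SENTINEL)  # last item enqueued: tells the consumer to quit

consumed = []

def consumer():
    while True:
        item = q.get()
        if item is SENTINEL:
            break  # stop signal reached: exit, then release the queue
        consumed.append(item)

p = threading.Thread(target=producer)
c = threading.Thread(target=consumer)
p.start(); c.start()
p.join(); c.join()
print(consumed)  # [1.0, 2.0, 3.0]
```

Because the sentinel is the last thing enqueued, the consumer is guaranteed to drain all real data before it exits, and the producer (not the consumer) decides when shutdown happens, which avoids the close-queue-too-early race described above.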
Since your producer loops are running relatively slow, you may want to add the VISA code into the same combined loop, if the serial data is related to the TC/strain/accel data. You just need to be aware of timing since serial is a slow protocol. Currently your producer loops are running one iteration per second (samples and sample rate are the same). In order to avoid buffer overflow on the DAQmx devices, your serial connection needs to be able to run at a similar rate.
The consumer loop is not doing a VISA call. It is reading data from the serial queue and writing it to a file, same as for the DAQmx data. However, if this serial data is not related to the DAQmx data in any way, I think the serial data should have its own consumer loop, or the dequeue function should have a timeout value so it does not hold up the DAQmx consumer loop. If the data is related to the DAQmx data and the timings all work out (see my previous post), I think the producers should be combined and all of the data fed into a single queue.
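To illustrate the timeout point in Python (a sketch only; the queue contents here are made up): if the serial queue lags behind the DAQ queue, a dequeue with a timeout lets the consumer carry on instead of blocking forever:

```python
import queue

daq_q = queue.Queue()
serial_q = queue.Queue()  # may lag behind the DAQ queue

for v in (10, 20, 30):
    daq_q.put(v)
serial_q.put("s1")  # only one serial reading has arrived so far

rows = []
for _ in range(3):
    daq_val = daq_q.get()
    try:
        # Short timeout: if no serial data arrived in time, fall back
        # to a placeholder instead of stalling the whole consumer loop.
        serial_val = serial_q.get(timeout=0.05)
    except queue.Empty:
        serial_val = ""
    rows.append((daq_val, serial_val))

print(rows)  # [(10, 's1'), (20, ''), (30, '')]
```

In LabVIEW the equivalent is wiring a timeout into Dequeue Element and branching on the "timed out?" output.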
I cannot thank you enough! Your input was so helpful. I implemented your suggested changes and added some new functionality. The program now functions exactly as anticipated (files attached for anyone interested).
- Wiring a stop button to the end conditions of the producer loops. The last item to be added onto a queue is a signal to end the queue (empty waveform data for the DAQmx tasks and the string "empty" for the serial read). These "signals" are used to stop the consumer loop.
For my VI I want two or more producer loops and one consumer loop. The producer loops should acquire the data and store it in the queue. The consumer loop should take the data out of the queue and write it into a file.
1. First make a decision about your file format. A format like TDMS is very good at handling different "channels" at different rates. But you can't open such files in simple apps like Notepad. There *is* however a free plugin for Excel that will painlessly import a tdms file to Excel, with separate worksheets as needed for the different-rate channels.
2. If you're sure you want to stick with simple ASCII text files (such as CSV format), there's another approach I've been known to use. It has generally been built on top of a QMH-style framework where the queue datatype is a cluster of string message and variant data.
Each of the producers is contained in its own parallel loop, and the producers get into a free running state where they repeatedly "push" their data and timing info into a single shared queue. The string message identifies what kind of data it is, the variant holds the data and timing info. Only the parallel consumer loop ever performs dequeues.
Meanwhile the consumer maintains a (typedef'ed) cluster of state variables, including fields for the data from each of the producers. Each time the consumer dequeues, the string message identifies who the data is from (and by implication, what to convert the variant data into), and that data goes into the correct field of the big state variable cluster. (Sometimes I may accumulate data into an array, sometimes I simply replace prior stale values -- it depends on the needs of the app.)
I'll have decided ahead of time what's going to trigger me to write 1 CSV line to a file. It's usually every time I get an update from one of the producers. Which one? Well again, that's very app dependent. When I write based on a faster producer, I'll end up with many lines in the file where the slower producers' data are repeated because I'm always simply writing the most recent known value. When I write based on a slower producer, I'll typically accumulate data from the faster producers in an array so when it comes time to write I'll have options. I can do averaging, filtering, most recent value, etc. In some cases I might do 2 or more of those things in separate "columns" of the CSV.
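A compressed Python sketch of that tagged-queue consumer (the tags `"fast"`/`"slow"` and the averaging policy are illustrative, not from the original VI): the string message routes each item into the right field of the state cluster, and a line is written each time the slow producer updates, averaging whatever the fast producer accumulated in between:

```python
import queue

q = queue.Queue()

# Producers push (tag, value) pairs; the tag plays the role of the
# string message, the value the variant payload.
for msg in [("fast", 1.0), ("fast", 3.0), ("slow", 100),
            ("fast", 5.0), ("slow", 200)]:
    q.put(msg)

state = {"fast_buffer": [], "slow": None}  # the "state variable cluster"
csv_lines = []

while not q.empty():
    tag, value = q.get()
    if tag == "fast":
        state["fast_buffer"].append(value)   # accumulate fast data
    elif tag == "slow":
        state["slow"] = value
        # One CSV line per slow update, averaging the fast samples
        # accumulated since the last line.
        buf = state["fast_buffer"] or [float("nan")]
        avg = sum(buf) / len(buf)
        csv_lines.append(f"{state['slow']},{avg}")
        state["fast_buffer"] = []

print(csv_lines)  # ['100,2.0', '200,5.0']
```

Triggering on the fast producer instead would just move the write into the `"fast"` branch and repeat the latest slow value, which is the other policy described above.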
There are pros and cons and a lot depends on the kind of workflow you need to support when analyzing the data and creating reports. I think TDMS is a better inherent fit for multi-rate data, but I've more often written to CSV simply to support internal customers' preferred workflow.
You need to tell your consumer which 'sensor' the data is coming from, i.e. add a constant to the Build Array with '1' for the first sensor (edit: I'd make this a cluster) and '2' for the second, and add 2 columns to the data of the second sensor before writing to the file.
But I still have some problems with sorting the data as shown in the picture I attached above. I tried some things, but in both solutions the data of the different sensors are not aligned in time. I want the data which is acquired within the same timestamp to be written in the same row of the array.
What comes to mind is to save the first result in a shift register; with the next result, compare whether the times match (ignoring the fractions, like in your picture). If they don't, write the first one to the file and put the latest result in the shift register. If they match, concatenate the strings and write them to the file.
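That shift-register merge can be sketched in Python like this (the sensor names and sample data are invented for illustration; `pending` plays the role of the shift register):

```python
# Merge two per-sensor streams so readings with the same whole-second
# timestamp land on the same output row (sensor A value, sensor B value).
readings = [  # (timestamp_seconds, sensor, value), already in time order
    (0.1, "A", 1.0),
    (0.4, "B", 2.0),   # same whole second as the A reading -> same row
    (1.2, "A", 3.0),   # no matching B reading -> flushed on its own
    (2.0, "A", 4.0),
    (2.7, "B", 5.0),
]

rows = []
pending = None  # last unmatched reading (the "shift register")

def as_row(ts, sensor, value):
    # Put the value in the A or B column, leaving the other empty.
    return (int(ts), value, None) if sensor == "A" else (int(ts), None, value)

for ts, sensor, value in readings:
    if pending is None:
        pending = (ts, sensor, value)
    elif int(pending[0]) == int(ts):       # times match, ignoring fractions
        a = pending[2] if pending[1] == "A" else value
        b = value if pending[1] == "A" else pending[2]
        rows.append((int(ts), a, b))       # concatenate into one row
        pending = None
    else:                                  # no match: flush the older reading
        rows.append(as_row(*pending))
        pending = (ts, sensor, value)

if pending is not None:                    # flush whatever is left at the end
    rows.append(as_row(*pending))

print(rows)  # [(0, 1.0, 2.0), (1, 3.0, None), (2, 4.0, 5.0)]
```

In the VI this is the shift register holding the pending string, a comparison of the truncated timestamps, and a Concatenate Strings before the file write on a match.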
Sorry, I haven't been following this discussion so closely, but I wonder if it wouldn't be easier to save the "raw" data files that come from different sources at different timings in different files (particularly if at least some of the data are "regular in time" so that they can be saved in a compact form such as a Waveform). Once the data files are all written, they can be examined and "merged" if this is necessary.
There could well be interaction between the "regular" and the "irregular" data channel, but that might have nothing to do with the format of the data files, rather with how the two channels interact. It might be that you can describe a condition where "when Channel 2 shows this, then we need to be on the lookout for Channel 1 to do that". This almost sounds like a parallel task handling these data, maybe via a separate Producer/Consumer design. You, of course, are in the best position to determine "What" you want to do, or "What" you have to do. Try to get that clearly delineated before getting lost in the Weeds of "how" you do that -- therein lies Spaghetti Code