Build a Proper LabVIEW Producer/Consumer Pattern

It’s not for nothing that people who program a lot spend so much time talking about design patterns: Design patterns are basic program structures that have proven their worth over time. One of the most commonly used design patterns in LabVIEW is the producer/consumer loop. You will often hear it recommended on the user forum, and NI’s training courses spend a good deal of time teaching it and using it.

The basic idea behind the pattern is simple and elegant. You have one loop that does nothing but acquire the required data (the producer) and a second loop that processes the data (the consumer). To pass the data from the acquisition task to the processing task, the pattern uses some sort of asynchronous communications technique. Among the many advantages of this approach is that it deserializes the required operations and allows both tasks (acquisition and processing) to proceed in parallel.

Now, the implementation of this design pattern that ships with LabVIEW uses a queue to pass the data between the two loops. Did you ever stop to ask yourself why? I did recently, so I decided to look into it. As it turns out, the producer/consumer pattern is common in just about every language, and the implementations almost always use queues. If you are working in an unsophisticated language like C or any of its progeny, that choice makes sense. Remember that in those languages the only real mechanism you have for passing data between threads is a shared memory buffer, and since you are going to have to write all the basic functionality yourself, you might as well build something that is easy to implement, like a queue.
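
Of course, LabVIEW code is graphical, so there is no direct textual equivalent to show here, but if it helps to see the idea written out, here is a rough Python sketch of that classic queue-based version of the pattern. Everything in it (names, timings, the print statement) is purely illustrative and is not taken from the LabVIEW shipping example.

```python
import queue
import threading
import time

def producer(data_queue, stop_flag):
    # Acquisition loop: does nothing but generate data and enqueue it.
    sample = 0
    while not stop_flag.is_set():
        sample += 1
        data_queue.put(sample)      # hand the data off asynchronously
        time.sleep(0.1)             # stand-in for a timed acquisition

def consumer(data_queue, stop_flag):
    # Processing loop: blocks until data arrives, then processes it.
    while not stop_flag.is_set():
        try:
            sample = data_queue.get(timeout=0.5)
        except queue.Empty:
            continue                # nothing arrived; check the stop flag again
        print(f"processing sample {sample}")

stop_flag = threading.Event()
data_queue = queue.Queue()
threads = [threading.Thread(target=producer, args=(data_queue, stop_flag)),
           threading.Thread(target=consumer, args=(data_queue, stop_flag))]
for t in threads:
    t.start()
time.sleep(1)                       # let it run briefly, then shut down
stop_flag.set()
for t in threads:
    t.join()
```

Note that even this toy version hints at the shutdown awkwardness we will come back to: the consumer has to wake up periodically just to notice that it is time to quit.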

However, LabVIEW doesn’t have the same limitations, so does it still make sense to implement this design pattern using queues? I would assert that the answer is a resounding “No”, which is really the point the title of this post is making: While queues certainly work, they are not the technique that a LabVIEW developer who is looking at the complete problem would pick, unless, of course, they were trying to blindly emulate another language.

The Whole Problem

The classic use case for this design pattern assumes a consumer loop that does nothing but process data. Unfortunately, in real-world applications this assumption is hardly ever true. You see, when you consider all the other things that a program has to be able to do, like maintain a user interface or start and stop itself in a systematic manner, much of that functionality ends up in the consumer loop as well. The only other alternative is to put this additional logic in the producer loop, where, for example, asynchronous inputs from the operator could potentially interfere with your time-sensitive acquisition process. So if the GUI is going to be handled by the consumer loop, we have a couple of questions to answer.

Question: What is the most efficient way of managing a user interface?

Answer: Control events

The other option, of course, is to implement some sort of polling scheme, which is, at the very least, extremely inefficient. While it is true that you could create a third, separate process just to handle the GUI, you would still be left with the problem of how to communicate control inputs to the consumer loop, and perhaps the producer too. So let’s just stick with events.

Question: Do queues and control events play well together?

Answer: Not so much, no…

The problem is that while queues and events are similar, in that both are ways of communicating that something happened by passing some data associated with that happening, they really operate in different worlds. Queues can’t tell when events occur, and events can’t trigger on changes in queue status. Although there are ways of making them work together, the “solutions” can have troublesome side effects. You could have an event structure check for events whenever the dequeue operation terminates with a timeout, but then you run the risk of the GUI locking up: if data is put into the queue so fast that the dequeue never gets the chance to time out, the events never get checked. Likewise, you could put the dequeue operation inside a timeout event case, but now you’re back to polling for data, which is the very situation you are trying to avoid.
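
To make the first of those compromises concrete, here is a hedged Python analogy (not LabVIEW code, and the helper names are mine): the consumer blocks on the queue with a timeout and only services the “events” when the dequeue times out. If data keeps arriving before the timeout expires, the user-interface branch simply never runs.

```python
import queue

DEQUEUE_TIMEOUT = 0.1   # seconds; purely illustrative

def process(sample):
    print(f"processing {sample}")

def handle_ui_events(ui_events):
    # Stand-in for servicing the user interface (button presses, etc.).
    while not ui_events.empty():
        print(f"handling UI event: {ui_events.get()}")

def consumer(data_queue, ui_events):
    # Rough analogy of checking for events only when the dequeue times out.
    while True:
        try:
            sample = data_queue.get(timeout=DEQUEUE_TIMEOUT)
        except queue.Empty:
            # The UI is serviced only when no data has arrived for a while.
            # If data always shows up before DEQUEUE_TIMEOUT expires, this
            # branch is starved and the interface appears to lock up.
            handle_ui_events(ui_events)
            continue
        process(sample)
```

The second compromise is simply the mirror image: put the dequeue inside the timeout case of the event handler and you end up waking on a timer just to poll the queue.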

Thankfully, there is a simple solution to the problem: All you have to do is lose the queue…

The Solution

The alternative is to use something that integrates well with control events: user-defined events (UDEs). “But wait!” you protest. “How can a UDE be used to transfer data like this? Isn’t the queue the thing that makes the producer/consumer pattern work?” Well, yes. But if you promise to keep it under your hat, I can tell you a little secret: Events have a queue too.

The following block diagram shows a basic implementation of this technique that mirrors the queue-based pattern that ships with LabVIEW.

event-driven producer-consumer

Note that in addition to the change to events, I have tweaked the pattern a bit so you can see its operation more clearly:

  1. The Boolean value that stands in for the logic determining whether to fire the event is now a control, so you can test the pattern’s operation.
  2. The data being transferred is no longer a constant, so you can see it change when new data arrives.
  3. Because the consumer loop is now event-driven, the technique for shutting down the program is actually usable (though not necessarily optimal) for deliverable code.

For those who are only familiar with control events, the user-defined event is akin to what back in the day was called a software interrupt. The first function to execute in the diagram creates the event and defines its datatype. The datatype can be anything, but the structure shown is a good one to follow because of a naming convention that LabVIEW implements: the label of the outer cluster is used as the name of the event, in this case acquire data. Anything inside the cluster becomes the data that is passed through the event. This event, as defined, will pass a single string value called input data.
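
If you find it easier to think about in textual terms, the event definition plays roughly the same role as declaring a typed message. Here is a loose Python analogy; the names simply mirror the diagram, and LabVIEW obviously doesn’t use dataclasses:

```python
from dataclasses import dataclass

@dataclass
class AcquireData:
    """Loose analogy of the UDE definition: the type's name plays the part
    of the outer cluster's label ('acquire data'), and the fields inside
    are the data the event carries."""
    input_data: str

# Firing the event then amounts to sending a value of this type:
message = AcquireData(input_data="new reading")
```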

The resulting event reference goes to two places: the producer loop, so it can fire the event when it needs to, and a node that creates an event registration. The event registration is the value that allows an event structure to respond to a particular event. It is very important that the event registration wire have only two ends (i.e., no branches). Trying to send a single registration to multiple places can result in very strange behavior.

Once this initialization is done, operation is not terribly different from that of the queue-based version, except that it will shut down properly, of course. When the stop button is pressed, the value change event for the button causes both loops to stop. After the consumer loop stops, the logic first unregisters and then destroys the acquire data UDE.
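
Because there is no way to reproduce the block diagram in text, here is one last hedged Python sketch of the overall flow. The point it tries to capture is that the consumer ends up with a single blocking wait that services both the data and the stop button; in LabVIEW the event structure’s internal queue provides that wait point, while here an explicit queue of tagged messages has to stand in for it. All names are illustrative.

```python
import queue
import threading
import time

# Stand-in for the event structure's internal queue: both the producer's
# "acquire data" UDE and the stop button's value change event land here.
event_queue = queue.Queue()

def producer(stop_flag):
    sample = 0
    while not stop_flag.is_set():
        sample += 1
        event_queue.put(("acquire data", f"sample {sample}"))  # fire the UDE
        time.sleep(0.1)

def consumer(stop_flag):
    while True:
        name, payload = event_queue.get()       # one blocking wait, no polling
        if name == "acquire data":
            print(f"processing {payload}")
        elif name == "stop":                    # the value change event
            stop_flag.set()                     # stops the producer as well
            break
    # ...analogous to unregistering and then destroying the UDE on the way out

stop_flag = threading.Event()
threads = [threading.Thread(target=producer, args=(stop_flag,)),
           threading.Thread(target=consumer, args=(stop_flag,))]
for t in threads:
    t.start()
time.sleep(1)
event_queue.put(("stop", None))                 # the operator presses stop
for t in threads:
    t.join()
```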

And that, for now at least, is all there is to it. But you might be wondering, “Is it worth taking the time to implement the design pattern in this way just so the stop button works?” Well, let’s just say that the working stop button is a foretaste of good things to come. We will be back to this pattern, but first I need to talk about a few other things.

Until next time…

Mike…