If you have an application that you want to construct from multiple parallel processes, a key requirement is signalling – telling the various parts of the application that something important has happened, or is happening, in one of the other processes. When it comes to fulfilling that requirement, a valuable tool to have at hand is the User Defined Event, or UDE. In fact, over the past year this blog has considered a variety of ways to use this tool. However, all these implementations have one thing in common: They all use static UDEs. In other words, the event that will be used to signal a particular occurrence is decided when the code is created and it never changes.
But what if you don’t know until runtime what event you want to use? For example, it is common with reentrant code that you won’t have a fixed set of UDEs because there isn’t a fixed set of VIs that are running. Sometimes you need to be able to send a message to one particular clone. In such a situation, you aren’t just sending general signals, but signals that are unique to a particular instance of a VI. The solution is to use UDEs that are dynamically generated in the same way that the reentrant VIs are.
To demonstrate this technique, the next couple of posts will highlight this use case, starting this time with a discussion of some of the design considerations. Our entry into this exploration is an application that I was once assigned to maintain.
The Problem Defined
The job asked me to expand an existing application that had been developed by one of the large LabVIEW consulting firms located here in the US. The problem was that the software wasn’t designed for expandability. Specifically, a key part of the program was a subsystem that polled a user-defined list of Modbus registers at a rate that was also user-defined. Because the user-defined inputs could change at any time, the decision was made to make the acquisition loops event-driven, and to create a separate “metronome” process that would fire an acquisition event at the user-defined rate. So far, so good. The real issue is with the implementation of this concept.
Apparently, there was originally only going to be one timed interval but, as you might expect, a requirement was later added to create a second one. To meet this scope change with as little effort as possible, the decision was made to simply duplicate all the VIs used for the first process while appending “2” to the end of the names – an expedient that is unfortunately common in code developed by “large LabVIEW consulting firms”. To make matters worse, the modularization was poor, so the program was basically built around a huge cluster containing literally dozens of references for UDEs, notifiers and queues that ran through nearly every VI in the application.
In the end, the only way to implement the required functionality was to completely redesign and reimplement that portion of the code. The really ironic part is that it took me less time to implement the functionality correctly than it did to do it poorly the first time. Using this description as a jumping-off point, I obviously won’t be discussing the solution that I implemented for the original application. What we will do is use it as “inspiration” for examining techniques that could cover a wide range of similar requirements.
Getting Moving in the Right Direction
First, we should recognize that while our earlier conversation incorporates a pretty good description of what the code basically needs to do, we do need to flesh it out a bit: At run time, we need to be able to create multiple independently-timed data reads with varying intervals between reads. In addition, the results from these timed acquisitions need to be “published” somehow so they can be used by the rest of the program.
With this broader functional definition in place, we can begin to consider the appropriate API for accessing that functionality. As I have said many times before, one of the cornerstones of a good API is the concept of information hiding – the process of deciding what information to expose to the calling code, and what information to keep private. So like a politician running for reelection, our next job is to decide what to hide and where to hide it.
The basic principle in play is to hide any information that the calling code either doesn’t need to know, or which would be counter-productive for it to know. If we think about it, there are exactly three pieces of information that the calling VI actually needs to know in order to define and use an acquisition task:
- The Modbus address to read
- The interval between reads
- How to receive the published data
On the other hand, however, there is a (much) larger list of things that the calling code doesn’t need to know, among which are things like:
- How the Modbus is read
- How the timers work
- The signalling that the timers use
- How many acquisition processes there are
- How the timing is implemented
- Internal data structures
- etc…
Now that we have a handle on what we want to expose – and just as importantly, what we do not want seen – we can start designing the outward interface.
The Outside View
The obvious place to start is with the VI that will configure, or set up, the polling. Given that we have already decided that we only want to expose two parameters (the addresses to read, and the read interval), we can go ahead and create a placeholder VI that provides the appropriate IO, but which is for now empty.
Note that, with the exception of the error cluster, this VI has no other outputs. Remember that all this VI is doing is identifying a group of addresses that some hidden “something” is going to read at the specified interval. Consequently, the only response that this VI can give is whether or not the specification process was successful. The assumption is that other parts of the software will independently report errors that occur during the data reads. In the same way, we are also going to need a VI to tell the “something” that is doing the reading that we no longer wish to poll particular addresses.
The interface to this VI is even more basic than the one for starting the polling of addresses, and the reason is simple: at this point we don’t care at what rate a given address is being polled, we just want it to stop. You could argue that knowing the polling interval would make it easier for the code to find the addresses to delete. While that is true, it would also mean that the calling code would have to keep track of the addresses that it is monitoring – which could quickly become an awful lot of redundant information for the caller to remember and manage.
Last but not least, the third interface VI that we need to specify is the data publication mechanism. To keep things simple, we should use a technique that is easy to implement, so I am picking the logical equivalent of the callback techniques found in other languages: a User Defined Event. For this application, the event will return an array of address/value pairs that the calling application can use as it desires. Note that in this implementation all the acquisition processes will be sending their data to the same place, so this can be a static UDE.
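By the way, LabVIEW diagrams don’t translate directly to text, but if it helps to see the shape of this interface all in one place, here is a rough Python sketch of the three-part API we have been describing. The function names are placeholders of my own invention, not the actual VI names:

```python
from typing import Callable, Sequence

# Hypothetical public surface – the only three things a caller ever sees.
DataHandler = Callable[[Sequence[tuple[int, int]]], None]  # receives (address, value) pairs

def start_polling(addresses: Sequence[int], interval_ms: int) -> None:
    """Ask the (hidden) acquisition subsystem to read these Modbus addresses
    every interval_ms milliseconds.  The only feedback is whether the
    request itself was accepted – read errors are reported elsewhere."""
    ...

def stop_polling(addresses: Sequence[int]) -> None:
    """Tell the subsystem we no longer care about these addresses.
    Note that no interval is needed – finding them is the subsystem's job."""
    ...

def register_data_handler(handler: DataHandler) -> None:
    """The logical equivalent of registering for the data-publication UDE."""
    ...
```

Everything in the “doesn’t need to know” list above stays on the other side of these three calls.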
The Test Application
Finally, while we’re talking about interfaces, let’s also look at the calling application. Because all the “heavy lifting” will be encapsulated in subVIs, the calling code can be very simple. It has buttons for identifying addresses that we wish to poll, addresses we want to stop polling, and stopping the application. To display the results, the application’s front panel incorporates a table that shows the addresses in ascending order, and the last data value acquired for that address.
The block diagram is, likewise, pretty plain. It is event-driven with one event for each of the three buttons on the front panel, plus one to handle the UDE that publishes the data. You can check out its code in the source later.
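For the sake of completeness, here is the caller’s side of that same idea rendered as a hedged Python sketch – one handler per button plus one for the published data. It assumes the placeholder API names from the sketch above; the real test application is, of course, an ordinary LabVIEW event structure:

```python
# Illustrative caller-side logic: the button events plus the published-data event.
latest_values: dict[int, int] = {}   # address -> last value, i.e. the front-panel table

def on_start_button(addresses: list[int], interval_ms: int) -> None:
    start_polling(addresses, interval_ms)      # from the API sketch above

def on_stop_button(addresses: list[int]) -> None:
    stop_polling(addresses)

def on_data_published(pairs: list[tuple[int, int]]) -> None:
    # Update the "table": addresses in ascending order, most recent value wins.
    for address, value in pairs:
        latest_values[address] = value
    for address in sorted(latest_values):
        print(address, latest_values[address])
```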
Crawling Under the Covers
With the front end interfaces thus defined, we can now start thinking about the code that will make the interfaces do something useful. The simple part is the UDE for publishing the results because it uses the same technique that we have used many times over the past year. To summarize the implementation, a library named for the event (Publish Data.lvlib) has four subVIs: one (structured as an FGV) for creating/buffering a reference to the event, and one each for registering to receive the event, generating the event, and destroying the event. In addition, it incorporates a typedef that defines the event data.
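There is no direct text-language equivalent of a UDE, but the structure of the library is easy to mimic. The following Python sketch (names and mechanism are mine, purely for illustration) shows the same four operations plus the “typedef”, with a callback list standing in for the event reference:

```python
from typing import Callable, Optional, Sequence

# The "typedef": the data carried by the event – an array of (address, value) pairs.
EventData = Sequence[tuple[int, int]]

_subscribers: Optional[list[Callable[[EventData], None]]] = None  # the buffered "reference"

def _get_event() -> list[Callable[[EventData], None]]:
    """FGV-style create/buffer: build the shared reference the first time it
    is requested, and hand back the same one on every later call."""
    global _subscribers
    if _subscribers is None:
        _subscribers = []
    return _subscribers

def register(handler: Callable[[EventData], None]) -> None:
    """Stand-in for registering to receive the event."""
    _get_event().append(handler)

def generate(data: EventData) -> None:
    """Stand-in for generating (firing) the event."""
    for handler in _get_event():
        handler(data)

def destroy() -> None:
    """Stand-in for destroying the event reference."""
    global _subscribers
    _subscribers = None
```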
The process for managing the addresses to read requires a bit more thought. To begin with, we know that there are only two address management operations: adding and removing addresses. However, we also know that if we are to conform to our API, we need to hide those explicit operations from the calling application. This situation is one of those development scenarios where the words that we use to talk about what we are doing can help or hinder our understanding of what we are trying to accomplish. To see what I mean, let’s consider a similar case that is part of LabVIEW itself: the logic for handling queues.
You may have noticed that with the built-in API you don’t “create” or “destroy” queues. Rather, you “acquire” and “release” references to the queue. While you may wonder what difference this wording makes, we need to remind ourselves that it isn’t simply a matter of an API developer running amuck with a thesaurus. It actually describes a very real and very important distinction. Instead of creating a queue, you are simply telling LabVIEW that you want to acquire a reference to a queue. Now, when you make this request, there are two possible situations:
- The queue doesn’t currently exist and LabVIEW needs to create one.
- The queue already exists so all you need is to get a reference to the one that is there.
Likewise, you don’t destroy a queue when you are done using it, you simply tell LabVIEW that you have no further need for it by releasing the reference you previously acquired. Because LabVIEW keeps track of how many open references are associated with each queue, LabVIEW can tell when the queue is no longer in use and destroy it automatically. Now consider for a moment the degree to which this hidden functionality simplifies your code. You no longer need to worry about what or who is using the queue, and when it is safe to destroy it. All that potentially complex logic is hidden in the way that the functionality is encapsulated, and the difference in terminology highlights that difference.
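If you want to picture the bookkeeping that LabVIEW is doing for you, a reference-counting sketch like the following captures the idea (the names and data structures here are illustrative, not how LabVIEW actually implements queues):

```python
from collections import deque

_queues: dict[str, dict] = {}   # name -> {"queue": deque, "refs": int}

def acquire_queue(name: str) -> deque:
    """If the named queue doesn't exist yet, create it; either way, hand back
    a reference and bump the reference count."""
    entry = _queues.setdefault(name, {"queue": deque(), "refs": 0})
    entry["refs"] += 1
    return entry["queue"]

def release_queue(name: str) -> None:
    """Drop one reference; when nobody holds a reference any longer,
    the queue quietly goes away."""
    entry = _queues.get(name)
    if entry is None:
        return
    entry["refs"] -= 1
    if entry["refs"] <= 0:
        del _queues[name]
```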
As we are designing our API, we need to adhere to the same idea. So instead of “adding” and “removing” addresses, let’s think about this problem in terms of “starting” and “stopping” acquisition from lists of addresses. To grasp the benefits that we can garner from this change in language, let’s consider what actually needs to happen behind the scenes for each of these operations. Just to be clear, this complexity has nothing to do with how we choose to implement the functionality; it is inherent in what we are trying to do. This logic will have to be created regardless of how we structure the code.
Starting Acquisition: This might seem to be pretty easy, but what if a user starts acquiring data from an address at one rate, but then later changes their mind (or makes a mistake) and starts the same address at a different rate? There is no point in having the same address read at two different rates; likewise, it is not clear that this action should be considered an error. Therefore, to do what the user is requesting, you first have to remove that address from the process that is currently polling it, and then reassign it to a different (perhaps new, perhaps preexisting) acquisition process. Taking the point further, what if the address you remove from a process is the only address that it is currently polling? Removing that address would leave that acquisition loop with nothing to do, and so we need some way to stop it.
Stopping Acquisition: For its part, stopping the acquisition of addresses can hide some complexity of its own. For example, say the user identifies a list of 4 addresses to be stopped. There is no guarantee that all the addresses are being polled by the same acquisition process – and even if they are all together, we don’t know which process is reading them. This fact implies a need to be able to search all the acquisition processes for a particular address. Plus, as before, if we remove all the addresses associated with a particular interval, we need to stop that acquisition task.
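Regardless of how the final LabVIEW code ends up being structured, the bookkeeping itself looks something like this sketch. The data structures and helper names are mine, and the “shut down” call is a stand-in for whatever mechanism eventually stops an idle acquisition process:

```python
# Illustrative bookkeeping only: each interval maps to the set of addresses
# that its (hypothetical) acquisition process is currently polling.
polling: dict[int, set[int]] = {}   # interval_ms -> addresses

def start_addresses(addresses: list[int], interval_ms: int) -> None:
    # An address may only be polled at one rate, so pull each address out of
    # whichever interval currently owns it before assigning the new one.
    for owner_interval, owned in list(polling.items()):
        owned.difference_update(addresses)
        if not owned:                          # nothing left for that process to do
            shut_down_interval(owner_interval)
    polling.setdefault(interval_ms, set()).update(addresses)

def stop_addresses(addresses: list[int]) -> None:
    # We don't know which process owns a given address, so search them all.
    for interval_ms, owned in list(polling.items()):
        owned.difference_update(addresses)
        if not owned:
            shut_down_interval(interval_ms)

def shut_down_interval(interval_ms: int) -> None:
    # Stand-in for stopping the acquisition task serving this interval.
    polling.pop(interval_ms, None)
```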
Remember, this functionality will always be needed; it is simply more robust (and therefore smarter) to hide it from the calling application by encapsulating it in our API.
Getting Down With the Acquisition
To this point we have described the acquisition processes as having two loops: one performs the acquisition and one is a “metronome” function that periodically fires an event that causes the acquisition loop to acquire one scan of all the channels contained in its current configuration. In addition, both of these processes need to be reentrant so multiple copies of each can be launched as needed. Now we need to refine that basic description by specifying the rules that will govern their operation.
First, let’s state that each process is self-maintaining, both in terms of its own operation and its data. What that requirement means is that each instance of the acquisition process will maintain for itself a list of the addresses that it is polling. Consequently, when a process receives a system message (via UDE) to stop polling one or more addresses, it will examine its own list of addresses and remove any that are in the “stop polling” list.
Second, we will state that there will be only one acquisition process running for each acquisition interval. Hence, if a process receives a system message specifying its own polling interval, it will add those addresses to the list of addresses it is already polling. For example, say there is a process running that is acquiring data from 4 addresses every 1000 msec and it receives a system message saying that the user wants to start an additional 5 addresses at that same sample interval. The code will add those 5 new addresses to the 4 that it is already polling.
Third, if an acquisition process receives a system start message that does not specify its own polling interval, it will prevent duplicate polling by automatically searching its configuration for the addresses in the message and deleting any that it finds.
Fourth, if after stopping one or more addresses a process finds that its polling list is empty, it will shut itself down by firing an event that is unique to that particular instance.
Fifth, in the event that the user starts addresses for an interval that is not currently running, the logic incorporates a manager function that will start up a new acquisition process to handle that interval.
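To see how the first four rules play out inside a single process instance, here is a hedged sketch of the message-handling logic. It is written as a Python class purely for compactness – in the real code this lives in a reentrant VI clone – and the stop-event call at the end is a stand-in for the instance-specific UDE mentioned in rule four. Rule five belongs to the manager, which we turn to next:

```python
# One hypothetical acquisition-process instance, applying the rules above.
class AcquisitionProcess:
    def __init__(self, interval_ms: int) -> None:
        self.interval_ms = interval_ms      # rule 2: one process per interval
        self.addresses: set[int] = set()    # rule 1: each instance owns its own list

    def on_start_message(self, addresses: list[int], interval_ms: int) -> None:
        if interval_ms == self.interval_ms:
            self.addresses.update(addresses)             # rule 2: same interval, just add
        else:
            self.addresses.difference_update(addresses)  # rule 3: avoid duplicate polling
            self._stop_if_empty()

    def on_stop_message(self, addresses: list[int]) -> None:
        self.addresses.difference_update(addresses)      # rule 1: prune its own list
        self._stop_if_empty()

    def _stop_if_empty(self) -> None:
        if not self.addresses:                           # rule 4: nothing left to do
            self.fire_instance_stop_event()

    def fire_instance_stop_event(self) -> None:
        # Stand-in for generating the dynamically created, instance-specific event.
        ...
```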
Let’s Build Some Code
Finally, we are ready to start writing some code to materialize what has to this point been mostly words. A good place to start this work is with the manager VI that will launch new acquisition processes for us. I like this approach because it will give us the opportunity to look at how the pieces fit together.
How the Pieces Fit Together
The operational rules we listed earlier provide a number of clues as to where we are going next. To begin with, we talked a lot about messages. This information by itself is enough to let us design and implement the inner workings of the two interface routines we prototyped earlier. They are simply the event generation VIs of two more static UDEs (Start Addresses and Stop Addresses) – and we already know how to build those.
Next, because we have defined what the manager basically needs to do we can create an event-driven shell for it that can respond to two events: Stop Application and Start Addresses. Again, if you have been reading this blog for a while, the Stop Application event is an “old friend” so we will concentrate on the manager’s response to the Start Addresses event. Since this response will vary depending upon whether or not the specified interval already exists, we need to design a way for the code to make that determination, while continuing to bear in mind the principles of information hiding, to wit, the manager doesn’t need to know how the determination was made, just what the result was.
To support this functionality, we will create a VI (called Interval Registry.vi) that the acquisition processes will maintain as they start and stop. This function will support three operations: Check, Insert and Delete. For the Check operation we see here, the routine checks to see if there is already a process running at the indicated sample interval. If there isn’t, the VI returns a false Boolean value that causes the manager to call a launcher VI (Process Launcher.vi) that kicks off all the VIs needed to service the interval. If, however, the interval is already running there is nothing more for the manager to do, so the true case (not shown) does nothing.
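Since the registry’s internals are a topic for next time, treat the following as nothing more than a guess at its shape: FGV-style shared state keyed by sample interval, with the three operations just described. The commented lines at the end show how the manager’s Check logic would use it (the launcher call is sketched further below):

```python
# A guess at the shape of Interval Registry.vi – shared state plus
# Check / Insert / Delete, keyed by sample interval.
_registered_intervals: set[int] = set()

def interval_registry_check(interval_ms: int) -> bool:
    """Check: is a process already running at this interval?"""
    return interval_ms in _registered_intervals

def interval_registry_insert(interval_ms: int) -> None:
    """Insert: called by an acquisition process as it starts up."""
    _registered_intervals.add(interval_ms)

def interval_registry_delete(interval_ms: int) -> None:
    """Delete: called by an acquisition process as it shuts down."""
    _registered_intervals.discard(interval_ms)

# Manager-side usage (Start Addresses event, false case):
#     if not interval_registry_check(interval_ms):
#         launch_acquisition_process(interval_ms)   # Process Launcher.vi equivalent
```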
Turning our attention now to the VI that launches the new process VIs, we have built this sort of launcher several times before, so we already know its basic structure. What we need clarity on is the details of the data that needs to be passed to the two VIs.
Starting with the simpler of the two (which we will call Metronome.vi), its only purpose is to fire an acquisition event at some predefined interval. However, if it is going to stop when it has nothing more to do, it also needs to know what event will tell it to stop. Note that both of these events are specific to each interval that is created. Consequently, they both need to be created on the fly when the acquisition process is launched, with their references passed into the new clone as parameters.
In the same way, looking at the acquisition VI (we’ll call it Modbus Reader.vi) we see that it is going to be receiving the acquisition event that the metronome fires and is also going to need to shut down like the metronome, so it will need to get references to the same two events. The only other messages it will need to receive are the static ones that we have discussed earlier, but because they are static we don’t have to be concerned with them here.
So we add in the event definition logic, and this is what our finished launcher VI looks like:
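The diagram itself doesn’t reproduce well as text, but the logic it implements goes roughly like the sketch below. Be warned that this is only an approximation under assumed names: Python threads and threading.Event objects stand in for the reentrant clones and the dynamically created, per-interval UDEs, and the actual Modbus read is reduced to a comment:

```python
import threading

def launch_acquisition_process(interval_ms: int) -> None:
    """Rough stand-in for Process Launcher.vi: create the two interval-specific
    events, then start the Metronome and Modbus Reader clones with references
    to both."""
    acquire_event = threading.Event()   # fired by the metronome on each interval
    stop_event = threading.Event()      # fired when this instance should shut down

    def metronome() -> None:
        # Fire the acquisition event at the predefined interval until told to stop.
        while not stop_event.wait(interval_ms / 1000.0):
            acquire_event.set()

    def modbus_reader() -> None:
        # Wait for the metronome's event, then scan and publish the configured addresses.
        while not stop_event.is_set():
            if acquire_event.wait(timeout=0.1):
                acquire_event.clear()
                # ...read the configured Modbus addresses and publish the results...

    threading.Thread(target=metronome, daemon=True).start()
    threading.Thread(target=modbus_reader, daemon=True).start()
```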
The Big Tease
The next thing we need to look into is the VIs that are being launched and the logic that resides inside Interval Registry.vi, but this post is getting long so that will have to wait until next time.
Until Next Time…
Mike…