Implementing Dynamic User-Defined Events

In the previous post, we started looking at how to deal with the situation where you have reentrant VIs that need to have UDEs that are specific to particular instances of the VI. That post covers a lot of the basic theory and design issues, so if you haven’t read that post, please take a few moments to check it out as much of the following won’t make sense without that background.

Filling in the Blanks

The basic approach we took in designing our example consisted of first defining the program’s overall intended operation. With that theory sorted out, we then began creating the program’s high-level structure, being sure to leave blanks or prototypes as placeholders for details that we hadn’t yet defined. At one time this sort of approach would have been considered heresy. The feeling was you had to have everything worked out in detail before you wrote even one line of code or created even a single VI.

Clearly there are a lot of practical problems with this approach, but one of the biggest (in terms of long-term impact on the project) is that it leads to a break-down in the walls that information hiding is trying so hard to erect. Think about it for a second: if an integral part of designing a process launcher includes figuring out the internal operation of the processes it is going to launch, it’s going to be very difficult to keep that knowledge from contaminating your launcher design. Or to put it another way, there will be a strong tendency to create code that depends on the process VIs behaving in a particular way.

You can see a better approach in the way we designed the launcher for our reentrant VI. Last time we defined a VI that (beyond the mechanics of how to launch a reentrant VI) only concerned itself with the data that the process VIs need to do their work. Based on a black-box description of what the routines were supposed to do, we were able to create the logic for providing that data without worrying about what the routines were going to do with it. This is a Very Good Approach.

Getting Registered

One of the “blanks” that we left to be filled until now was a VI called DAQ Operations.lvlib:Interval Registry.vi. When we first considered this VI’s external persona we noted that, when passed an enumerated value Check and an interval, it would return a Boolean flag indicating whether or not that interval already existed. However, we made no assumptions about how it might go about completing that task. Having taken, so to speak, a step inside the veil, we can look at how it works.

Interval Registry - Check

This is the logic that completes the Check function, and as it turns out, there isn’t really very much to it. In fact, we see logic that implements a simple FGV. All the function in question really does is search an array of intervals to see if the one we asked about exists. Given this level of simplicity, a logical question would be, “Why bother with the subVI? The process manager has a loop, why not just use it to maintain the list of active intervals?”

This approach would handle half of the problem quite well – the half dealing with whether or not a particular interval was launched. But if you think about it, that isn’t what we really need to know. Operationally, we don’t care if a particular interval was ever launched. We want to know whether the interval is still running, and the interval’s current state is something of which the manager has no knowledge. Remember that when we defined the rules governing the high-level behavior of the acquisition subsystem, we said that when an acquisition process sees that it has no addresses left to poll it will shut itself down. How is the process manager supposed to know that happened?

Beyond this practical consideration, there is also a conceptual problem associated with storing the interval list in the process manager. You might not realize it, but all data has an associated “scope” that defines the area, or context, which owns the data. If we store the interval list in the process manager we are, conceptually, giving one specific VI ownership of a piece of information that should belong to the subsystem as a whole. Moreover, that one specific VI would then be required to manage all accesses to that data. Now while that functionality could certainly be implemented, it would be at the cost of additional complications in the form of added signalling and event handlers to respond to those messages.

By contrast, moving the array into a FGV that any VI in the subsystem can access makes the data immediately available to the subsystem as a whole: no event handlers, no added signalling. As we get into the VI that performs the (simulated) Modbus IO (Collect Data.vi), we will see how the FGV’s two remaining functions work.
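Of course, the real implementation is a LabVIEW diagram, but for readers who think better in text, here is a rough Python sketch of what the registry FGV amounts to. The operation names mirror the VI’s enumerated commands; everything else (the class name, the locking details) is my own illustration, not the actual code.

    import threading

    class IntervalRegistry:
        """Conceptual analog of Interval Registry.vi: a functional global
        holding the list of currently running acquisition intervals."""
        _lock = threading.Lock()
        _intervals = []   # the shift-register equivalent: persists between calls

        @classmethod
        def check(cls, interval):
            # Check: is a process already servicing this interval?
            with cls._lock:
                return interval in cls._intervals

        @classmethod
        def insert(cls, interval):
            # Insert: an acquisition process records itself as running
            with cls._lock:
                if interval not in cls._intervals:
                    cls._intervals.append(interval)

        @classmethod
        def delete(cls, interval):
            # Delete: a process removes itself as it shuts down
            with cls._lock:
                if interval in cls._intervals:
                    cls._intervals.remove(interval)

The key point is the data that persists between calls – the array in the FGV’s shift register becomes the class-level list here – so every caller sees the same registry.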

Keeping Time

Next, let’s look at the reentrant VI (Metronome.vi) that is responsible for triggering acquisitions at a fixed interval. You will recall that the VI is passed three parameters when it is launched: the desired interval between acquisitions, the event to fire when the interval times out, and the event that will fire when the interval is shutting down. Here is the logic for initializing the VI.

Metronome - Initialization

We see that the interval value drives the Timeout node of an event loop, the shutdown event is registered, and the event that this VI will fire is simply passed into the event loop. As you might expect, the event loop itself only has two events. Here is the Timeout event:

Metronome - Timeout

No surprises here: all the event needs to do is fire the acquisition event – which this code does with alacrity. The shutdown event (which handles both types of shutdown) is likewise uncomplicated:

Metronome - Stop Interval

It destroys the two dynamic UDEs and stops the event loop. The acquisition event is destroyed because once a shutdown is initiated, it is no longer needed. The interval stop event is destroyed because it is only fired in the acquisition VI, so by the time we get to this point in the code it has already been fired by the only other VI that needs it. The small delay between the two operations is to ensure that the acquisition VI has time to start shutting itself down.
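If you were to translate the metronome’s behavior into a text language, it might look something like this sketch. The threading.Event stands in for the Stop Interval UDE and the callable stands in for firing the acquisition event; the names are mine, not the VI’s.

    import threading

    def metronome(interval_ms, fire_acquisition, stop_event):
        """Conceptual analog of Metronome.vi: fire the acquisition event
        every interval_ms milliseconds until told to stop."""
        while not stop_event.wait(timeout=interval_ms / 1000.0):
            fire_acquisition()   # the Timeout case: trigger one scan
        # stop_event fired: the loop ends and this clone shuts down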

Getting into the Publishing Business

The last code we need to look at is the reentrant VI that reads the simulated data and publishes it for use (Collect Data.vi). Again starting with the initialization logic, we see something very similar to the metronome VI:

Collect Data - Initialization

The main difference is that this VI needs to register to receive both dynamic UDEs. Although the VI will be generating the Stop Interval event, it also needs to be able to respond to it because that context is the best place to put the logic for deinitializing any acquisition logic that might be used. Another conceptual difference from the metronome is that this VI relates to the interval input in a fundamentally different way. For the metronome the interval is a number with a specific meaning: it is the number of milliseconds between triggers. For this data collection routine, however, it is simply a number that provides a way for the process to identify itself. The broader point is that you need to remember to manage expectations when different processes look at the same value and see different things.

Because this VI is going to be acquiring data, and performing other operations, its event loop is also more complex. Let’s look at the 5 events it will handle:

Timeout – This application is only generating simulated data, but in most other situations, there will need to be some sort of initialization performed, like opening DAQ references or establishing connections to one or more Modbus devices, and this event is a good place to handle such initialization. The way this logic is written, it is easy to make the initialization run just once, but still have it readily available if it ever needs to be run again.

Collect Data - Timeout

However, there is other initialization that needs to be performed even for simulated data. Specifically, if the initialization logic completes with no errors, we need to tell the rest of the application that this process is open and ready for business. To record that state change, we use our “repository” VI again, but this time running the Insert logic…

Interval Registry - Insert

…which is the very soul of simplicity.

Start Addresses – This event’s purpose is to maintain the process’ internal list of addresses in response to the user starting additional addresses. In completing this work, there are two cases that it will need to address.

Collect Data - Start Addresses - Adding

First there is the case where the interval number matches the value that the process is using to identify itself. In that situation the code needs to add the new addresses to the existing list, or more correctly, it needs to add only those addresses that don’t already exist in the array. This logic protects the process from what would be a very common operator mistake: adding addresses that already exist. The second situation is one where the interval number does not match the value that the process is using to identify itself.

Collect Data - Start Addresses - Delete Case

In this case, we need to enforce the rule that only one process can be polling a given address. Hence, the logic needs to remove from its array any addresses contained in the new event. Of course this operation raises the spectre of the entire polling array being emptied out, with the attendant requirement that the interval shut itself down. The logic handles that scenario with a two-step process. First it tells the rest of the application that it is stopping by calling the registry VI using the delete logic.

Interval Registry - Delete

After removing itself from the registry, the VI fires the Stop Interval UDE to close both itself and its associated metronome process.
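Pulling the two cases together, a text-language sketch of this event handling might look like the following, reusing the IntervalRegistry sketch from above. The class and helper names (fire_stop_interval in particular, standing in for firing the dynamic UDE) are hypothetical.

    class AcquisitionProcess:
        """Hypothetical stand-in for one Collect Data.vi clone."""
        def __init__(self, interval_ms, addresses, fire_stop_interval):
            self.interval = interval_ms
            self.addresses = list(addresses)
            self.fire_stop_interval = fire_stop_interval  # fires Stop Interval

        def on_start_addresses(self, msg_interval, msg_addresses):
            if msg_interval == self.interval:
                # Our interval: merge in only addresses we aren't already polling
                for addr in msg_addresses:
                    if addr not in self.addresses:
                        self.addresses.append(addr)
            else:
                # Another interval has claimed these addresses: drop any we hold
                self.addresses = [a for a in self.addresses
                                  if a not in msg_addresses]
                if not self.addresses:
                    # Nothing left to poll: deregister and shut down
                    IntervalRegistry.delete(self.interval)
                    self.fire_stop_interval()

        def on_stop_addresses(self, msg_addresses):
            # The simpler Stop Addresses case ends with the same tail logic
            self.addresses = [a for a in self.addresses if a not in msg_addresses]
            if not self.addresses:
                IntervalRegistry.delete(self.interval)
                self.fire_stop_interval()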

Stop Addresses – This event has logic that is similar to the previous event’s delete logic, but is simpler because the only thing that matters is whether the indicated address is in the process’ address list.

Collect Data - Stop Addresses

Get Data – This event generates an array of simulated data and passes it to the static Publish Data UDE, along with an array of addresses associated with the data values. In addition, for troubleshooting purposes, the code also writes the data to a front panel table.

Collect Data - Get Data

Stop Interval; Stop Application – Finally, this event stops this VI if either the interval or the application as a whole is shutting down.

Collect Data - Stop Interval

As the comment says, if the acquisition logic needs to be deinitialized, this is the place to put that logic. But why is the code interested in the front panel’s state? Although this VI usually runs in the background unseen, there are times when you want to be able to view its front panel so you can verify its operation. In this example, I implemented that functionality using the VI properties to force the front panel open when it starts running. This logic checks to see if the front panel is open and, if so, closes it.

Testing the Code

So with all the code implemented, here are the Subversion links to the application and the toolbox of reusable code:

Dynamic Registration – Release 1
Toolbox – Release 18

The first thing I would recommend after downloading the code is to go through it while re-reading both this post and the previous one. Often it is easier to understand things that are otherwise a bit obscure by looking at the actual code, rather than just pictures of it.

Next run the top-level VI (Dynamic Registration.vi). When the front panel opens, click on the Add Addresses button to define some addresses. For the purpose of this example, I created a simple dialog box that lets you specify a starting address, the number of consecutive addresses to collect, and the sample interval. The starting address needs to be between 40000 and 49999, the number of consecutive addresses must be less than 1000 and the sample interval needs to be between 300 and 5000 (milliseconds). These parameter limits are set in the dialog box code – feel free to change them as you desire. Likewise, the output of this dialog box is a list of addresses, so you can also change the selection interface if you wish.

To get started, define 5 addresses starting at 40000 with a sample interval of 1000-msec. You should see the front panel of the acquisition VI pop open showing the 5 addresses being updated once per second. In addition, the main GUI should show data from the same 5 addresses.

Click the Add Addresses button again, but this time define 5 addresses starting at 40008, and updating every 2000-msec. A second acquisition VI window will open showing the 5 new addresses, and the main GUI will show a total of 10 addresses with the results changing at different rates.

Let’s next see what happens if you specify addresses that are already being polled. Click the Add Addresses button one more time and define 5 addresses starting at 40004, and updating every 3000-msec. This action defines a range where 40004 is already being polled once a second and 40008 is being polled every 2 seconds. In response you will see a third acquisition window open, but you will observe that one address is removed from each of the two existing polling lists, and that the main GUI shows a total of 13 addresses being polled.

Finally, to test the auto-shutdown operation, click the Delete Addresses button and, in the resulting dialog box, tell the system to delete 5 addresses starting at 40008. Because the 2000-msec interval is emptied out, the acquisition VI window associated with that interval will close. In addition, the polling list for the 3000-msec interval will be reduced by one.

The Big Tease

So that’s about all for now on this topic, but what’s in store for next time? One of the things that I like to talk about in this venue is the sort of thing that can cause unexpected complications, so next time I’m going to discuss what happens when a DLL misbehaves as you are trying to close it.

Until Next Time…
Mike…

Dynamic UDEs: the Power for Reentrant Processes

If you have an application that you want to construct from multiple parallel processes, a key requirement is signalling – telling the various parts of the application that something important has happened, or is happening, in one of the other processes. When it comes to fulfilling that requirement, a valuable tool to have at hand is the User Defined Event, or UDE. In fact, over the past year this blog has considered a variety of ways to use this tool. However, all these implementations have one thing in common: They all use static UDEs. In other words, the event that will be used to signal a particular occurrence is decided when the code is created and it never changes.

But what if you don’t know until runtime what event you want to use? For example, it is common with reentrant code that you won’t have a fixed set of UDEs because there isn’t a fixed set of VIs that are running. Sometimes you need to be able to send a message to one particular clone. In such a situation, you aren’t just sending general signals, but signals that are unique to a particular instance of a VI. The solution is to use UDEs that are dynamically generated in the same way that the reentrant VIs are.

To demonstrate this technique, the next couple posts will highlight this use case, starting this time with a discussion of some of the design considerations. Our intro to this exploration is an application that I was once assigned to maintain.

The Problem Defined

The job asked me to expand an existing application that had been developed by a large LabVIEW consulting firm located here in the US. The problem was that the software wasn’t designed for expandability. Specifically, a key part of the program was a subsystem that polled a user-defined list of Modbus registers at a rate that was also user-defined. Because the user-defined inputs could change at any time, the decision was made to make the acquisition loops event-driven, and create a separate “metronome” process that would fire an acquisition event at the user-defined rate. So far, so good. The real issue is with the implementation of this concept.

Apparently, there was originally only going to be one timed interval but, as you might expect, a requirement was later added to create a second one. To meet this scope change with as little effort as possible, the decision was made to simply duplicate all the VIs used for the first process while appending “2” to the end of the names – an expedient that is unfortunately common in code developed by “large LabVIEW consulting firms”. To make matters worse, the modularization was poor, so the program was basically built around a huge cluster containing literally dozens of references for UDEs, notifiers and queues that ran through nearly every VI in the application.

In the end, the only way to implement the required functionality was to completely redesign and reimplement that portion of the code. The really ironic part is that it took me less time to implement the functionality correctly, than it did to do it poorly the first time. Using this description as a jumping-off point, I obviously won’t be discussing the solution that I implemented for the original application. What we will do is use it as “inspiration” for examining techniques that could cover a wide range of similar requirements.

Getting Moving in the Right Direction

First, we should recognize that while our earlier conversation incorporates a pretty good description of what the code basically needs to do, we do need to flesh it out a bit: At run time, we need to be able to create multiple independently-timed data reads with varying intervals between reads. In addition, the results from these timed acquisitions need to be “published” somehow so they can be used by the rest of the program.

With this broader functional definition in place, we can begin to consider the appropriate API for accessing that functionality. As I have said many times before, one of the cornerstones of a good API is the concept of information hiding – the process of deciding what information to expose to the calling code, and what information to keep private. So like a politician running for reelection, our next job is to decide what to hide and where to hide it.

The basic principle in play is to hide any information that the calling code either doesn’t need to know, or which would be counter-productive for it to know. If we think about it, there are exactly three pieces of information that the calling VI actually needs to know in order to define and use an acquisition task:

  1. The Modbus address to read
  2. The interval between reads
  3. How to receive the published data

On the other hand, however, there is a (much) larger list of things that the calling code doesn’t need to know, among which are things like:

  1. How the Modbus is read
  2. How the timers work
  3. The signalling that the timers use
  4. How many acquisition processes there are
  5. How the timing is implemented
  6. Internal data structures
  7. etc…

Now that we have a handle on what we want to expose – and just as importantly, what we do not want seen – we can start designing the outward interface.

The Outside View

The obvious place to start is with the VI that will configure or setup the polling. Given that we have already decided that we only want to expose two parameters (addresses to read, and the read interval) we can go ahead and create a placeholder VI that provides the appropriate IO, but which is for now empty.

Start Addresses

Note that with the exception of the error cluster, this VI has no other outputs. Remember that all this VI is doing is identifying a group of addresses that some hidden “something” is going to read at the specified interval. Consequently, the only response that this VI can give is whether or not the specification process was successful. The assumption is that other parts of the software will independently report errors that occur during the data reads. In the same way, we are also going to need a VI to tell the “something” that is doing the reading that we no longer wish to poll particular addresses.

Stop Addresses

The interface to this VI is even more basic than the one for starting the polling of addresses, and the reason is simple. At this point we don’t care at what rate a given address is being polled, we just want it to stop. You could argue that knowing the polling interval would make it easier for the code to find the addresses to delete. While that is true, it would also mean that the calling code would have to keep track of the addresses that it is monitoring – which could quickly become an awful lot of redundant information for the caller to remember and manage.

Last but not least, the third interface VI that we need to specify is the data publication mechanism. To keep things simple, we should use a technique that is easy to implement, so I am picking the logical equivalent of the callback techniques found in other languages: a User Defined Event. For this application, the event will return an array of address/value pairs that the calling application can use as it desires. Note that in this implementation, all the acquisition processes will be sending their data to the same place, so this can be a static UDE.
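Summarizing the whole public face of the subsystem in text form, the three entry points might look like this. The names are illustrative, and the bodies are deliberately left empty because everything behind them is hidden:

    def start_addresses(addresses, interval_ms):
        """Begin polling the listed Modbus addresses every interval_ms.
        Success or failure is the only thing reported back."""
        ...

    def stop_addresses(addresses):
        """Stop polling the listed addresses, whatever their interval."""
        ...

    def register_for_published_data(callback):
        """The static Publish Data UDE: callback receives a list of
        (address, value) pairs each time a scan completes."""
        ...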

The Test Application

Finally, while we’re talking about interfaces, let’s also look at the calling application. Because all the “heavy lifting” will be encapsulated in subVIs, the calling code can be very simple. It has buttons for identifying addresses that we wish to poll, addresses we want to stop polling, and stopping the application. To display the results, the application’s front panel incorporates a table that shows the addresses in ascending order, and the last data value acquired for that address.

Main GUI

The block diagram is, likewise, pretty plain. It is event-driven with one event for each of the three buttons on the front panel, plus one to handle the UDE that publishes the data. You can check out its code in the source later.

Crawling Under the Covers

With the front end interfaces thus defined, we now can start thinking about code that will make the interfaces do something useful. The simple part is the UDE for publishing the results because it uses the same technique that we have used many times over the past year. To summarize the implementation, a library named for the event (Publish Data.lvlib) has four subVIs: One (structured as a FGV) for creating/buffering a reference to the event, and one each for registering to receive the event, generating the event, and destroying the event. In addition, it incorporates a typedef that defines the event data.

Publish Data Event Library
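As a rough text-language stand-in for those four operations, a minimal sketch might look like this. The FGV’s create/buffer behavior becomes a lazily created singleton; the details are mine, not the library’s.

    import threading

    class PublishDataEvent:
        """Rough analog of Publish Data.lvlib: one shared event that any
        number of subscribers can register for."""
        _instance = None
        _lock = threading.Lock()

        @classmethod
        def get_reference(cls):
            # FGV behavior: create on the first call, buffer thereafter
            with cls._lock:
                if cls._instance is None:
                    cls._instance = cls()
                return cls._instance

        def __init__(self):
            self._subscribers = []

        def register(self, callback):
            self._subscribers.append(callback)

        def generate(self, address_value_pairs):
            for cb in self._subscribers:
                cb(address_value_pairs)

        @classmethod
        def destroy(cls):
            with cls._lock:
                cls._instance = None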

The process for managing the addresses to read requires a bit more thought. To begin with, we know that there are only two address management operations: adding and removing addresses. However, we also know that if we are to conform to our API, we need to hide those explicit operations from the calling application. This situation is one of those development scenarios where the words that we use to talk about what we are doing can help or hinder our understanding of what we are trying to accomplish. To see what I mean, let’s consider a similar case that is part of LabVIEW itself: the logic for handling queues.

You may have noticed that with the built-in API you don’t “create” or “destroy” queues. Rather, you “acquire” and “release” references to the queue. While you may wonder what difference this wording makes, we need to remind ourselves that it isn’t simply a matter of an API developer running amuck with a thesaurus. It actually describes a very real and very important distinction. Instead of creating a queue, you are simply telling LabVIEW that you want to acquire a reference to a queue. Now, when you make this request, there are two possible situations:

  1. The queue doesn’t currently exist and LabVIEW needs to create one.
  2. The queue already exists so all you need is to get a reference to the one that is there.

Likewise, you don’t destroy a queue when you are done using it, you simply tell LabVIEW that you have no further need for it by releasing the reference you previously acquired. Because LabVIEW keeps track of how many open references are associated with each queue, LabVIEW can tell when the queue is no longer in use and destroy it automatically. Now consider for a moment the degree to which this hidden functionality simplifies your code. You no longer need to worry about what or who is using the queue, and when it is safe to destroy it. All that potentially complex logic is hidden in the way that the functionality is encapsulated, and the difference in terminology highlights that difference.
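The bookkeeping itself is simple enough to sketch; the point is that the caller never sees the count. This is an illustration of the acquire/release semantics only, not LabVIEW’s actual internals:

    class QueueStore:
        """Sketch of acquire/release semantics: the named resource is created
        on the first acquire and destroyed when the last reference goes."""
        _registry = {}   # name -> [resource, reference count]

        @classmethod
        def acquire(cls, name):
            entry = cls._registry.setdefault(name, [[], 0])  # create if absent
            entry[1] += 1
            return entry[0]

        @classmethod
        def release(cls, name):
            entry = cls._registry[name]
            entry[1] -= 1
            if entry[1] == 0:
                del cls._registry[name]   # last reference released: destroy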

As we are designing our API, we need to adhere to the same idea. So instead of “adding” and “removing” addresses, let’s think about this problem in terms of “starting” and “stopping” acquisition from lists of addresses. To grasp the benefits that we can garner from this change in language, let’s consider what actually needs to happen behind the scenes for each of these operations. Just to be clear, this complexity has nothing to do with how we choose to implement the functionality; it is inherent in what we are trying to do. This logic will have to be created regardless of how we structure the code.

Starting Acquisition: This might seem to be pretty easy, but what if a user starts acquiring data from an address at one rate, but then later changes their mind (or makes a mistake) and starts the same address at a different rate? There is no point in having the same address read at two different rates. Likewise, it is not clear that this action should be considered an error. Therefore, to do what the user is requesting you first have to remove that address from the process that is currently polling it, and then reassign it to a different (perhaps new, perhaps preexisting) acquisition process. Taking the point further, what if the address you remove from a process is the only address that it is currently polling? Removing that address would leave that acquisition loop with nothing to do, and so we need some way to stop it.

Stopping Acquisition: For its part, stopping the acquisition of addresses can hide some complexity of its own. For example, say the user identifies a list of 4 addresses to be stopped. There is no guarantee that all the addresses are being polled by the same acquisition process – and even if they are all together, we don’t know which process is reading them. This fact implies a need to be able to search all the acquisition processes for a particular address. Plus, as before, if we remove all the addresses associated with a particular interval we need to stop that acquisition task.

Remember, this functionality will always be needed, it is simply more robust (and therefore smarter) to hide it from the calling application by encapsulating it in our API.

Getting Down With the Acquisition

To this point we have described the acquisition processes as having two loops: one performs the acquisition and one is a “metronome” function that periodically fires an event that causes the acquisition loop to acquire one scan of all the channels contained in its current configuration. In addition, both of these processes need to be reentrant so multiple copies of each can be launched as needed. Now we need to refine that basic description by specifying the rules that will govern their operation.

First, let’s state that each process is self-maintaining, both in terms of its own operation and its data. What that requirement means is that each instance of the acquisition process will maintain for itself a list of the addresses that it is polling. Consequently, when a process receives a system message (via UDE) to stop polling one or more addresses, it will examine its own list of addresses and remove any that are in the “stop polling” list.

Second, we will state that there will be only one acquisition process running for each acquisition interval. Hence, if a process receives a system message specifying its own polling interval, it will add those addresses to the list of addresses it is already polling. For example, say there is a process running that is acquiring data from 4 addresses every 1000 msec and it receives a system message that the user wants to start an additional 5 addresses at that same sample interval. The code will add those 5 new addresses to the 4 that it is already polling.

Third, if an acquisition process receives a system start message that does not specify its polling interval, it will prevent duplicate polling by automatically searching its configuration for the addresses in the message and deleting any that it finds.

Fourth, if after stopping one or more addresses a process finds that its polling list is empty, it will shut itself down by firing an event that is unique to that particular instance.

Fifth, in the event that the user starts addresses for an interval that is not currently running, the logic incorporates a manager function that will start-up a new acquisition process to handle that interval.

Let’s Build Some Code

Finally we are ready to start writing some code to materialize what has to this point been mostly words. A good place to start this work is with the manager VI that will launch new acquisition processes for us. I like this approach because it will give us the opportunity to look at how the pieces fit together.

How the Pieces Fit Together

The operational rules we listed earlier provide a number of clues as to where we are going next. To begin with, we talked a lot about messages. This information by itself is enough to let us design and implement the inner workings of the two interface routines we prototyped earlier. They are simply the event generation VIs of two more static UDEs (Start Addresses and Stop Addresses) – and we already know how to build those.

Next, because we have defined what the manager basically needs to do we can create an event-driven shell for it that can respond to two events: Stop Application and Start Addresses. Again, if you have been reading this blog for a while, the Stop Application event is an “old friend” so we will concentrate on the manager’s response to the Start Addresses event. Since this response will vary depending upon whether or not the specified interval already exists, we need to design a way for the code to make that determination, while continuing to bear in mind the principles of information hiding, to wit, the manager doesn’t need to know how the determination was made, just what the result was.

Start Addresses Event Handler

To support this functionality, we will create a VI (called Interval Registry.vi) that the acquisition processes will maintain as they start and stop. This function will support three operations: Check, Insert and Delete. For the Check operation we see here, the routine checks to see if there is already a process running at the indicated sample interval. If there isn’t, the VI returns a false Boolean value that causes the manager to call a launcher VI (Process Launcher.vi) that kicks off all the VIs needed to service the interval. If, however, the interval is already running there is nothing more for the manager to do, so the true case (not shown) does nothing.
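Reduced to text, the manager’s response to Start Addresses boils down to a few lines, reusing the IntervalRegistry sketch from the previous post’s discussion (launch_interval_process is the launcher sketched below):

    def manager_on_start_addresses(interval_ms, addresses):
        """Conceptual analog of the manager's Start Addresses handler."""
        if not IntervalRegistry.check(interval_ms):
            # No process is servicing this interval yet, so launch one
            launch_interval_process(interval_ms, addresses)
        # If it already exists, the running process picks up the addresses
        # itself from the same Start Addresses event: nothing more to do.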

Turning our attention now to the VI that launches the new process VIs, we have built this sort of launcher several times before, so we already know its basic structure. What we need clarity on is the details of the data that needs to be passed to the two VIs.

Starting with the simpler of the two (which we will call Metronome.vi), its only purpose is to fire an acquisition event at some predefined interval. However, if it is going to stop when it has nothing more to do, it also needs to know what event will tell it to stop. Note that both of these events are specific to each interval that is created. Consequently, they both need to be created on the fly when the acquisition process is launched, with their references passed into the new clone as parameters.

In the same way, looking at the acquisition VI (we’ll call it Modbus Reader.vi) we see that it is going to be receiving the acquisition event that the metronome fires and is also going to need to shut down like the metronome, so it will need to get references to the same two events. The only other messages it will need to receive are the static ones that we discussed earlier, but because they are static we don’t have to be concerned with them here.

So we add in the event definition logic, and this is what our finished launcher VI looks like:

Process Launcher
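In text form, and with threads standing in for launching reentrant clones, the launcher amounts to something like this sketch. The metronome function is the one sketched earlier; modbus_reader is an assumed stand-in for the acquisition clone, and the per-instance threading.Events stand in for the two dynamic UDEs.

    import threading

    def modbus_reader(interval_ms, addresses, get_data, stop_interval):
        """Assumed stand-in for the acquisition clone (Modbus Reader.vi)."""
        while not stop_interval.is_set():
            if get_data.wait(timeout=0.1):   # wait for the metronome's trigger
                get_data.clear()
                # ...read the addresses and fire the Publish Data event here...

    def launch_interval_process(interval_ms, addresses):
        """Sketch of Process Launcher.vi: create the two per-instance events,
        then launch the metronome and reader clones that share them."""
        get_data = threading.Event()        # stand-in for the dynamic Get Data UDE
        stop_interval = threading.Event()   # stand-in for the Stop Interval UDE

        threading.Thread(target=metronome,
                         args=(interval_ms, get_data.set, stop_interval),
                         daemon=True).start()
        threading.Thread(target=modbus_reader,
                         args=(interval_ms, addresses, get_data, stop_interval),
                         daemon=True).start()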

The Big Tease

The next thing we need to look into is the VIs that are being launched and the logic that resides inside Interval Registry.vi, but this post is getting long so that will have to wait until next time.

Until Next Time…
Mike…

Finishing the Configuration Management

For the last couple posts we have been looking at how to best utilize object-oriented programming methodologies. In that quest, we have taken as our example the goal of converting the parts of our testbed application that use stored configuration data to a configuration manager based on object classes. In this implementation, the classes represent the various types of potential data repositories.

Maximizing Flexibility by Leveraging Existing Code

When we stopped last time we had created all the basic code infrastructure and all we had to do was construct the dynamic dispatch methods that retrieve the data the application needs. In creating these methods, we have as our guiding principle reducing the amount of code we have to create by reusing as much code as we can. In other words we need to really spend some time thinking about how to structure our code such that it combines maximum reuse with maximum flexibility.

One good way of attaining that ideal is to allow for multiple levels of dynamic dispatch within the same method. Let’s say for the sake of argument that you have a method that will need to be accessible from 10 different subclasses. Furthermore, let’s say that 4 of those subclasses all need to do the same thing, an additional 4 are mostly the same but with a few differences, and the last 2 subclasses require fundamentally different logic to perform the same task. You could implement the logic that is common to the largest group of subclasses (the first 4) in the parent and let the remaining 6 override the parent to define their own solutions. This solution would work, but could potentially result in a lot of duplicated code. It depends on how similar the second group of 4 subclasses are to the first group of 4.

In creating the Initialize New method last week we saw a far better way to optimize the code: For the 4 subclasses that are similar, we could call the parent method in the child (thus taking advantage of that existing code) and then add a little logic to customize the functionality. This solution will work well in many situations, but one area where it will not is in scenarios where the common parts of the code need to pass data to the unique parts.

To address those situations, a very useful solution can be to write the parent method such that it has a subVI, encapsulating the similar-but-unique bits, that is itself a dynamic dispatch VI. By providing multiple levels of dynamic dispatch, you create a situation that is easy to understand and minimizes duplication of code. In our example, the first 4 subclasses would use the parent implementations of the dynamic dispatch VI and subVI. The 4 subclasses that are similar, but a bit different, would use the parent method but override the dynamic dispatch subVI, and the two that are fundamentally different would override the parent method VI itself.
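For those who know class-based text languages, this is essentially the Template Method pattern. A minimal Python sketch of the three situations just described (all names are illustrative):

    class Parent:
        def method(self):
            common = self.shared_setup()     # code common to most subclasses
            return self.unique_bit(common)   # the dynamic dispatch subVI analog

        def shared_setup(self):
            return "common data"

        def unique_bit(self, data):          # default used by the first 4 subclasses
            return f"default handling of {data}"

    class SlightlyDifferent(Parent):
        def unique_bit(self, data):          # override only the inner subVI
            return f"custom handling of {data}"

    class FundamentallyDifferent(Parent):
        def method(self):                    # override the whole method
            return "completely different logic"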

Getting Down to Business

So let’s start looking at what we need to do to finish our conversion — while remembering that most of this discussion will be about the reorganization and repurposing of existing code. For the most part, I won’t be going into how the basic functionality works since we discussed that code when it was originally introduced.

The basic internal structure of our four “blueprint” VIs will be largely similar. We will call the VI that creates the object we want, followed by a dynamic dispatch method that does what we need done. However, note that there isn’t (and shouldn’t be) a direct one-to-one correlation between the blueprint VI and the underlying method. We still need to be looking for ways to minimize redundant code. Back when we were setting up the testbed application originally we considered the need for unstructured configuration values — like you would usually store in an INI file. In fact, we implemented that capability and used it to store the default sample period. Consequently, one of the three methods we end up implementing will be used in a couple of different places now, and will be reusable in the future for other unstructured configuration data. So, let’s start with that one.

Get Misc Setups

Creating this method follows the same procedure as we have used before, except that this method has two input parameters and an output string. So the finished parent method has a front panel that looks like this:

misc read front panel

Another difference with this method is that if it is called for a subclass that does not override it, the parent VI does not provide any default functionality. To see what I mean, check out the VI’s block diagram. You see that it does nothing. This structure might seem rather pointless, but in reality it can at times be pretty handy. Say you have a method that is really only needed by one subclass. Without dynamic dispatch you would have to put a case structure around the code so the VI is only called in that one particular circumstance. However, with dynamic dispatch, this selection takes place automatically and without cluttering up the code with case structures.

With the parent VI (Read Misc Settings.vi) created and saved, we can now build the two subclass overrides. Starting with the Text version, to read the data from the INI file, we create an override VI in Config Data_Text.lvclass containing code that uses the built-in configuration VIs to fetch the data:

read misc - ini

But wait! This won’t work. You notice that the path to the INI file comes from the class data — which is fine except that we forgot to initialize it. I wanted to highlight this point because it is a very easy, and common mistake to make. Initializing the DVR is not the same as initializing the data in the DVR. All we have to do to fix this memory lapse is modify the class’ version of the Initialize New method, like so:

finish initializing the DVR - text

The subVI with the green banner builds the path to the application’s INI file programmatically. By the way, in case you’re wondering about whether the same issue applies to the other override to this method (the one for databases), the answer is “Yes”. However the solution there is a bit more complex, so we’ll deal with it in a moment.

The other thing to notice about the setup reading method’s code is that if the read doesn’t find the desired value in the INI file, it creates it and gives it a default value. Adding this extra bit is often helpful for recovering from situations where a user has gotten in and mucked around with the INI file and deleted something they should not have. In any case, to see this code in action, we need to finish up the code in the Get Default Sample Period.vi “blueprint” VI:

get default sample period

You can now run this VI and get a valid result from the INI file. Of course, if you set the configuration data selector value in the INI file to Jet you will get a zero back. This result occurs because we haven’t provided an override for that subclass yet. Note that you do not get an error, because not overriding a parent method is a perfectly valid thing to do.

Updating the Database Initialization

But we need the database data too, so let’s backtrack to the Initialize New VI in Config Data_DB.lvclass. The class data for this subclass is a string that the code will use in one way or another to connect to the database. However, there are three possible sources for this string:

  • Most ADO interfaces use a rather complex and specialized connection string that can identify logical names, network paths, security parameters and a lot more.
  • Although Jet does use ADO drivers, its connection string is much simpler. In fact, with the exception of the file path, it can be largely treated as a big string constant.
  • SQLite (which we aren’t going to support right now, but which exists in the class hierarchy) doesn’t use any ADO parameters. Keeping with its minimalist approach, all it needs is a file path.

So what are we going to do? Well, if you said, this sounds like a job for dynamic dispatch, you’re right! We need to create a method that will derive the correct string depending on which specific subclass is present. So let’s create a virtual folder and a subdirectory named _protected in Config Data_DB.lvclass with Protected access scope. Inside the virtual folder we will create a new VI using the dynamic dispatch template called Get Connection String.vi, and save it in the subdirectory. In addition to the signal IO the template provides, we need to add a string output:

get connection string

After saving this new parent class method, we can fill in the overrides. However, that code is pretty trivial, mostly scavenged from the original code, and has very little to do with OOP. Consequently, I won’t take time here to highlight it, but feel free to check it out for yourself. Then with this work completed, we can finish the initialization code.

finish initializing the DVR - jet
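Sketched in a text language, the connection-string dispatch we just built might look like this. The provider strings are abbreviated illustrations, not values copied from the code:

    class ConfigDataDB:
        def get_connection_string(self):
            raise NotImplementedError      # the parent defines the interface only

    class ConfigDataADO(ConfigDataDB):
        def get_connection_string(self):
            # Full ADO string: provider, source, security parameters, etc.
            return "Provider=...;Data Source=...;Integrated Security=..."

    class ConfigDataJet(ConfigDataADO):
        def __init__(self, db_path):
            self.db_path = db_path

        def get_connection_string(self):
            # Jet: essentially a constant plus the file path
            return ("Provider=Microsoft.Jet.OLEDB.4.0;"
                    f"Data Source={self.db_path};")

    class ConfigDataSQLite(ConfigDataDB):
        def __init__(self, db_path):
            self.db_path = db_path

        def get_connection_string(self):
            return self.db_path            # SQLite needs only the file path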

Accessing the Database

Although we will be using a Jet database, note that the override for the Read Misc Settings method will reside in the Config Data_DB_ADO.lvclass subclass, not Config Data_DB_ADO_Jet.lvclass. The reason for this placement is that with the exception of the connection string (which we handle during initialization) the query logic is the same for Jet or most other ADO databases. By the way, what would we do if we ever did find a DBMS that needed something different? That’s right, we would just create a subclass for it and override the method in that new subclass.

In any case, most of the code we need now was lifted wholesale from the old configuration library:

read misc - jet

And with that we are done with the first of our blueprint VIs. However, this is also the procedure we will follow for the VIs that read the error handling parameters, read the startup processes, and load the machine configurations for each state machine clone. Again, I won’t step through the creation of this code because, from the standpoint of this post’s main topic (object-oriented code development), there is nothing in these that is any different from the one we did go through. As a friend of mine used to say, “From here on, it’s just a matter of turnin’ the crank.”

Testing

To make the results of these changes easier to verify, I have made slight differences between the configuration in the INI file, and the configuration in the database. Notice that if you start the application with the INI file set to Text, the application launches with only two TC state machines running and the acquisition sample rate defaults to 500 msec. However, a setting of Jet produces the functionality we have seen before (3 state machines and a 1000 msec default sample rate).

Notice also that you have to make the INI file setting changes before starting the application. The logic could certainly have been written to make the settings reconfigurable on the fly, but it would have added another level of complication, so I will leave that as, “…as an exercise for the reader…”.

Before tagging this release of the code in SVN, I also went through the “recycled” code and made sure that the VIs were all saved in their proper locations and all the redundant code had been removed. A project window feature that made the latter task much easier was the ability to right-click on a folder and have LabVIEW list all the files with no callers. One thing to be careful about, however, is deleting “unused” files that are in classes. The logic behind this feature can’t identify VIs that are being dynamically linked — which pretty much describes dynamic dispatch VIs.

Finally, before closing out this post, I want to remind you of something important. I make no claim that this code is the best implementation for all possible situations. Although a lot of it is based on software that I developed for deliverable systems, the point of this blog is to provide you with examples, demonstrations, and models that you can then customize and mold to fit your customers’ specific needs. In music, there is the idea of “variations on a theme”. In fact, in a few cases the variations have become more famous than the original. So take the themes I offer here and feel free to deconstruct, rearrange and reassemble them into something that is new and exciting.

Testbed application Release 15
Toolbox Release 7

The Big Tease

What if I told you that LabVIEW incorporates a feature that can improve the quality of your code and reduce errors by helping to validate your math? Hey, this is a no-brainer! Everybody can use help validating their code. But, what would you say if I told you that most people never use this feature? Crazy…

Until Next Time…

Mike…

Objectifying the Testbed

Object-oriented programming as a technique promises a host of benefits, but suffers from the impression that it is in some way an “advanced” topic. In contrast, I feel that OOP is just a logical extension of the concepts that LabVIEW developers use every day. The basic problem has been with the way it has been taught. Nor have the various object-oriented frameworks, which are often overly complex and difficult to learn, helped matters. These bad “actors” often only serve to hide the inherent elegance of the OOP paradigm and scare off users who could benefit from it.

To help clear away some of the extraneous mystique, I have presented a brief introduction to the topic that provides a foundation sufficient to let us get into OOP by implementing a module for managing program configuration data that provides the calling application with a common interface regardless of how (or where) the data is stored.

Filling a Niche

Most of the work we will be doing will eventually replace the Configuration Management library. Now while this might sound like a major shift, it really is not because (like I repeatedly tell you) the whole point of good design is to make changes and upgrades like this possible. So let’s look at what this upgrade will need to do.

Normally a large part of any upgrade project is defining the requirements, but due to the design work that was put in originally, we already have a pretty good handle on what the new class structure has to do. In terms of surface functionality, we know we have to be able to handle all the same information as before — with, of course, the ability to add more when we want or need that ability.

Designing the Structure

The trick is going to be sorting out what new functionality will be needed under the covers. At the most basic level we need to be able to use either text files or databases to actually store the configuration data, so there we have two subclasses. But we need to consider whether each of those options needs to be broken down further.

On the “text file” side, the data might be coming from a standard INI file, or the code might be using a custom text file format. Custom text configuration files are very common when some of the configuration data is tabular, since it is a pain to store tabular data in an INI file. However, regardless of the format of the contents, the basic mechanism for reading and writing text files remains the same, so it probably won’t be valuable to have any subclasses under “text files”.

On the “database” side of things, however, the situation is very different. First of all, in terms of connectivity, you can access most databases through the standard ADO (ActiveX Data Objects, also sometimes called OLE DB) interface. However, “most” is not the same as “all”, and one common exception is SQLite. A popular, lightweight data management engine, SQLite can run on a variety of platforms — including some real-time systems. To keep its footprint small, SQLite utilizes a small custom DLL, rather than a large, but standardized, interface. So we need to make provisions for other types of connectivity by creating (for now) two subclasses below “database”: “ado” and “sqlite” — though we won’t be implementing the SQLite functionality right now.

Finally, what about “ado”? Can it be broken down further? Maybe. One of the advantages of ADO is that, for the most part, it does a pretty good job of hiding the differences between one database management system (or DBMS) and the next, but there are some variations it can’t paper over. These differences often relate to the version, or dialect, of SQL the DBMS speaks. However, sometimes differences arise because some DBMSs fundamentally don’t operate the same way. For example, while most DBMSs go to extraordinary lengths to hide exactly where and how the data is actually stored, Jet (the DBMS built into Windows) stores the data in a file you explicitly identify. Hence, while the connection to other DBMSs might be defined in terms of network paths and logical names, with Jet you are connecting to a particular file.

To provide for these sorts of functional nuances, let’s create a subclass below “ado” for “jet” — understanding there could be others in the future.

Configuration Data Classes

This is what the hierarchy looks like so far, all drawn out.
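In text form, the hierarchy amounts to nothing more than these declarations (Python classes standing in for the .lvclass files; the names are my own renderings):

    class ConfigData: ...                      # top-level parent class

    class ConfigDataText(ConfigData): ...      # INI or custom text formats

    class ConfigDataDB(ConfigData): ...        # anything database-backed

    class ConfigDataADO(ConfigDataDB): ...     # standard ADO/OLE DB access

    class ConfigDataJet(ConfigDataADO): ...    # Jet-specific nuances

    class ConfigDataSQLite(ConfigDataDB): ...  # custom DLL, not ADO (future)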

Doing the Rough Framing

When you are building a house the first tradesmen to show up onsite are the carpenters to do the so-called “rough framing”. This process creates the skeletal form of the final house that is covered with rough exterior plywood. The idea is that later workers will fill in the details and fine tune the construction. And metaphorically speaking, that’s what we have to do now for our configuration data class.

For a class hierarchy, that framing consists of the directory structure and the class files themselves. Using the techniques I gave last time, I first create mirroring directory structures inside the project directory and the project itself…

Configuration Data Classes in Project

Note that I have also created some virtual folders inside the Config Data class which represent actual subdirectories.

  • Interface
    The VIs in this folder will have public access scope. In fact they will be the only VIs in the library that are so scoped. Because these VIs are the only ones that outside callers will be able to call, they alone form the interface between the class hierarchy and the rest of the code.
  • _private
    As the folder title implies, the files that go in here will have private access scope. This assignment means that they will only be accessible from other VIs in the top-level class.
  • _protected
    This is another folder that specifies access scope for its contents. In the case of protected scope, the VIs in this folder will only be accessible from the top-level class, or any of its child classes.

Creating the Interface

With the functional scaffolding in place, we can start filling in the blanks. And the first thing we need to do is create what I referred to as the “blueprint” in the previous post — the VIs that the rest of the application will call. After going through the public VIs in the old library, we see that there are really only 4 VIs that the rest of the application uses directly:

  1. Get Default Sample Period.vi
  2. Get Error Handling Parameters.vi
  3. Get Processes to Launch.vi
  4. Load Machine Configuration.vi

Many of the others will still be used, but their presence will be hidden in subclasses. As I create these (for now) empty VIs, I make sure they have the same front panels and connector panes as the ones they will be replacing.

Adding Infrastructure

Getting back to our housebuilding analogy: After the framework is completed and the outside skin is on, the next job is to start installing some of the needed infrastructure, because without electric, water, sewer and perhaps gas connections, our new home is not much better than the cave dwellings that our prehistoric ancestors inhabited — and in some ways is far worse.

What we need to add to our nascent configuration management subsystem is some data handling, but not for our data. The data I’m talking about is the private internal data that the subsystem needs to maintain in order to do its job.

Thinking about what our various bits of code need to do, we see first that there is certain data that will commonly be needed regardless of how you actually end up getting the data. What I’m thinking about here is the name of the operator and a password. Now, some subclasses might need both, while others might need only one or the other, and that’s fine. The important point is that if all subclasses could potentially need at least some of this data, it has to be data that is associated with the top-level class. However, this requirement for global availability causes a problem.

Remember how when we were defining terms in the previous post we said that in the LabVIEW world an object is a wire? LabVIEW wires, and the data contained in them, are by definition not global. If you want data from a wire you need to be connected to it. So how can we create wires that are separate, but which still share at least some of the same data? Well, one very good way of doing it would be to create one central data store that all the wires can access, and as it turns out LabVIEW incorporates a feature that is very efficient and so is perfect for such an implementation. I’m talking about the Data Value Reference, or DVR.

Our approach will be simple. We first create a DVR that is defined to hold the data we need and make a reference to that DVR the class data for our Config Data class. Then to access that data, we create a family of data access VIs to insert data into, or read data from, the DVR.

The first step in this process is to create another virtual folder in the top-level class named _dvr with private access scope. Next, I create in that folder a typedef control named Config Mgr Data.ctl that consists of a cluster containing two strings, one for user name and one for password.

Config Mgr Data

When saving this control I create a subdirectory (called _dvr) to hold it. Likewise, I also create a VI in the same directory (and virtual folder) called Config Mgr DVR.vi that contains this code:

Config Mgr DVR

Two comments about this code. First, it has the basic form of a FGV where the variable value is the DVR reference. Because the case that generates the reference is only run once, this VI will always return a reference to the same DVR no matter how many times it is run. Second, the data for the DVR is the typedef we created. This point is critical. As with UDEs, if you define a DVR reference using a typedef, you can later change the typedef and it won’t break the reference.
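A text-language equivalent of this pattern, including the read/write accessors we will create next, might look like the following sketch. The cluster typedef becomes a small data class; the names are mine, not the project’s.

    import threading
    from dataclasses import dataclass

    @dataclass
    class ConfigMgrData:              # analog of the Config Mgr Data.ctl typedef
        user_name: str = ""
        password: str = ""

    _dvr = None                       # holds the one shared data store
    _dvr_lock = threading.Lock()

    def config_mgr_dvr():
        """Analog of Config Mgr DVR.vi: create the shared data store on the
        first call; every later call returns that same reference."""
        global _dvr
        with _dvr_lock:
            if _dvr is None:
                _dvr = ConfigMgrData()
        return _dvr

    # The _data access accessors then become one-liners:
    def read_user_id():
        return config_mgr_dvr().user_name

    def write_user_id(value):
        config_mgr_dvr().user_name = value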

To make this DVR available and inheritable through the class, I copy and paste the reference indicator into the class data cluster, like so.

Config Mgr DVR in Class Data

Then to provide access to the DVR contents, I create protected scope VIs that I store in a virtual folder and subdirectory both named _data access. Here is what the read VI for the user ID parameter looks like…

User ID Read

…and the write VI…

User ID Write

I next realize that the two main subclasses also have some data that will need to be held in common for their subclasses. So I repeat the process I just used to store a file path for the file subclass and the connection string that the db subclass needs. Here’s what the project looks like now.

project with basic infrastructure

Finally, before moving on we are going to need a way to initialize all the logic we have created, and to do that we will take our first foray into the exciting — though sometimes confusing — world of creating dynamic dispatch VIs. The goal is to create a method called Initialize New that causes the DVR in a new instance of a class to automatically initialize itself.

I start by right clicking on the _protected virtual folder in Config Data.lvclass and from the New sub menu selecting the option to create a new VI using the dynamic dispatch template. I leave the front panel of the resulting VI the way it is, but I change the connector pane, edit the icon and add this code to the block diagram.

Initialize top-level dvr

If the DVR in the class data is not valid, I call the DVR VI to get a valid reference and then use that reference to populate the class data. If the DVR reference is valid, the false case (not shown) does nothing. When I save this VI, I put it in a subdirectory named _protected.

To create the subclass versions of this method, I right-click on the subclass name and from the New sub menu select the item to create a new VI for override — which is the technical term for what we are doing. We are overriding the parent functionality with different functionality in the child.

However, before we get a new VI, LabVIEW opens a dialog to ask us which parent method we want to override. After double clicking on Initialize New we get our new VI, also called Initialize New. After saving this new VI (the default location LabVIEW picks is perfect), I modify the code to look like this:

initialize new - child

Most of this logic should be familiar because it is the same as we did for the parent. If the child DVR reference is not valid, we initialize it. Otherwise, we do nothing. But what is that funny looking VI in front of the initialization logic?

Object-oriented methodology recognizes that there are going to be times when a method in a child class will need to do what the parent method does, but perhaps a bit more, or maybe do it a bit differently. One solution would be to simply duplicate all the parent code in the child, but that approach would be wasteful. What object-oriented methodology does instead is allow a child to directly call its parent’s version of the method. So here, the code first calls the parent’s version of the initialization VI and then executes the logic to initialize itself. In the end, both the parent and the child get initialized.
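For those who think better in text, Python’s super() plays roughly the role of that parent-call node. A minimal sketch, with hypothetical class names:

```python
class ConfigData:
    def initialize_new(self) -> None:
        # parent's version: set up the top-level shared data
        self._store = {"user_name": "", "password": ""}

class FileConfig(ConfigData):
    def initialize_new(self) -> None:
        # super() stands in for calling the parent's method:
        # run the parent's initialization first...
        super().initialize_new()
        # ...then initialize the data specific to this child
        self._file_store = {"file_path": ""}
```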

Creating Objects

The time is now upon us to start tying this infrastructure together into an organized system. The first thing to sort out is how to create an object of a given type. One way is very similar to what you would do with a conventional datatype. If you wanted to create, say, an I32 value, you would drop down a constant and start using it. In the same way, you can also drop down a class constant and wire to it, thus creating an object. The problem with this approach is that when LabVIEW instantiates a class it also loads into memory all the VIs associated with the class. What you can end up with is a situation where all the VIs for all the classes are loaded, even if you will never use some of the classes. The way to get around that problem is to load classes dynamically as you need them. This is the code I use to perform that operation.

create object dynamically

You will notice that the VI has no inputs, save the requisite error cluster. This is because the basic piece of information specifying the class to be created will be loaded from the application INI file. But why the INI file? Isn’t the point of this exercise to get rid of configuration data in that file? Well yes, but there is a bit of a paradox at work here. Simply put, an application can’t look in a database to find out whether it should be getting its setup data from a database. That basic piece of information has to be stored somewhere that will always be there, and on the Windows platform you have exactly two choices: the INI file and the Windows registry. Of the two, the INI file is much safer — you at least don’t have to worry about a user going wild and trashing their whole computer.

We are initially only going to support two options for managing configuration data, so we only need two settings (Text and Jet). The VI that communicates with the INI file reads one of these values from the Configuration Data key in the Data Management section and returns two values based on what it finds there: a relative path to the location of a class file, and the name of the file. Thanks to the naming convention we use, these two values are very closely related.

The remaining code loads the target class into memory and initializes it. In addition, the resulting object is buffered as in a FGV. For the details of how this process works, see the comments in the code. The final thing to notice is that the output from this VI is an indicator of the Config Data class datatype.
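If it helps to see the shape of this logic in text, here is a rough Python sketch of the same idea: read a setting, use a naming convention to locate the class, load it dynamically, initialize it once, and buffer the result. The file, module and class names here are all made up for illustration:

```python
import configparser
import importlib

_config_object = None   # buffered, FGV-style, so every caller gets the same object

def create_config_object(ini_path: str = "app.ini"):
    """Read which class to use from the INI file, load it dynamically,
    initialize it once, and buffer the result for later callers."""
    global _config_object
    if _config_object is None:
        ini = configparser.ConfigParser()
        ini.read(ini_path)
        # the Configuration Data key in the Data Management section
        kind = ini.get("Data Management", "Configuration Data")  # "Text" or "Jet"
        # naming convention: the setting maps directly to module and class names
        module = importlib.import_module(f"config_{kind.lower()}")
        _config_object = getattr(module, f"{kind}Config")()
        _config_object.initialize_new()
    return _config_object
```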

Building Out the Remaining Methods

All that’s left now is to implement the code that does what the testbed needs done. However, this post is getting long and I have already passed along a lot of information for you to absorb — so we’ll leave that discussion (and the testing!) for next week.

Until Next Time…

Mike…

How to Make Dynamic SubVIs

The last two posts discussed different ways to use dynamic linking and calling in situations where you want the target VI to run in parallel with the rest of the application. Another major use case for dynamic calling is to create software plugins.

I use the term “plugin architecture” to indicate a technique that strives to simplify code by facilitating runtime changes to limited portions of the code while allowing the basic logic for the function as a whole to remain intact and unchanged. For example, say you are implementing a control algorithm that has, for the sake of argument, two inputs: a position and a load. As long as the readings are accurate, timely and in the proper units, most of the control algorithm doesn’t care how these inputs are measured. It would, therefore, simplify the code base as a whole to incorporate a technique that allows the reuse of all the common code by simply “plugging in” different acquisition modules that support the various types of sensors that can measure position and load.

This isn’t rocket science

This goal might sound lofty, but the hurdle for getting into it is actually pretty low. In fact, when I teach the LabVIEW Core 1 and 2 week, I will sometimes introduce what LabVIEW can do by demonstrating a simple plugin architecture that only uses concepts introduced and discussed in those two classes.

I start the demo by opening an application that I have written to implement a simple calculator. It has two numeric inputs labeled “X” and “Y” and an indicator labeled “Output”. The only other control is a popup menu that lists the two math operations it can perform: addition and subtraction. I first show that you can use the simple program to add and subtract numbers.

I then comment that it would be nice if my program could multiply numbers as well. So I drag the program window to one side but leave it running. I then open a new window and add two inputs and an output (labeled like the program’s). On the block diagram I drop down a multiplier, wire it up, and save the VI with the name Multiply.vi. After closing that VI’s front panel, I tell the class that although they just watched me create the ability to multiply numbers, the program, which is already running, has already acquired it. At this point, I pull the program front panel back over and show that the menu, which only moments before said “Add” and “Subtract”, now has a third option: “Multiply”. Moreover, the new selection does indeed allow me to multiply numbers. Finally, I make the point that this capability to dynamically expand and change functionality was created using nothing but things that they will be learning in the coming week.

While the students at this point might not be cheering, they are awake and have some motivation to learn. You better believe that when I show them the code Friday afternoon and they can recognize how the program works, they are excited. So let’s see what I can do now to motivate and excite you…

It’s all about the packaging

As we begin to get into the following use cases you should notice that the actual code needed to implement each solution is not really very complicated. In fact, in the following sections we will be dealing with many of the same functions as when we were dynamically launching separate processes. The tricky part here is that we need to fit all the data management and launch logic into a space that would otherwise be occupied by a single VI. For this reason we need to be very careful when packaging this code. To demonstrate what I mean, let’s consider three common use cases that all call a simple test VI (called, amazingly enough, Test.vi) that simply returns a random number. I have ordered these cases such that each example builds on what went before:

System Initialization

I like to start here when explaining these concepts because it is the simplest structure and so is easy to understand.

System Initialization

Here you see laid out all the basic pieces required to dynamically call a subVI. The first VI (the one marked “Dummy”) is the one we will use throughout this post to represent the data management needed to get the path to the VI that will be called dynamically. Depending upon your application this could come from an INI file, a database, or what have you. We won’t go into a lot of detail with that VI because we have discussed the various techniques pretty thoroughly in other posts.

You should recognize the next node (Open VI Reference) as we have already used it a lot. It opens a reference to the VI specified in the input VI name or path. This action also has the effect of loading the VI into memory. And just as statically linked subVIs can be reentrant, you can specify reentrancy here as well, if needed.

Once the VI is open, we pass its reference to a node that looks a lot like the Start Asynchronous Call we have already used. This node is Call by Reference and like the prior one it expects a strict VI reference and in return provides a connector pane for passing data to the target. However, unlike the asynchronous call, this node always waits until the target finishes executing before continuing, hence its representation of the target’s connector pane also allows us to get data from the target.

Finally, in order to remove the target VI from memory, we need to close the reference when we are done with it — which is what the last node does.

When using this technique it is important to keep in mind that, because it loads, executes and unloads the VI all at once, it is very inefficient when used in a situation where it will be run multiple times. Sort of like trying to use an Express VI inside a loop — but worse.

However, one place where this technique can be used very effectively is in something like system initialization. It’s not uncommon for large and complex applications to require an equally large and complex initialization process, especially the first time they are run. Although you could just write a VI to do this initialization and install it in the program, why should you burden your program with a bunch of code that may only execute one time — ever? This technique allows you to load and execute a VI only if you actually need it.
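As a rough text-language analogy (with Python’s importlib standing in for the VI Server nodes), the one-shot pattern looks something like this; the module and function names are hypothetical:

```python
import importlib
import sys

def call_once(module_name: str, func_name: str, *args):
    """Load, execute, unload -- all at once. Wasteful inside a loop,
    but a good fit for run-once jobs like first-time initialization."""
    module = importlib.import_module(module_name)   # "Open VI Reference"
    try:
        return getattr(module, func_name)(*args)    # "Call by Reference"
    finally:
        sys.modules.pop(module_name, None)          # "Close Reference"

# e.g. call_once("first_run_setup", "initialize")  -- names invented here
```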

Function Substitution

This use case is perhaps the most common of the three. It uses the same nodes we just saw, but because it can be used essentially anywhere, the packaging has to be very efficient. A solution I often use is to create a mini state-machine that has states for loading, executing and unloading the target VI, as well as one more for deciding what to do first. The user interface has one (enumerated) control that allows you to specify whether you want to Load the target, Run the target, or Unload the target.

The inputs are pretty simple, and the state-machine logic mirrors that simplicity. If the function requested is Load, the Startup state transitions directly to the Open VI state which opens and buffers the VI reference. Here is the code for these two states:

Function substitution - Startup - Load

Function substitution - Open VI

If the function requested is Run (the input’s default value, by the way), the Startup state first checks to see if the VI reference is valid and if it is, transitions to the Execute state.

Function substitution - Startup - Run

Function substitution - Execute

If the reference is not valid, execution instead continues with the Open VI state which returns to the Execute state after opening the reference.

Finally, if the function requested is Unload, the Startup state transitions directly to the Close VI state:

Function substitution - Startup - Unload

Function substitution - Close VI

The result of all this work is a flexible VI that can be used in a variety of different ways depending upon system requirements. For example, if the target VI loads quickly or the calling process doesn’t have tight timing constraints, you can just install it and let the first call both load the VI into memory and run it. Alternately, if you don’t want the first call to be delayed by that initial load, you can call it once in a non-time-critical part of the code to just Load the target, and then Run it as many times as you like later.

In the same way, the Unload function, which normally isn’t needed, can be used to release the memory resources used by a large dynamic subVI when you know that you aren’t going to be needing it for a while.
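Here is how the same Load/Run/Unload logic might be sketched in Python, with a cached function reference standing in for the buffered VI reference. This is an analogy under stated assumptions, not the LabVIEW implementation:

```python
import importlib
import sys
from enum import Enum, auto

class Function(Enum):
    LOAD = auto()
    RUN = auto()        # the default function
    UNLOAD = auto()

class DynamicSubVI:
    """Mirrors the mini state machine above: Run checks for a buffered
    reference and loads the target first if there isn't one."""
    def __init__(self, module_name: str, func_name: str):
        self._module_name = module_name
        self._func_name = func_name
        self._target = None                 # the buffered "VI reference"

    def call(self, function: Function = Function.RUN, *args):
        if function is Function.LOAD:       # Startup -> Open VI
            self._load()
        elif function is Function.RUN:      # Startup -> (Open VI) -> Execute
            if self._target is None:
                self._load()
            return self._target(*args)
        elif function is Function.UNLOAD:   # Startup -> Close VI
            self._target = None
            sys.modules.pop(self._module_name, None)

    def _load(self):
        module = importlib.import_module(self._module_name)
        self._target = getattr(module, self._func_name)

# usage sketch (names invented):
# target = DynamicSubVI("test", "random_number")
# target.call(Function.LOAD)    # preload in a non-critical spot
# value = target.call()         # later calls run without the load delay
# target.call(Function.UNLOAD)  # release it when done
```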

This approach could even be extended to create a simple test executive that dynamically loads and unloads whole sets of VIs. In such a situation, though, you probably don’t want to be tied to a single connector pane, so you should consider changing the VI reference to non-strict and modifying the Execute state to use our old friend the Run VI method like so.

Function substitution - Execute - Run VI Method

Interprocess Communication

This use case is in many ways the most expansive of the three because it supports communications between different processes regardless of their physical location. In terms of code, there is very little difference between this case and the previous one. In fact, the only change that is really required is to the Load state. This is what the network-enabled version looks like:

Function substitution - Open VI - IPC

That new icon in front of Open VI Reference is the one that makes the magic happen. Its name is Open Application Reference and its job is to return a reference to the instance of LabVIEW that is hosting the VI you want to access. To make this connection, you need to know the host name or IP address of the computer running the LabVIEW application, and the port number the instance is using for incoming connections. If the application you want to access is on your own computer, you can either leave the host name string empty or use the name localhost. The port can be any number that isn’t already being used. One common mistake people make is to simply accept the default value, which is the port that LabVIEW listens to by default. This causes problems when they go to test their application because now there are two processes (the LabVIEW development environment and their application) trying to use the same port.

An important point to remember is that while the Call by Reference node passes data back and forth, the called VI actually executes on the remote system. Hence, this sort of operation works independently of the target system’s operating system, or even the version of LabVIEW it is running.

The main task in setting up a system to use this technique is properly configuring the application to which you are going to be connecting — though it often doesn’t require any code changes. This amazing benefit is possible because the underlying networking is handled by the runtime engine, not the application code you write. I have on a couple of occasions come into a shop and added remote control functionality to an old application that was not even designed with that capability in mind.

You will, however, have to make changes to the target application’s INI file to enable networking, and you will obviously need access to the application’s source code so you know what VIs to call. Likewise, if you are accessing a computer outside your local network there can be a variety of network communications details to sort out that are beyond the scope of this post. One other thing to bear in mind is that it is always easier to connect to a remote VI if it is already in memory. The reason is that if the VI is already in memory all you need to know is the VI’s name. If you are also loading it into memory, you have to know the complete path — the notation for which can change between platforms.

But assuming you establish the connection, what exactly can you do with it? The answer to that lies in what VIs you choose to execute from the remote process. If you execute a VI that fires an event, that event gets fired on the remote system, so you can use it as a channel for controlling that application.

Alternately, say you choose to execute a VI that is a functional global variable (FGV). Depending on whether you are reading from or writing to the FGV, you are now either passing data to the remote system, or collecting data from it. By the way, this is the method I still typically use for passing data over a network between LabVIEW applications — not network-enabled shared variables. Unlike that later “enhancement”, the dynamic calling of a FGV doesn’t need to be “deployed”, is more memory efficient, has a smaller code footprint, is far easier to troubleshoot if there is a problem, and works across all releases of LabVIEW back to Version 6.
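There is no direct text equivalent of calling a VI in a remote LabVIEW instance, but the shape of the idea — invoke a named function in another process, on a host and port you choose, with data passing both ways — can be sketched with Python’s standard xmlrpc modules. The host, port, and function names here are all invented:

```python
# Server side (the "target application"): expose FGV-like read/write functions
from xmlrpc.server import SimpleXMLRPCServer

_reading = 0.0

def write_reading(value):
    global _reading
    _reading = value
    return True          # xmlrpc requires a non-None return value

def read_reading():
    return _reading

server = SimpleXMLRPCServer(("localhost", 8000))  # pick an unused port
server.register_function(write_reading)
server.register_function(read_reading)
server.serve_forever()
```

And the calling application connects by host name and port, then calls the functions by name:

```python
from xmlrpc.client import ServerProxy

remote = ServerProxy("http://localhost:8000")
remote.write_reading(72.4)      # push data to the remote process
print(remote.read_reading())    # pull data back from it
```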

The only real limit to what you can do with this technique is your imagination.

So there are the three major use cases for the dynamic linking of subVIs. This discussion is by no means a complete consideration of the topic, but hopefully it will whet your appetite to experiment a bit. The link below is to an archive containing all three examples to get you started.

Dynamic Linking Examples

Next time: Command Line Parameters are not a relic of the past.

If you have spent any time poking into some of the more esoteric corners of the application builder, you may have noticed a checkbox in the Advanced section labeled, “Pass all command line arguments to application”, and wondered what that is all about. Well, wonder no more, that little checkbox is what we are going to discuss next time out — and while we’re at it I’ll cover a really neat use for it.

Until next time…

Mike…

Raising the Bar on Dynamic Linking Even Further

Important: Before we get started this week, if any of you have downloaded the code from last week and have had problems with it, please update your working copy with what is currently in SVN. In working on this post, I found some “issues” — including the significant problem that the database that defines everything didn’t get included in the repository release. The current contents of the repository should fix the problems, and I am sorry for any hair loss and/or gnashing of teeth these problems might have caused.

Also, be sure that if you plan to run this code in the development environment that you name the directory where it is located “testbed”.

These days, the most common way of implementing the dynamic calling of VIs in LabVIEW is through the Start Asynchronous Call node. In addition to being efficient, this technique is also very convenient in terms of passing data to the VI being called. Because it replicates the VI’s connector pane on the node, all you have to do is wire to terminals. However, this convenience comes at a price. For this type of call to work, you have to know ahead of time what the connector pane of the VI being called looks like. This constraint can be a problem because it is not uncommon for situations to arise where you want to dynamically call code whose connector pane varies, is irrelevant (because no data is being passed), or is simply unknown. As you should by now be coming to expect, LabVIEW has you covered for those situations as well.

Where There’s a Will, There’s a Way Method

Over the years as LabVIEW developed as a language, its inherent object orientation began to become more obvious. For example, when VI Server was introduced it provided a very structured way of interacting with various objects within LabVIEW, as well as LabVIEW itself. With VI Server you can control where things appear on the front panel, how they look and even how LabVIEW itself operates. Although it didn’t reach its full expression until the Scripting API was released, the potential even existed to create LabVIEW code that wrote LabVIEW code.

We won’t be needing anything that complex, however, to accomplish our goal of dynamically launching a VI where we don’t have advance knowledge of its connector pane. In fact the part of VI Server that we will be looking at here is one of the oldest and most stable — the VI object interface. Just as you can get references to front panel controls, indicators and decorations, you can also get references to VIs as a whole.

Like control references, VI references come in two basic forms: strict and non-strict. To recap, a strict control reference contains added information that allows it to represent a particular instance of the given type of control. For example, a strict cluster control reference knows the structure, or datatype, of a particular cluster. By contrast, a non-strict cluster reference knows the control is a cluster, but can’t directly tell you what the various items are that make up the cluster.

In the same way, strict VI references, like we have been using to dynamically launch VIs, know a great deal about a specific class of VI, including the structure of its connector pane. This is why the Start Asynchronous Call node can show the connector pane of the target VI. However, as stated earlier, this nice feature only works with VIs that have connector panes exactly matching the prototype. As you might suspect, the solution to this problem is to use a non-strict VI reference, but that means we need to change our approach to dynamic launching a bit. Instead of using a special node, we’ll use standard VI Server methods to interact with and run VIs.

Mix and Match

To see how this discussion applies to our testbed code base, consider that to this point we have used a single technique to launch all the processes associated with the application. Of course, making that approach work required one teeny tiny hack. Remember when we added to the data source processes an input that tells them “who they are”? Well, that modification necessitated a change to the VIs’ connector panes, and because we were launching all the processes the same way, I had to make the same change to all the processes — even those that didn’t need the added input, like the GUI and the exception handler.

So big deal, right? It was only one control, and it only affected 2 VIs. Well, maybe in this case it isn’t a huge issue, but what if it weren’t 2 VIs that needed to be changed, but 5 or 6? Or what if all the various processes needed different things to allow them to initialize themselves? Now we have a problem.

The first step to address this situation was actually taken some time ago when the launcher was designed to support more than one launch methodology. You’ll remember last week when creating the dynamically launched clones, we didn’t have to modify the launcher because it was written from day one to support reentrant VIs. What we have to do now is expand on this existing ability to mix and match VIs with launch methodologies to include two new options in the Process Type.ctl typedef. Here’s what the code for the first addition looks like:

run method - nonreentrant

As before, we start by opening a reference to the target VI, but this time it’s a non-strict reference. Next, we invoke the Run VI method, which has two inputs. The first input specifies whether we want to wait for the target VI to finish executing before continuing, and we set it to false. The second parameter is somewhat obscurely named Auto Dispose Ref. What it does is specify what is to be done with the VI reference after the VI is launched. In its default state (false) the calling VI retains the reference and is responsible for closing it to remove the target VI from memory. In addition, if the calling VI retains the reference to the dynamic VI, when the caller quits, the dynamic VI is also aborted and removed from memory. On the other hand, when this input is set to true, the reference is handed off to the target VI, so the reference doesn’t close until the dynamic VI quits — which is what we want.

The other new launch option is like the first, except it wires in the option constant that tells Open VI Reference to open a reference to a reentrant clone. Other than that, it works exactly the same.

run method - reentrant

So with these two new launch methodologies created, all we have to do is change the database configuration for the GUI and Exception Handler processes to use the nonreentrant version of the Run VI method and we are done, right? Well, not quite…

One of the “quirks” of the Run VI method is that although it does start a VI executing, if that VI is configured to open its front panel when run (like our GUI is), the open operation never gets triggered and the front panel stays closed. The result is that the VI will be loaded and running; you just won’t be able to see it.

To compensate for this effect (and the corresponding effect that the front panel won’t automatically close when the VI finishes), we need to add to the GUI a couple of VIs from the toolbox that manage the opening and closing of the GUI’s front panel.

open front panel

That’s the opener there, the last one in line after all the initialization code. This placement is important because it means that nearly all the interface initialization will be completed before the front panel opens. The result is much more professional looking. By the way, this improved appearance is why I rarely use the option to automatically open a VI’s front panel when it is run.

close the front panel

And here is the closer. The input parameter forces the front panel closed in the runtime, but allows it to stay open during development — a helpful feature if there was an error.

Where do we go from here?

So those are the basics of this technique, but there is one more point that needs to be covered. Earlier I talked about flexibility in passing data, so how do you pass data with this API? Well, we ran the VI using a method, so as you would expect, there are other methods that allow you to read or set the values of front panel controls. This is what the interface to the Control Value Set method looks like.

the set control value method

It has two input parameters: a string that is the label of the control you want to manipulate, and a variant that accepts the control’s new data value. Note that because LabVIEW has no way of knowing a priori what the datatype should be, you can get a runtime error here if you pass an incorrect datatype. Obviously, using this method your code can only set one control value at a time, so unless you only have 1 or 2 controls that you know you will need to set, this method will often end up inside a loop like so:

set control value in a loop

…but this brings up an interesting, and perhaps exciting, idea. Where can we get that array of control name and value pairs? Would it not be a simple process to create tables in our database to hold this information? And having done that, would you not have created a system that is supremely (yet simply) reconfigurable? This technique also works well with processes that don’t need any input parameters set. The loop for configuring control values passes the VI reference and error cluster through on shift registers and auto-indexes on the array of control name/value pairs. Consequently, if a given VI has no input parameters, the array will be empty and the loop will execute 0 times — effectively bypassing the loop with no added logic needed. By the way, this is an important principle to bear in mind at all times: whenever possible, avoid “special cases” that have to be managed by case structures or other artificial constructs.
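In text form, the zero-iteration behavior is easy to see: an empty list of name/value pairs simply means the loop body never runs. Here setattr is only a stand-in for the Control Value Set method, and the example pairs are invented:

```python
def set_control_values(target, pairs):
    """Apply a list of (name, value) pairs, as in the loop above. An empty
    list runs zero iterations, so no special case is needed for targets
    that take no parameters."""
    for name, value in pairs:
        setattr(target, name, value)   # stand-in for Control Value Set

# the pairs could come straight from a database table, e.g.:
# set_control_values(process, [("Sample Rate", 1000.0), ("Channel Count", 4)])
```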

More to Come

Over two consecutive posts, we have now covered two techniques addressing a major use case for dynamic linking: VIs that will run as separate processes. But there is another large use case that we will look at the next time we get together: How do you dynamically link code that isn’t a separate process, but logically is a subVI?

Testbed Application — Release 12

Toolbox — Release 6

Until next time…

Mike…

Taking Flexibility to the Next Level — Dynamic Linking

When people start using LabVIEW they always find fascinating the way it allows them to assemble complex functionality from simple building blocks, called subVIs. However, there are a variety of use cases where — for very legitimate reasons — the usual technique of interconnecting subVIs with wires just won’t do. One thing that many of these use cases have in common is the desire to put off until runtime the decision of what code we want the program to execute next. Noting the power of being able to add functionality to our computers by plugging in a new card or USB dongle, a common wish is to find a way to do the same thing in software, a way to create software plugins.

In fact the potential benefits of this sort of structure were so great that in the early days of LabVIEW’s existence, developers would go to absurd lengths to realize even a small bit of it. For example, the only way originally of creating software plugins was to take advantage of the fact that LabVIEW links to subVIs by name and manually change the names of files to select the one you wanted to use before opening the top-level VI. If you’re wondering how that impacted building an application, that wasn’t a problem — there was no application builder!

Thankfully, today we have many different options for creating a true plugin architecture through dynamically calling and linking VIs. To explore those options, over the next few posts I’m going to consider a few of the basic use cases and demonstrate the technique that serves best for each one.

What We Have Already Seen

A good place to start this exploration is a use case with which we are already familiar — the one embodied in our testbed application. When I originally presented the idea of structuring the program as a group of independent processes that were dynamically launched, I spoke about it in terms of promoting a kind of modularization that simplified the code, made the application as a whole more robust, and fostered reusability. While all that is true, there is another side to that coin. By providing you with the ability to select at runtime the capabilities of the software, it opens the door for dramatically improved scalability. To see how that would work, consider the following scenario.

The most recent change to the testbed was to create a state-machine temperature controller that was (as I said in the post) only suitable for controlling the temperature of a “…hen-house, dog house or out house…”. But let’s say you have more than one hen-house, dog house or out house that needs temperature control. How should we handle that? One solution that you often see used is to simply duplicate that one VI, but then what if the number of “houses” changes? You could be left in the position of constantly modifying the application to add or subtract resources. Let me show you a better way: a way to reuse the exact same VI as many (or as few) times as might be needed, but without having to modify the code to change the number of instances created. The trick is to make the code reentrant.

The basic idea behind a reentrant VI is that each time it is called in a program, LabVIEW reuses the same code, but gives each instance its own memory space. The result is the same as if you had multiple copies of the same VI. This basic operating principle is the same regardless of whether the reentrant VI is linked into your program statically, or is being called dynamically. When making a call to a reentrant VI, the resulting instance running in memory is called a clone. It’s important to remember that while all the clones of a given VI will use the same common code, they each have (in addition to their own memory space) their own block diagram for debugging and front panel for user interaction. In some ways it’s as though you are dynamically creating new VIs out of thin air!

Bring in the clones…

So what do you need to do to embed this magic in your program, you ask? Well, perhaps, not as much as you would think…

Reentrancy

The first thing, obviously, is that in order to be cloneable, the VI must be reentrant. But for many people, this feature can at first be confusing and more than a bit intimidating (I know it was for me). The challenge from the perspective of LabVIEW is that, unlike most languages, LabVIEW actually supports two kinds of reentrancy. The easy one is the kind that creates “Preallocated” clones; it works the way you see reentrancy described if you look up the term online. For many, the confusing part is the other kind of reentrancy, the “Shared” clones. I know the first time I saw the term, it struck me as an oxymoron of the first order. As I understood reentrancy, the whole point of reentrant execution was that clones used preallocated memory spaces that didn’t get shared. So what are shared clones all about? To answer that, we need to consider how LabVIEW execution works.

Normally, VI execution is blocking: two instances of the same VI cannot run at the same time because they share the same memory space. However, because they have their own memory spaces, multiple instances of reentrant VIs can execute in parallel. Consequently, when writing very low-level VIs that are going to be used in many, many places throughout your code, you would like to make them reentrant so the instances are not all blocking each other. However, if you did that willy-nilly, you could develop a problem: memory. You could end up preallocating a lot of memory for dozens or even hundreds of instances that don’t actually run very often, or perhaps ever. To address that problem, LabVIEW created the concept of shared reentrancy, which splits the difference between nonreentrant execution on the one hand and normal reentrant execution (which LabVIEW now calls preallocated) on the other.

Shared cloning creates a pool of a predefined number of sharable memory spaces that the reentrant code can use. Whenever a particular reentrant function is called, LabVIEW goes to its pool of memory spaces and uses one that isn’t busy right then. If all the copies in the pool are busy, LabVIEW creates a new memory space for the VI, adds it to the pool and then uses it. All in all, it’s a pretty nice feature that we will see the need to use later…
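For the textually inclined, here is a rough Python sketch of that shared-clone idea: a pool of preallocated state blocks that callers borrow, where the pool only grows when every existing block is busy. The class and parameter names are purely illustrative:

```python
import threading

class SharedClonePool:
    """Sketch of shared-clone behavior: borrow an idle state block,
    and allocate a new one only when all existing blocks are busy."""
    def __init__(self, make_state, preallocate: int = 2):
        self._make_state = make_state
        self._free = [make_state() for _ in range(preallocate)]
        self._lock = threading.Lock()

    def acquire(self):
        with self._lock:
            if self._free:
                return self._free.pop()   # reuse an idle memory space
            return self._make_state()     # all busy: grow the pool

    def release(self, state) -> None:
        with self._lock:
            self._free.append(state)      # make it available again
```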

Parameterized Identity

The next thing clones need is a way to tell themselves apart. As a case in point, you can have 3 houses and 3 clones of our control process, but if all 3 clones try to control the same house, you still haven’t accomplished anything useful. Clone 1 has to be able to tell that it is responsible for house 1. Likewise, clone 2 has to know that it is handling house 2, and clone 3 must direct its efforts to controlling house 3. The way to accomplish this assignment is through a process called parameterization: passing to the clone an input parameter that it can then use to obtain the configuration information it needs to operate.

This area is one of those places where having an adequate data management infrastructure in place (read: database) really pays big dividends. Ideally, you should be able to provide a clone with an identity through a single value, like a name. Other times, the various clones may be controlling specific test systems, so perhaps all you would need is the ID number of the system assigned to the clone. Regardless of the exact nature of that identifying value, with a database in place the clone can use that name or number to query the database for the specific setup information that it needs.

Functional Completeness

A good clone should be functionally complete. By that, I mean that a clone needs to be a complete package unto itself, not needing a lot of detailed external interactions. A few high-level controls are fine, but you don’t want to expose too much detail.

As an example of what I mean, consider the following situation from life. Say, you have a new employee in your department that needs to fill out some sort of form from HR. While it might be reasonable to explain what specific information is needed in one field or another, you shouldn’t have to take the time to explain that the side of the form to fill out is the one with the printing on it. You also shouldn’t have to describe the color of ink to use or the hand in which the person should hold the pen — or for that matter, that they should hold the pen in their hand and not between their toes! The only instruction that should be needed is, “Go fill out this form.”

Likewise with a clone, you should be able to say things like, “This is the code that does the out house temperature control”, or, “This is the process that interfaces with the database.” A good way of judging how well you are doing in meeting this “completeness” goal is to consider whether you can change the details of what the clone does without impacting the rest of the application.

A principle that can be supportive of this sort of completeness is to make sure the clone’s code isn’t burdened with extraneous functionality like extensive GUIs. Remember that adding in nonessential functionality limits the applicability of that code to only those situations where that added functionality is needed. As a practical matter, you particularly don’t want clones to have the ability to open dialog boxes. In case you aren’t clear on the reasoning behind that injunction, consider the pandemonium that could result if you have 15 or 20 clones running and something happens that causes all of them to start opening dialog boxes simultaneously.

Managing Common Resources

Finally, because you can potentially have a large number of clones running at the same time, you need to carefully consider parallel access to common resources. Interfaces that might work well for 2 or 3 clones could run into bandwidth problems when there are a couple dozen. Moreover, as the number of simultaneous accesses increase, so does the potential for access conflicts and race conditions. For example, I can’t tell you how many times I have seen systems crash and burn over something as simple as two processes trying to write to the same file at the same time.

If you have a common resource that could become a bottleneck or produce a race condition, a solution that often works well is to create a separate process with the sole responsibility of managing that resource.

Cloning our Temperature Controller

So what do we need to do in order to clone our temperature controller? We know that in order to be cloned, the VI needs to be reentrant, so Step One is to turn that feature on by going to the Execution page of the VI Properties and selecting the option for Preallocated Clones.

reentrant setting

That one setting handles the top-level, but what about the subVIs? They could still block each other and prevent the cloned controllers from running unobstructed. To prevent that possibility, you should go through the subVIs looking for routines that are called regularly, and make them shared clones. In the current code, there is one exception to this advice — and we’ll cover it next.

With the applicable code set to reentrant, we need to consider the identity issue. That matter is, to an extent, already being handled in the existing code because even nonreentrant VIs sometimes need to be told who they are. All we need to do is expand our usage of the My Name label we are already passing to the dynamically launched VIs by using it to look up the configuration data for each clone.

The nonreentrant version of our temperature controller state-machine already had a VI for looking up the state-machine setups in the database; all we have to do is modify it so it accepts the My Name value as an input. Here is the modified version of the look-up VI.

load state machine settings

This is the VI that I said earlier we do not want to make a shared clone, and there are two reasons. First, the VI only gets called once, so there would be no benefit. Second, when you consider what the VI is designed to do, making it a shared clone would actually be counterproductive. See the shift registers? They are there to buffer the results of the queries to reduce the number of database accesses that the application has to make. In other words, they are there specifically to create a shared memory space, so making the VI a clone would defeat that purpose.

Next we need to address the dialog box in the data saving routine. It really isn’t a good idea to give users the responsibility to correctly save data files anyway, so let’s change it.

modified save data

This modification saves the file with a dynamically generated name to a predefined location, in this case the “Documents” directory associated with the current Windows user. Note that part of the file name is a timestamp consisting of the year, month, day, hour, minute and second the file was created. I like this filename format because (assuming a 4-digit number for the year, 2 digits for everything else, and a 24-hour clock for the hours) files named in this way will always sort by name in correct chronological order.
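For what it’s worth, here is a quick Python sketch of that naming scheme; the base name, extension and destination directory are just examples:

```python
from datetime import datetime
from pathlib import Path

def data_file_path() -> Path:
    # 4-digit year, 2 digits for everything else, 24-hour clock,
    # e.g. 20250131143059 -- so file names sort chronologically
    stamp = datetime.now().strftime("%Y%m%d%H%M%S")
    return Path.home() / "Documents" / f"data_{stamp}.csv"
```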

Beyond that we are pretty much done. The existing code was already concise and had no user interface to speak of — just a few controls that we would want to have for troubleshooting purposes anyway. The only other thing needed is to define the clones that we want in the database. When you run the modified code, you will see three clones with differing update rates — but you can create as many as you want by simply adding them (and their setup parameters) to the database.

We are almost done for now, so hopefully you see that in some ways dynamic linking isn’t what this post is really about. Oh, I have presented a little bit of technique and a few code modifications, but the real lesson is that expanding the reach of your code in this way is not hard, as long as the code you are working with is well-designed from the beginning. Often you will hear people talking about all the modifications they had to make in order to make their code cloneable, but if you really look at it you see that most of those modifications were in actuality fixing mistakes that should never have been made in the first place.

What Now

This should be enough for everybody to digest for now, so next week we’ll look at the other way to launch dynamic VIs — and why you would want to use it. (Hint: It addresses a problem that right now you don’t even know you have.)

Testbed Application — Release 11

Toolbox — Release 5

Until next time…

Mike…