Taking Flexibility to the Next Level — Dynamic Linking

When people start using LabVIEW, they are always fascinated by the way it lets them assemble complex functionality from simple building blocks called subVIs. However, there are a variety of use cases where, for very legitimate reasons, the usual technique of interconnecting subVIs with wires just won't do. One thing many of these use cases have in common is the desire to put off until runtime the decision of what code the program should execute next. Noting how easily we can add functionality to our computers by plugging in a new card or USB dongle, a common wish is to find a way to do the same thing in software, a way to create software plugins.

In fact, the potential benefits of this sort of structure were so great that in the early days of LabVIEW's existence, developers would go to absurd lengths to realize even a small bit of it. For example, originally the only way to create software plugins was to exploit the fact that LabVIEW links to subVIs by name, and manually rename files to select the one you wanted before opening the top-level VI. If you're wondering how that impacted building an application, it wasn't a problem: there was no application builder!

Thankfully, today we have many different options for creating a true plugin architecture by dynamically calling and linking VIs. To explore those options, over the next few posts I'm going to consider a few of the basic use cases and demonstrate the technique that serves each one best.

What We Have Already Seen

A good place to start this exploration is a use case with which we are already familiar — the one embodied in our testbed application. When I originally presented the idea of structuring the program as a group of independent processes that were dynamically launched, I spoke about it in terms of promoting a kind of modularization that simplified the code, made the application as a whole more robust, and fostered reusability. While all that is true, there is another side to that coin. By providing you with the ability to select at runtime the capabilities of the software, it opens the door for dramatically improved scalability. To see how that would work, consider the following scenario.

The most recent change to the testbed was to create a state-machine temperature controller that was (as I said in the post) only suitable for controlling the temperature of a "…hen-house, dog house or out house…". But let's say you have more than one hen-house, dog house or out house that needs temperature control. How should we handle that? One solution you often see is to simply duplicate that one VI, but then what happens if the number of "houses" changes? You could be left constantly modifying the application to add or subtract resources. Let me show you a better way: a way to reuse the exact same VI as many (or as few) times as needed, without having to modify the code to change the number of instances created. The trick is to make the code reentrant.

The basic idea behind a reentrant VI is that each time it is called in a program, LabVIEW reuses the same code, but gives each instance its own memory space. The result is the same as if you had multiple copies of the same VI. This basic operating principle is the same regardless of whether the reentrant VI is linked into your program statically, or is being called dynamically. When making a call to a reentrant VI, the resulting instance running in memory is called a clone. It’s important to remember that while all the clones of a given VI will use the same common code, they each have (in addition to their own memory space) their own block diagram for debugging and front panel for user interaction. In some ways it’s as though you are dynamically creating new VIs out of thin air!
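LabVIEW is, of course, graphical, so I can't show the mechanism itself in text, but if an analogy from a text-based language helps, a reentrant VI behaves a bit like a class whose instances all share one copy of the code while each carries its own private data. The following Python sketch is purely illustrative; none of the names come from the testbed.

# Illustrative analogy only: every "clone" runs the same code,
# but each one gets its own private memory space.
class TemperatureController:
    def __init__(self, house_name):
        self.house_name = house_name    # data private to this clone
        self.last_reading = None        # also private to this clone

    def update(self, reading):
        # The same code executes in every clone, against its own data.
        self.last_reading = reading
        print(f"{self.house_name}: temperature is now {reading} degF")

# Three clones of the same code, three independent memory spaces.
hen_house = TemperatureController("Hen-house")
dog_house = TemperatureController("Dog house")
out_house = TemperatureController("Out house")
hen_house.update(72.5)
dog_house.update(65.0)
out_house.update(68.0)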

Bring in the clones…

So what do you need to do to embed this magic in your program, you ask? Well, perhaps, not as much as you would think…

Reentrancy

The first thing, obviously, is that in order to be cloneable, the VI must be reentrant. But for many people, this feature can at first be confusing and more than a bit intimidating (I know it was for me). The challenge is that, unlike most languages, LabVIEW actually supports two kinds of reentrancy. The easy one is the kind that creates "Preallocated" clones; it works the way reentrancy is described if you look up the term online. For many, the confusing part is the other kind, the "Shared" clones. The first time I saw the term, it struck me as an oxymoron of the first order. As I understood reentrancy, the whole point of reentrant execution was that clones used preallocated memory spaces that didn't get shared. So shared clones, what is that about? To answer that, we need to consider how LabVIEW execution works.

Normally, VI execution is blocking. By that I mean that two instances of the same VI cannot run at the same time because they share the same memory space. However, because they have their own memory spaces, multiple instances of reentrant VIs can execute in parallel. Consequently, when writing very low-level VIs that are going to be used in many, many places throughout your code, one would like to make them reentrant so the instances are not all blocking each other. However if you did that willy-nilly, you could develop a problem: memory. You could end up preallocating a lot of memory for dozens or even hundreds of instances that don’t actually run very often, or perhaps ever. To address that problem, LabVIEW created the concept of shared reentrancy that splits the difference between nonreentrant execution on the one hand, and normal reentrant execution (which LabVIEW now calls preallocated), on the other.

Shared cloning creates a pool containing a predefined number of sharable memory spaces that the reentrant code can use. Whenever a particular reentrant function is called, LabVIEW goes to its pool of memory spaces and uses one that isn't busy right then. If all the copies in the pool are busy, LabVIEW creates a new memory space for the VI, adds it to the pool, and then uses it. All in all, it's a pretty nice feature, and we will see the need for it later…
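If it helps to see that pooling behavior spelled out, here is a minimal sketch of the idea in Python. To be clear, LabVIEW manages its clone pool internally; the class and names below are my own invention for illustration.

# Sketch of shared-clone behavior: callers borrow an idle data space,
# and a new one is created only when every existing one is busy.
class DataSpace:
    def __init__(self, space_id):
        self.space_id = space_id
        self.busy = False

class SharedClonePool:
    def __init__(self, preallocated=2):
        # Start with a small pool of preallocated data spaces.
        self.spaces = [DataSpace(i) for i in range(preallocated)]

    def acquire(self):
        # Reuse any data space that isn't busy right now...
        for space in self.spaces:
            if not space.busy:
                space.busy = True
                return space
        # ...otherwise grow the pool on demand.
        space = DataSpace(len(self.spaces))
        space.busy = True
        self.spaces.append(space)
        return space

    def release(self, space):
        space.busy = False

pool = SharedClonePool()
a = pool.acquire()    # preallocated space 0
b = pool.acquire()    # preallocated space 1
c = pool.acquire()    # everything is busy, so space 2 is created
pool.release(b)
d = pool.acquire()    # space 1 is idle again, so it gets reused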

Parameterized Identity

The next thing clones need is a way to tell themselves apart. As a case in point, you can have three houses and three clones of our control process, but if all three clones try to control the same house, you still haven't accomplished anything useful. Clone 1 has to be able to tell that it is responsible for house 1. Likewise, clone 2 has to know that it is handling house 2, and clone 3 must direct its efforts to controlling house 3. The way to accomplish this assignment is through a process called parameterization: passing the clone an input parameter that it can then use to obtain the configuration information it needs to operate.

This area is one of those places where having an adequate data management infrastructure in place (read: database) really pays big dividends. Ideally, you should be able to provide a clone with an identity through a single value, like a name. Other times, the various clones may be controlling specific test systems, so perhaps all you would need is the ID number of the system assigned to the clone. Regardless of the exact nature of that identifying value, with a database in place the clone can use that name or number to query the database for the specific setup information that it needs.
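In text form, the whole idea boils down to something like the sketch below. The table and column names are strictly hypothetical (the real testbed database is organized differently), but they show how a single name is enough for a clone to pull its own setup.

import sqlite3

# Hypothetical schema and sample row, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE temperature_controllers "
             "(name TEXT PRIMARY KEY, sample_interval REAL, setpoint REAL, channel INTEGER)")
conn.execute("INSERT INTO temperature_controllers VALUES ('Dog house', 2.0, 65.0, 3)")

def load_clone_configuration(my_name):
    # Given nothing but its name, a clone can look up its own setup.
    row = conn.execute(
        "SELECT sample_interval, setpoint, channel "
        "FROM temperature_controllers WHERE name = ?",
        (my_name,),
    ).fetchone()
    if row is None:
        raise ValueError(f"No configuration found for clone '{my_name}'")
    sample_interval, setpoint, channel = row
    return {"sample_interval": sample_interval,
            "setpoint": setpoint,
            "channel": channel}

print(load_clone_configuration("Dog house"))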

Functional Completeness

A good clone should be functionally complete. By that, I mean that a clone needs to be a complete package unto itself, not needing a lot of detailed external interactions. A few high-level controls are fine, but you don’t want to expose too much detail.

As an example of what I mean, consider the following situation from life. Say you have a new employee in your department who needs to fill out some sort of form from HR. While it might be reasonable to explain what specific information is needed in one field or another, you shouldn't have to take the time to explain that the side of the form to fill out is the one with the printing on it. You also shouldn't have to describe the color of ink to use or the hand in which the person should hold the pen, or for that matter, that they should hold the pen in their hand and not between their toes! The only instruction that should be needed is, "Go fill out this form."

Likewise with a clone, you should be able to say things like, “This is the code that does the out house temperature control”, or, “This is the process that interfaces with the database.” A good way of judging how well you are doing in meeting this “completeness” goal is to consider whether you can change the details of what the clone does without impacting the rest of the application.

A principle that supports this sort of completeness is to make sure the clone's code isn't burdened with extraneous functionality like extensive GUIs. Remember that adding nonessential functionality limits the applicability of the code to only those situations where that added functionality is needed. As a practical matter, you particularly don't want clones to have the ability to open dialog boxes. In case you aren't clear on the reasoning behind that injunction, consider the pandemonium that would result if you had 15 or 20 clones running and something caused all of them to start opening dialog boxes simultaneously.

Managing Common Resources

Finally, because you can potentially have a large number of clones running at the same time, you need to carefully consider parallel access to common resources. Interfaces that work well for 2 or 3 clones can run into bandwidth problems when there are a couple dozen. Moreover, as the number of simultaneous accesses increases, so does the potential for access conflicts and race conditions. For example, I can't tell you how many times I have seen systems crash and burn over something as simple as two processes trying to write to the same file at the same time.

If you have a common resource that could become a bottleneck or produce a race condition, a solution that often works well is to create a separate process with the sole responsibility of managing that resource.
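As a concrete (if simplified) picture of that pattern, the Python sketch below funnels all file writes through one manager; the clones never touch the file directly, they just post messages. The names are, again, mine and not part of the testbed.

import queue
import threading

# One manager owns the log file; everyone else just posts messages to it.
log_queue = queue.Queue()

def log_manager(path):
    # The only code in the program that ever touches the file.
    with open(path, "a") as log_file:
        while True:
            line = log_queue.get()
            if line is None:          # shutdown sentinel
                break
            log_file.write(line + "\n")
            log_file.flush()

manager = threading.Thread(target=log_manager, args=("system.log",))
manager.start()

# Any number of clones can do this concurrently without colliding.
log_queue.put("Hen-house: 72.5 degF")
log_queue.put("Dog house: 65.0 degF")

log_queue.put(None)    # tell the manager to shut down
manager.join()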

Cloning our Temperature Controller

So what do we need to do in order to clone our temperature controller? We know that in order to be cloned, the VI needs to be reentrant, so Step One is to turn that feature on by going to the Execution page of the VI Properties and selecting the option for Preallocated Clones.

reentrant setting

That one setting handles the top-level, but what about the subVIs? They could still block each other and prevent the cloned controllers from running unobstructed. To prevent that possibility, you should go through the subVIs looking for routines that are called regularly, and make them shared clones. In the current code, there is one exception to this advice — and we’ll cover it next.

With the applicable code set to being reentrant, we need to consider the identity issue. That matter is, to an extent, already handled in the existing code because even nonreentrant VIs sometimes need to be told who they are. All we need to do is expand our use of the My Name label we are already passing to the dynamically launched VIs by using it to look up the configuration data for each clone.

The nonreentrant version of our temperature controller state-machine already had a VI for looking up the state-machine setups in the database; all we have to do is modify it so it accepts the My Name value as an input. Here is the modified version of the look-up VI.

load state machine settings

This is the VI that, as I said earlier, we do not want to make a shared clone, and there are two reasons. First, the VI only gets called once, so there would be no benefit. Second, when you consider what the VI is designed to do, making it a shared clone would actually be counterproductive. See the shift registers? They are there to buffer the results of the queries in order to reduce the number of database accesses the application has to make. In other words, they exist specifically to create a shared memory space, so making the VI a clone would defeat that purpose.
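If you are more comfortable thinking in text, those shift registers play roughly the role that the shared cache does in this little sketch (the helper function is a stand-in, not the real database call):

_settings_cache = {}    # shared by every caller, like the shift registers

def _query_database_for_settings(my_name):
    # Stand-in for the real database lookup.
    print(f"Hitting the database for '{my_name}'...")
    return {"sample_interval": 1.0, "setpoint": 70.0}

def load_state_machine_settings(my_name):
    # Only the first call for a given name touches the database;
    # later calls are served from the shared cache.
    if my_name not in _settings_cache:
        _settings_cache[my_name] = _query_database_for_settings(my_name)
    return _settings_cache[my_name]

load_state_machine_settings("Hen-house")    # queries the database
load_state_machine_settings("Hen-house")    # comes from the cache

Turn that shared cache into per-clone copies and every caller goes back to querying the database for itself, which is exactly the duplication the buffering was meant to avoid.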

Next we need to address the dialog box in the data saving routine. It really isn’t a good idea to give users the responsibility to correctly save data files anyway, so let’s change it.

modified save data

This modification saves the file with a dynamically generated name to a predefined location, in this case the "Documents" directory associated with the current Windows user. Note that part of the file name is a timestamp consisting of the year, month, day, hour, minute and second the file was created. I like this filename format because (assuming a 4-digit year, 2 digits for everything else, and a 24-hour clock for the hours) files named in this way will always sort by name in correct chronological order.
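In text form, the naming scheme amounts to something like this sketch (the prefix, extension and exact separator characters below are placeholders; the point is the fixed-width, most-significant-field-first ordering):

from datetime import datetime
from pathlib import Path

def build_data_file_path(prefix="temperature data"):
    # Year, month, day, hour, minute, second, all zero-padded, so an
    # alphabetical sort of the names is also a chronological sort.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return Path.home() / "Documents" / f"{prefix} {stamp}.csv"

print(build_data_file_path())
# e.g. ...\Documents\temperature data 20150615-142259.csv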

Beyond that we are pretty much done. The existing code was already concise and had no user interface to speak of — just a few controls that we would want to have for troubleshooting purposes anyway. The only other thing needed is to define the clones that we want in the database. When you run the modified code, you will see three clones with differing update rates — but you can create as many as you want by simply adding them (and their setup parameters) to the database.
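To make the scalability point concrete, here is a rough Python sketch of what the launcher is conceptually doing: reading a list of clone definitions and spinning up one instance per entry. The three entries and their update rates are invented for the example.

import threading
import time

# Stand-in for the rows you would add to the database.
clone_definitions = [
    {"name": "Hen-house", "update_rate_s": 1.0},
    {"name": "Dog house", "update_rate_s": 2.0},
    {"name": "Out house", "update_rate_s": 5.0},
]

def temperature_controller(name, update_rate_s, cycles=3):
    # Every clone runs this same code against its own parameters.
    for _ in range(cycles):
        print(f"{name}: reading temperature (every {update_rate_s} s)")
        time.sleep(update_rate_s)

# Adding or removing clones is just a matter of editing the definitions;
# the launching code itself never changes.
workers = [
    threading.Thread(target=temperature_controller,
                     args=(d["name"], d["update_rate_s"]))
    for d in clone_definitions
]
for w in workers:
    w.start()
for w in workers:
    w.join()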

We are almost done for now, so hopefully you see that in some ways dynamic linking isn't what this post is really about. Oh, I have presented a little bit of technique and a few code modifications, but the real lesson is that expanding the reach of your code in this way is not hard, as long as the code you are working with is well-designed from the beginning. Often you will hear people talking about all the modifications they had to make in order to make their code cloneable, but if you really look at it, you see that most of those modifications were in actuality fixing mistakes that should never have been made in the first place.

What Now

This should be enough for everybody to digest for now, so next week we’ll look at the other way to launch dynamic VIs — and why you would want to use it. (Hint: It addresses a problem that right now you don’t even know you have.)

Testbed Application — Release 11

Toolbox — Release 5

Until next time…

Mike…
