Command Line Arguments aren’t a relic of the past

Back when phones were something with wires attached to them, computer programs had nearly non-existent user interfaces. However, the need still existed to pass data to programs. The standard way developers accomplished this vital task was through cryptic codes called command line arguments. These codes, which were cryptic by necessity (a data entry line couldn’t be more than 80 characters), were simply appended to the name of the program the user wanted to run.

Despite the advances that have come along over the years, operating systems still support command line arguments — and LabVIEW applications still sometimes have a need for them.

How they work

Even today, commands still get sent to operating systems in the form of strings, and curiously, the primary delimiter still used to separate the name of the program from other parameters is the space. This is why, if you look at the shortcut for a program, the path to said program is almost always in double quotes: path and program names often contain spaces.

The way the process works is that anything before the first space not inside double quotes is considered to be the name of a program and is sent to the part of the OS that is responsible for launching stuff. Everything after that first space is sent to the program that is being launched, which is then responsible for determining whether the remaining string contains valid information or trash.

When LabVIEW (or the LabVIEW runtime engine) parses the remaining text, the first thing it looks for is an initial delimiter in the form of two hyphens in a row, like this:

command line arguments

Anything after the hyphens is returned to the program through an application property as an array of strings. Note that the double hyphens have a space on either side.
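For example, a shortcut’s Target field might look something like this (the install path here is purely hypothetical):

```
"C:\Program Files\Testbed\Testbed.exe" -- d146
```

The quoted portion is the program; everything after the space-delimited double hyphens (here, “d146”) shows up in your application as command line arguments.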

But what’s it good for?

Unfortunately, while it may be easy to describe what this feature does, it can be tough to identify a good use case. To help us identify where this feature might be useful, let’s consider some of the technique’s major attributes.

  • It’s not very convenient
    Because the technique depends on creating what is essentially a custom command for launching your application, it can only be used within the context of a Windows shortcut — and did I mention that you have to create the shortcut manually?

  • It can still be pretty cryptic
    Although you have complete freedom in defining how the arguments are formatted and what they do, you still have a maximum line length constraining how verbose your complete command line can be.

  • It’s not secure
    The command line arguments are unprotected and can be changed by anyone with even a smidgen of technical savvy. Moreover, they can be bypassed completely if the user simply chooses to launch the program by double-clicking the application itself.

  • It’s error-prone
    There is no way to error check arguments before they get passed to the application. Consequently, the application needs to be very careful and validate all inputs.

Doesn’t sound too promising, does it? Clearly command line arguments are not suitable for anything critical, so the application needs to run just as well when the arguments aren’t there. Likewise, you usually don’t want users to have to muck around with them. Well, believe it or not, there is at least one bit of functionality that falls well within this technique’s capability. Check it out.

Tracking execution

A common problem can arise when you deploy your application as an executable and it doesn’t work on the customer’s computer. Although LabVIEW provides really nice facilities for remote debugging, they sometimes aren’t usable. First, using them requires you to make a special build of your application that includes the remote debug capability. Second, this debugging technique assumes a level of network access that might not be available. Third, because you don’t want to sit for hours connected to a customer’s computer waiting for something bad to happen, it is only useful for problems that you can quickly and easily duplicate. However, many problems are difficult to recreate on command.

One of the earliest software troubleshooting techniques can be particularly useful in isolating this type of intermittent error. The technique involves inserting code in your application that simply prints program values to a log file. These logged values can range from critical internal values, like the result of an intermediate calculation, to an electronic version of bread crumbs that simply says, “I made it to the error checking state”. But if you are going to include this sort of logic, you are also going to need some way to control it. After all, we are trying to avoid creating special “debug” versions of our code, but every installation doesn’t need to be generating debug files all the time — this is where command line arguments come into play.

The basic idea is to go through your code and identify places where information is available that would be of potential value in debugging, and then create VIs that will write that information to a file. To save memory, you can even make these trace VIs dynamic so they are only loaded if they are enabled. But how do you enable them? A structure I often use is to assign each potential trace operation one bit of a U32 number. I pass the bit-mapped number into the application using a command line parameter and store it in an FGV. Each trace operation then looks at its bit in the number and only generates its trace if the bit is set.
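To make the bit-mapping idea concrete, here is a minimal sketch of the enable logic in Python (LabVIEW being graphical, this is pseudocode standing in for the FGV and the bit test; the names are mine):

```python
TRACE_ENABLES = 0   # stands in for the FGV holding the bit-mapped U32

def store_enables(bitmap: int) -> None:
    """Called once at startup with the value parsed from the command line."""
    global TRACE_ENABLES
    TRACE_ENABLES = bitmap

def trace_enabled(bit: int) -> bool:
    """Each trace operation checks only its own bit."""
    return bool(TRACE_ENABLES & (1 << bit))

store_enables(146)        # e.g. the argument "d146" (2 + 16 + 128)
print(trace_enabled(1))   # True:  bit 1 (value 2) is set
print(trace_enabled(0))   # False: bit 0 (value 1) is not
```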

Putting theory into practice

To see how this approach would actually work, let’s add some trace capability to our test bed application. The following figure shows the mappings of trace operations to bits:

bit mappings

Note that we will actually be using 10 bits because three of the trace operations are in our reentrant temperature controller state machine, and we want to be able to control the trace for each clone individually. Next, we want to define the syntax for the command line parameter. To keep it simple, we will use the structure I showed above, where the value following the “d” is the bit-mapped number that provides our enables. To generate this number, simply total the numeric values for each of the trace options you wish to enable. For example, the argument “d1” would turn on just the Sample Period Change trace, while the argument “d146” (2+16+128) would turn on the Fan State Change trace for all three temperature controllers.

In case you’re wondering how users are supposed to generate these magic numbers, that’s easy — they don’t. The intended use is that a support person will tell the customer what to enter with the instruction, “When you see the problem we are troubleshooting, go to the directory where the application is installed. You will find there a file named ‘trace.log’. Email it to us.”

Reading the arguments

In the LabVIEW environment, you gain access to the command line arguments by reading the Command Line Arguments application property. The data is returned as a 1D array of strings that LabVIEW creates using the space as the delimiter between elements. This property is always available in the development environment, but in a runtime system you have to enable it in the Advanced section of the application builder parameters. To retrieve our debugging parameter, I built a VI that searches this array for the first element that starts with the letter “d” and then converts the remainder of the string into a number.

initialize debugging

Once it has isolated the number, the code converts it into a Boolean array and stores the result in an FGV for later use.
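Functionally, that search-and-convert step amounts to something like this sketch (Python; the real VI does the same job with LabVIEW string and array primitives):

```python
def read_debug_bitmap(args: list[str]) -> int:
    """Return the numeric value of the first argument starting with 'd',
    or 0 if no such argument exists or its remainder isn't a number."""
    for arg in args:
        if arg.startswith("d"):
            try:
                return int(arg[1:])
            except ValueError:
                return 0   # bad input simply leaves all traces disabled
    return 0

print(read_debug_bitmap(["d146"]))      # 146
print(read_debug_bitmap(["x1", "d9"]))  # 9
print(read_debug_bitmap([]))            # 0
```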

Creating the traces

With the array of Boolean values safely tucked away in an FGV, we can begin building the code that implements the trace functionality. However, not being stupid, the first thing I do is create a subVI that formats the trace entries in a standard way. Assuming no errors were generated before the trace operation is called, each entry consists of a timestamp, the name of the trace point generating the data, and the trace data itself.

save trace data
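Boiled down to pseudocode, the formatting subVI does something like this (a Python sketch; the file name and field layout are illustrative, the actual format lives in the VI shown above):

```python
from datetime import datetime

def save_trace(trace_name: str, data: str, log_path: str = "trace.log") -> None:
    """Append one standard-format entry: timestamp, trace point, data."""
    stamp = datetime.now().isoformat(sep=" ", timespec="seconds")
    with open(log_path, "a") as log:
        log.write(f"{stamp}\t{trace_name}\t{data}\n")
```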

If an error did occur earlier in the error chain, the trace subVI instead saves the error. Ideally, all errors should be saved to the error log as part of your program’s normal operation — but mistakes in propagating errors can easily happen.

save trace data - error

With that foundation laid, we move up a layer in the code hierarchy, and the first application-specific trace option we create is one that will record changes in the sample period that the two acquisition processes use to pace their operation.

sample period trace option

Thanks to the subVI created a moment ago, you can see that all the trace VI really has to do is check that its bit is set in the array of Boolean enables from the FGV and (if it is) pass the trace data to the subVI. To simplify things, I also built a subVI to encapsulate the indexing process and return an enumeration giving the trace state.

The remaining options we want to implement are for the temperature controller. First, we want to save a trace message whenever the fan or cooler state changes. This is the trace VI for the Fan State Change trace. The Cooler State Change trace is almost identical:

fan state change trace option

A new challenge that this code needs to address is that, for it to work properly, the VI needs to know which of the three clones is calling it. To meet this new requirement, I added a new parameter to the data that is passed to the clones when they are launched. Called Launch Order, it tells each clone its ordinal position in the launch sequence. With that number and a little basic math, the trace VI can calculate the correct index for its enable.
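The post doesn’t show that arithmetic, but the d146 example pins it down: bits 1, 4 and 7 all map to the fan trace, one per clone. One plausible sketch of the math:

```python
# Assumed layout: bit 0 is Sample Period Change, and each clone owns a
# block of three consecutive bits for its fan/cooler/state traces.
FAN_BASE, COOLER_BASE, STATE_BASE = 1, 2, 3   # base bits (assumption)

def trace_bit(base: int, launch_order: int) -> int:
    """Launch Order is the clone's ordinal position, 1..3."""
    return base + 3 * (launch_order - 1)

# Fan bits for clones 1..3 come out as 1, 4 and 7, i.e. 2 + 16 + 128 = 146.
assert [trace_bit(FAN_BASE, n) for n in (1, 2, 3)] == [1, 4, 7]
```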

The last trace operation (also in the temperature controller) records the state machine’s state transitions. Due to its similarity to the others, it’s not worth showing its code. However, this trace option is a good example of a type of operation that can be very helpful, but which you want to be sure isn’t left enabled for an extended period of time. Every state change will be recorded — an operation that can generate a lot of entries really fast. Consequently, if you put in this sort of trace you should think about augmenting it, for example, with the ability to only report certain state transitions, or perhaps only save the last 100 trace messages.

Testing our work

To see this code in action you can build an executable, or run it from within the development environment. In either case, you need to start by creating a shortcut on your desktop that points to either LabVIEW.exe or Testbed.exe. Next, edit the shortcut to add a debug argument with a numeric value of your choosing.

Now use the shortcut to launch LabVIEW or the testbed application and monitor the project directory. A trace.log file will appear and be updated as the application executes. A handy tip: if you are going to be testing a number of set configurations, it can save time to create several shortcuts with different debug configurations preconfigured.

Testbed Application Release 14
Toolbox Release 7

So this is one use I came up with for this technique. Where do your intuition and imagination say to try it? In the meantime, the requisite teaser for the next post…

Objectifying LabVIEW

The National Instruments training course on object-oriented programming recommends trying it in small doses as you are learning — an attitude I heartily endorse. Next time we are going to take a small step in that direction by developing a more general solution for at least part of our data management. Along the way we’ll discover that while it may not be the ultimate programming paradigm that will revolutionize civilization as we know it (as some object-oriented fanboys proclaim), it can still be really useful — even in small doses.

Until Next Time…

Mike…

How to Make Dynamic SubVIs

The last two posts discussed different ways to use dynamic linking and calling in situations where you want the target VI to run in parallel with the rest of the application. Another major use case for dynamic calling is to create software plugins.

I use the term “plugin architecture” to indicate a technique that strives to simplify code by facilitating runtime changes to limited portions of the code while allowing the basic logic for the function as a whole to remain intact and unchanged. For example, say you are implementing a control algorithm that has, for the sake of argument, two inputs: a position and a load. As long as the readings are accurate, timely and in the proper units, most of the control algorithm doesn’t care how these inputs are measured. It would, therefore, simplify the code base as a whole to incorporate a technique that allows the reuse of all the common code by simply “plugging in” different acquisition modules that support the various types of sensors that can measure position and load.

This isn’t rocket science

This goal might sound lofty, but the hurdle for getting into it is actually pretty low. In fact, when I teach the LabVIEW Core 1 and 2 week, I will sometimes introduce what LabVIEW can do by demonstrating a simple plugin architecture that uses only concepts introduced and discussed in those two classes.

I start the demo by opening an application that I have written to implement a simple calculator. It has two numeric inputs labeled “X” and “Y” and an indicator labeled “Output”. The only other control is a popup menu that lists the two math operations it can perform: addition and subtraction. I first show that you can use the simple program to add and subtract numbers.

I then comment that it would be nice if my program could multiply numbers as well. So I drag the program window to one side but leave it running. I then open a new window and add two inputs and an output (labeled like the program). On the block diagram I drop down a multiplier, wire it up, and save the VI with the name Multiply.vi. After closing that VI’s front panel, I tell the class that although they just watched me create the ability to multiply numbers, the program, which is already running, has already acquired it. At this point, I pull the program front panel back over and show that the menu, which only moments before said “Add” and “Subtract”, now has a third option: “Multiply”. Moreover, the new selection does indeed allow me to now multiply numbers. Finally, I make the point that this capability to dynamically expand and change functionality was created using nothing but things that they will be learning in the coming week.

While the students at this point might not be cheering, they are awake and have some motivation to learn. You better believe that when I show them the code Friday afternoon and they can recognize how the program works, they are excited. So let’s see what I can do now to motivate and excite you…

It’s all about the packaging

As we begin to get into the following use cases, you should notice that the actual code needed to implement each solution is not really very complicated. In fact, in the following sections we will be dealing with many of the same functions as when we were dynamically launching separate processes. The tricky part is that we need to fit all the data management and launch logic in a space that would otherwise be occupied by a single VI. For this reason we need to be very careful when packaging this code. To demonstrate what I mean, let’s consider three common use cases that all call a simple test VI (called, amazingly enough, Test.vi) that simply returns a random number. I have ordered these cases such that each example builds on what went before:

System Initialization

I like to start here when explaining these concepts because it is the simplest structure and so is easy to understand.

System Initialization

Here you see laid out all the basic pieces required to dynamically call a subVI. The first VI (the one marked “Dummy”) is the one we will use throughout this post to represent the data management needed to get the path to the VI that will be called dynamically. Depending upon your application, this could come from an INI file, database, or what have you. We won’t go into a lot of detail on that VI because we have discussed the various techniques pretty thoroughly in other posts.

You should recognize the next node (Open VI Reference), as we have already used it a lot. It opens a reference to the VI specified in the input VI name or path. This action also has the effect of loading the VI into memory. Just as statically linked subVIs can be reentrant, you can specify reentrancy here as well, if needed.

Once the VI is open, we pass its reference to a node that looks a lot like the Start Asynchronous Call we have already used. This node is Call by Reference and, like the prior one, it expects a strict VI reference and in return provides a connector pane for passing data to the target. However, unlike the asynchronous call, this node always waits until the target finishes executing before continuing; hence its representation of the target’s connector pane also allows us to get data back from the target.

Finally, in order to remove the target VI from memory, we need to close the reference when we are done with it — which is what the last node does.
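If a textual analogy helps, the whole open/call/close sequence is roughly equivalent to this Python sketch (module loading standing in for VI references; the function name “main” is an arbitrary assumption):

```python
import importlib.util

def run_once(path: str, func: str = "main", *args):
    spec = importlib.util.spec_from_file_location("dyn_module", path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)          # Open VI Reference: load into memory
    try:
        return getattr(module, func)(*args)  # Call by Reference: run and wait
    finally:
        del module                           # Close Reference: allow unloading
```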

When using this technique it is important to keep in mind that, because it loads, executes and unloads the VI all at once, it is very inefficient in a situation where it will be run multiple times. Sort of like using an Express VI inside a loop — but worse.

However, one place where this technique can be used very effectively is system initialization. It’s not uncommon for large and complex applications to require an equally large and complex initialization process, especially the first time they are run. Although you could just write a VI to do this initialization and install it in the program, why burden your program with a bunch of code that may only execute one time — ever? This technique allows you to load and execute a VI only if you actually need it.

Function Substitution

This use case is perhaps the most common of the three. It uses the same nodes we just saw, but because it can be used essentially anywhere, the packaging has to be very efficient. A solution I often use is to create a mini state machine that has states for loading, executing and unloading the target VI, as well as one more for deciding what to do first. The user interface has one (enumerated) control that allows you to specify whether you want to Load the target, Run the target, or Unload the target.

The inputs are pretty simple, and the state-machine logic mirrors that simplicity. If the function requested is Load, the Startup state transitions directly to the Open VI state which opens and buffers the VI reference. Here is the code for these two states:

Function substitution - Startup - Load

Function substitution - Open VI

If the function requested is Run (the input’s default value, by the way), the Startup state first checks to see if the VI reference is valid and, if it is, transitions to the Execute state.

Function substitution - Startup - Run

Function substitution - Execute

If the reference is not valid, execution instead continues with the Open VI state which returns to the Execute state after opening the reference.

Finally, if the function requested is Unload, the Startup state transitions directly to the Close VI state:

Function substitution - Startup - Unload

Function substitution - Close VI

The result of all this work is a flexible VI that can be used in a variety of different ways depending upon system requirements. For example, if the target VI loads quickly or the calling process doesn’t have tight timing constraints, you can just install it and let the first call both load the VI into memory and run it. Alternately, if you don’t want the first call to be delayed by that initial load, you can call it once in a non-time-critical part of the code to just Load the target, and then Run it as many times as you like later.
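Here is a rough behavioral sketch of that Load/Run/Unload logic (Python pseudocode; a plain callable stands in for the buffered VI reference):

```python
from enum import Enum, auto
from typing import Callable, Optional

class Function(Enum):
    LOAD = auto()
    RUN = auto()      # the input's default value
    UNLOAD = auto()

class DynamicSubVI:
    def __init__(self, loader: Callable[[], Callable]):
        self._loader = loader                  # knows how to "open" the target
        self._ref: Optional[Callable] = None   # buffered "VI reference"

    def call(self, function: Function = Function.RUN, *args):
        if function is Function.LOAD:          # Startup -> Open VI
            self._ref = self._loader()
        elif function is Function.RUN:         # Startup -> (Open VI) -> Execute
            if self._ref is None:              # reference not valid yet
                self._ref = self._loader()
            return self._ref(*args)
        elif function is Function.UNLOAD:      # Startup -> Close VI
            self._ref = None

# Preload in a non-time-critical spot, then run as often as you like:
target = DynamicSubVI(lambda: (lambda x: x * 2))
target.call(Function.LOAD)
print(target.call(Function.RUN, 21))           # 42
target.call(Function.UNLOAD)
```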

In the same way, the Unload function, which normally isn’t needed, can be used to release the memory resources used by a large dynamic subVI when you know that you aren’t going to be needing it for a while.

This approach could even be extended to create a simple test executive that dynamically loads and unloads whole sets of VIs. In such a situation, though, you probably don’t want to be tied to a single connector pane, so you should consider changing the VI reference to non-strict and modifying the Execute state to use our old friend the Run VI method, like so:

Function substitution - Execute - Run VI Method

Interprocess Communication

This use case is in many ways the most expansive of the three because it supports communications between different processes regardless of their physical location. In terms of code, there is very little difference between this case and the previous one. In fact, the only change that is really required is to the Load state. This is what the network-enabled version looks like:

Function substitution - Open VI - IPC

That new icon in front of Open VI Reference is the one that makes the magic happen. Its name is Open Application Reference, and its job is to return a reference to the instance of LabVIEW that is hosting the VI you want to access. To make this connection, you need to know the host name or IP address of the computer running the LabVIEW application, and the port number the instance is using for incoming connections. If the application you want to access is on your own computer, you can either leave the host name string empty or use the name localhost. The port can be any number that isn’t already being used. One common mistake is to simply accept the default value, which is the port that LabVIEW listens to by default. This causes problems during testing because now there are two processes (the LabVIEW development environment and the application) trying to use the same port.

An important point to remember is that while the Call by Reference node passes data back and forth, the called VI actually executes on the remote system. Hence, this sort of operation works independent of the target system’s operating system or even the version of LabVIEW that it is running.

Setting up a system to use this technique involves properly configuring the application to which you are going to be connecting — though it often doesn’t require any code changes. This amazing benefit is possible because the underlying networking is handled by the runtime engine, not the application code you write. On a couple of occasions I have come into a place and added remote-control functionality to an old application that was not even designed with that capability in mind.

You will, however, have to make changes to the target application’s INI file to enable networking. And you, obviously, will need access to the application’s source code so you know what VIs to call. Likewise, if you are accessing a computer outside your local network, there can be a variety of network communications details to sort out that are beyond the scope of this post. One other thing to bear in mind is that it is always easier to connect to a remote VI if it is already in memory. The reason is that if the VI is already in memory, all you need to know is the VI’s name. If you are also loading it into memory, you have to know the complete path — the notation of which can change between platforms.
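For reference, the VI Server keys involved typically look something like the following (the section name matches the executable, the port is an arbitrary non-default choice, and the wide-open access lists are for illustration only; check the documentation for your LabVIEW version before opening things up):

```ini
[Testbed]
server.tcp.enabled=True
server.tcp.port=3365
server.tcp.access="+*"
server.vi.access="+*"
```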

But assuming you establish the connection, what exactly can you do with it? The answer to that lies in what VIs you choose to execute from the remote process. If you execute a VI that fires an event, that event gets fired on the remote system, so you can use it as a channel for controlling that application.

Alternately, say you choose to execute a VI that is a functional global variable (FGV). Depending on whether you are reading from or writing to the FGV, you are now either passing data to the remote system or collecting data from it. By the way, this is the method I still typically use for passing data over a network between LabVIEW applications — not network-enabled shared variables. Unlike that later “enhancement”, a dynamically called FGV doesn’t need to be “deployed”, is more memory efficient, has a smaller code footprint, is far easier to troubleshoot if there is a problem, and works across all releases of LabVIEW back to Version 6.

The only real limit to what you can do with this technique is your imagination.

So there are the three major use cases for the dynamic linking of subVIs. This discussion is by no means a complete consideration of the topic, but hopefully it will whet your appetite to experiment a bit. The link below is to an archive containing all three examples to get you started.

Dynamic Linking Examples

Next time: Command Line Parameters are not a relic of the past.

If you have spent any time poking into some of the more esoteric corners of the application builder, you may have noticed a checkbox in the Advanced section labeled, “Pass all command line arguments to application”, and wondered what that is all about. Well, wonder no more, that little checkbox is what we are going to discuss next time out — and while we’re at it I’ll cover a really neat use for it.

Until next time…

Mike…

Raising the Bar on Dynamic Linking Even Further

Important: Before we get started this week, if any of you have downloaded the code from last week and have had problems with it, please update your working copy with what is currently in SVN. In working on this post, I found some “issues” — including the significant problem that the database that defines everything didn’t get included in the repository release. The current contents of the repository should fix the problems, and I am sorry for any hair loss and/or gnashing of teeth these problems might have caused.

Also, if you plan to run this code in the development environment, be sure to name the directory where it is located “testbed”.

These days, the most common way of implementing the dynamic calling of VIs in LabVIEW is through the Start Asynchronous Call node. In addition to being efficient, this technique is also very convenient in terms of passing data to the VI being called. Because it replicates the VI’s connector pane on the node, all you have to do is wire to the terminals. However, this convenience comes at a price. For this type of call to work, you have to know ahead of time what the connector pane of the VI being called looks like. This constraint can be a problem because it is not uncommon for situations to arise where you want to dynamically call code whose connector pane varies, is irrelevant (because no data is being passed), or is unknown. As you should by now be coming to expect, LabVIEW has you covered for those situations as well.

Where There’s a Will, There’s a Way Method

Over the years as LabVIEW developed as a language, its inherent object orientation began to become more obvious. For example, when VI Server was introduced it provided a very structured way of interacting with various objects within LabVIEW, as well as LabVIEW itself. With VI Server you can control where things appear on the front panel, how they look and even how LabVIEW itself operates. Although it didn’t reach its full expression until the Scripting API was released, the potential even existed to create LabVIEW code that wrote LabVIEW code.

We won’t be needing anything that complex, however, to accomplish our goal of dynamically launching a VI where we don’t have advance knowledge of its connector pane. In fact the part of VI Server that we will be looking at here is one of the oldest and most stable — the VI object interface. Just as you can get references to front panel controls, indicators and decorations, you can also get references to VIs as a whole.

Like control references, VI references come in two basic forms: strict and non-strict. To recap, a strict control reference contains added information that allows it to represent a particular instance of the given type of control. For example, a strict cluster control reference knows the structure, or datatype, of a particular cluster. By contrast, a non-strict cluster reference knows the control is a cluster, but can’t directly tell you what the various items are that make up the cluster.

In the same way, strict VI references, like we have been using to dynamically launch VIs, know a great deal about a specific class of VI, including the structure of its connector pane. This is why the Start Asynchronous Call node can show the connector pane of the target VI. However, as stated earlier, this nice feature only works with VIs that have connector panes exactly matching the prototype. As you might suspect, the solution to this problem is to use a non-strict VI reference, but that means we need to change our approach to dynamic launching a bit. Instead of using a special node, we’ll use standard VI Server methods to interact with and run VIs.

Mix and Match

To see how this discussion applies to our testbed code base, consider that to this point we have used a single technique to launch all the processes associated with the application. Of course, making that approach work required one teeny tiny hack. Remember when we added to the data source processes an input that tells them “who they are”? Well, that modification necessitated a change to the VIs’ connector panes, and because we were launching all the processes the same way, I had to make the same change to all the processes — even those that didn’t need the added input, like the GUI and the exception handler.

So big deal, right? It was only one control, and it only affected 2 VIs. Well maybe in this case it isn’t a huge issue, but what if it weren’t 2 VIs that needed to be changed, but 5 or 6? Or what if all the various processes needed different things to allow them to initialize themselves? Now we have a problem.

The first step to address this situation was actually taken some time ago when the launcher was designed to support more than one launch methodology. You’ll remember that last week, when creating the dynamically launched clones, we didn’t have to modify the launcher because it was written from day one to support reentrant VIs. What we have to do now is expand on this existing ability to mix and match VIs with launch methodologies to include two new options in the Process Type.ctl typedef. Here’s what the code for the first addition looks like:

run method - nonreentrant

As before, we start by opening a reference to the target VI, but this time it’s a non-strict reference. Next, we invoke the Run VI method, which has two inputs. The first input specifies whether we want to wait for the target VI to finish executing before continuing, and we set it to false. The second parameter is somewhat obscurely named Auto Dispose Ref. It specifies what to do with the VI reference after the VI is launched. In its default state (false), the calling VI retains the VI reference and is responsible for closing it to remove the target VI from memory. In addition, if the calling VI retains the reference to the dynamic VI, when the caller quits, the dynamic VI is also aborted and removed from memory. On the other hand, when this input is set to true, the reference is handed off to the target VI, so the reference doesn’t close until the dynamic VI quits — which is what we want.
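If it helps to see the two behaviors side by side, here is a loose threading analogy (Python; conceptual only, not how LabVIEW manages references internally):

```python
import threading

def run_vi(target, *args, auto_dispose_ref: bool = True):
    """Start 'target' without waiting (Wait Until Done = false).
    auto_dispose_ref=True: lifetime belongs to the callee; the caller gets
    nothing back to manage. False: the caller keeps the handle and the
    callee's fate stays tied to it."""
    worker = threading.Thread(target=target, args=args,
                              daemon=not auto_dispose_ref)
    worker.start()
    return None if auto_dispose_ref else worker
```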

The other new launch option is like the first, except that it wires in the option constant telling Open VI Reference to open a reference to a reentrant clone. Other than that, it works exactly the same.

run method - reentrant

So with these two new launch methodologies created, all we have to do is change the database configuration for the GUI and Exception Handler processes to use the nonreentrant version of the Run VI method and we are done, right? Well, not quite…

One of the “quirks” of the Run VI method is that although it does start a VI executing, if that VI is configured to open its front panel when run (like our GUI is), the open operation never gets triggered and the front panel stays closed. The result is that the VI will be open and running; you just won’t be able to see it.

To compensate for this effect (and the corresponding effect that the front panel won’t automatically close when the VI finishes), we need to add to the GUI a couple of VIs from the toolbox that manage the opening and closing of the GUI’s front panel.

open front panel

That’s the opener there, the last one in line after all the initialization code. This placement is important because it means that nearly all the interface initialization will be completed before the front panel opens. The result looks much more professional. By the way, this improved appearance is why I rarely use the option to automatically open a VI’s front panel when it is run.

close the front panel

And here is the closer. The input parameter forces the front panel closed in the runtime engine, but allows it to stay open during development — a helpful feature if there was an error.

Where do we go from here?

So those are the basics of this technique, but there is one more point that needs to be covered. Earlier I talked about flexibility in passing data, so how do you pass data with this API? Well, we ran the VI using a method, so as you would expect, there are other methods that allow you to read or set the values of front panel controls. This is what the interface to the Control Value Set method looks like:

the set control value method

It has two input parameters: a string that is the label of the control you want to manipulate, and a variant that accepts the control’s new data value. Note that because LabVIEW has no way of knowing a priori what the datatype should be, you can get a runtime error here if you pass an incorrect datatype. Obviously, with this method your code can only set one control value at a time, so unless you only have 1 or 2 controls that you know you will need to set, this method will often end up inside a loop like so:

set control value in a loop

…but this brings up an interesting, and perhaps exciting, idea. Where can we get that array of control name and value pairs? Would it not be a simple process to create tables in our database to hold this information? And having done that, would you not have created a system that is supremely (yet simply) reconfigurable? This technique also works well with processes that don’t need any input parameters to be set. The loop for configuring control values passes the VI reference and error cluster through on shift registers and auto-indexes on the array of control name/value pairs. Consequently, if a given VI has no input parameters, the array will be empty and the loop will execute 0 times — effectively bypassing the loop with no added logic needed, as the sketch below shows. By the way, this is an important principle to bear in mind at all times: whenever possible, avoid “special cases” that have to be managed by case structures or other artificial constructs.
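Here is a quick sketch of that zero-iteration idea (Python; set_control_value is a hypothetical stand-in for the Control Value Set call):

```python
def configure_controls(set_control_value, pairs):
    """Apply a list of (label, value) pairs to a VI's front panel."""
    for name, value in pairs:        # auto-indexing: runs zero times if empty
        set_control_value(name, value)

configure_controls(print, [])                        # no inputs: loop bypassed
configure_controls(print, [("Setpoint", 72.5),
                           ("Units", "degF")])       # two controls set
```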

More to Come

Over two consecutive posts, we have now covered two major use cases for dynamic linking: VIs that will run as separate processes. But there is another large use case that we will look at the next time we get together: how do you dynamically link code that isn’t a separate process, but logically is a subVI?

Testbed Application — Release 12

Toolbox — Release 6

Until next time…

Mike…

Taking Flexibility to the Next Level — Dynamic Linking

When people start using LabVIEW, they always find fascinating the way it allows them to assemble complex functionality from simple building blocks called subVIs. However, there are a variety of use cases where — for very legitimate reasons — the usual technique of interconnecting subVIs with wires just won’t do. One thing many of these use cases have in common is the desire to put off until runtime the decision of what code the program should execute next. Noting the power of being able to add functionality to our computers by plugging in a new card or USB dongle, a common wish is to find a way to do the same thing in software: a way to create software plugins.

In fact the potential benefits of this sort of structure were so great that in the early days of LabVIEW’s existence, developers would go to absurd lengths to realize even a small bit of it. For example, the only way originally of creating software plugins was to take advantage of the fact that LabVIEW links to subVIs by name and manually change the names of files to select the one you wanted to use before opening the top-level VI. If you’re wondering how that impacted building an application, that wasn’t a problem — there was no application builder!

Thankfully, today we have many different options for creating a true plugin architecture through dynamically calling and linking VIs. To explore those options, over the next few posts I’m going to consider a few of the basic use cases and demonstrate the technique that serves best for each one.

What We Have Already Seen

A good place to start this exploration is a use case with which we are already familiar — the one embodied in our testbed application. When I originally presented the idea of structuring the program as a group of independent processes that were dynamically launched, I spoke about it in terms of promoting a kind of modularization that simplified the code, made the application as a whole more robust, and fostered reusability. While all that is true, there is another side to that coin. By providing you with the ability to select at runtime the capabilities of the software, it opens the door for dramatically improved scalability. To see how that would work, consider the following scenario.

The most recent change to the testbed was to create a state-machine temperature controller that was (as I said in the post) only suitable for controlling the temperature of a “…hen-house, dog house or out house…”. But let’s say you have more than one hen-house, dog house or out house that needs temperature control. How should we handle that? One solution that you often see used is to simply duplicate that one VI, but then what if the number of “houses” changes? You could be left in the position of constantly modifying the application to add or subtract resources. Let me show you a better way: a way to reuse the exact same VI as many (or as few) times as might be needed, but without modifying the code to change the number of instances created. The trick is to make the code reentrant.

The basic idea behind a reentrant VI is that each time it is called in a program, LabVIEW reuses the same code, but gives each instance its own memory space. The result is the same as if you had multiple copies of the same VI. This basic operating principle is the same regardless of whether the reentrant VI is linked into your program statically, or is being called dynamically. When making a call to a reentrant VI, the resulting instance running in memory is called a clone. It’s important to remember that while all the clones of a given VI will use the same common code, they each have (in addition to their own memory space) their own block diagram for debugging and front panel for user interaction. In some ways it’s as though you are dynamically creating new VIs out of thin air!

Bring in the clones…

So what do you need to do to embed this magic in your program, you ask? Well, perhaps not as much as you would think…

Reentrancy

The first thing, obviously, is that in order to be cloneable, the VI must be reentrant. But for many people this feature can at first be confusing and more than a bit intimidating (I know it was for me). The challenge from the perspective of LabVIEW is that, unlike most languages, LabVIEW actually supports two kinds of reentrancy. The easy one is the kind that creates “Preallocated” clones; it works the way you see reentrancy described if you look up the term online. For many, the confusing part is the other kind: the “Shared” clones. I know the first time I saw the term, it struck me as an oxymoron of the first order. As I understood reentrancy, the whole point of reentrant execution was that clones used preallocated memory spaces that didn’t get shared. So shared clones: what is that about? In order to answer that, we need to consider how LabVIEW execution works.

Normally, VI execution is blocking. By that I mean that two instances of the same VI cannot run at the same time because they share the same memory space. However, because they have their own memory spaces, multiple instances of reentrant VIs can execute in parallel. Consequently, when writing very low-level VIs that are going to be used in many, many places throughout your code, one would like to make them reentrant so the instances are not all blocking each other. However, if you did that willy-nilly, you could develop a problem: memory. You could end up preallocating a lot of memory for dozens or even hundreds of instances that don’t actually run very often, or perhaps ever. To address that problem, LabVIEW created the concept of shared reentrancy, which splits the difference between nonreentrant execution on the one hand, and normal reentrant execution (which LabVIEW now calls preallocated) on the other.

Shared cloning creates a pool of a predefined number of sharable memory spaces that the reentrant code can use. Whenever a particular reentrant function is called, LabVIEW goes to its pool of memory spaces and uses one that isn’t busy right then. If all the copies in the pool are busy, LabVIEW creates a new memory space for the VI, adds it to the pool, and then uses it. All in all, it’s a pretty nice feature that we will see the need to use later…
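A loose sketch of that grow-on-demand pool behavior (Python; purely illustrative of the concept, not LabVIEW’s actual implementation):

```python
import queue

class SharedClonePool:
    """A pool of reusable 'memory spaces' that grows only when all are busy."""
    def __init__(self, factory, preallocate: int = 2):
        self._factory = factory
        self._idle = queue.SimpleQueue()
        for _ in range(preallocate):
            self._idle.put(factory())

    def acquire(self):
        try:
            return self._idle.get_nowait()   # reuse an idle space
        except queue.Empty:
            return self._factory()           # all busy: allocate a new one

    def release(self, space) -> None:
        self._idle.put(space)                # back into the pool for reuse
```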

Parameterized Identity

The next thing clones need is a way to tell themselves apart. As a case in point, you can have 3 houses and 3 clones of our control process, but if all 3 clones try to control the same house, you still haven’t accomplished anything useful. Clone 1 has to be able to tell that it is responsible for house 1. Likewise, clone 2 has to know that it is handling house 2, and clone 3 must direct its efforts to controlling house 3. The way to accomplish this assignment is through a process called parameterization: passing to the clone an input parameter that it can then use to obtain the configuration information it needs to operate.

This area is one of those places where having an adequate data management infrastructure in place (read: database) really pays big dividends. Ideally, you should be able to provide a clone with an identity through a single value, like a name. Other times, the various clones may be controlling specific test systems, so perhaps all you would need is the ID number of the system assigned to the clone. Regardless of the exact nature of that identifying value, with a database in place the clone can use that name or number to query the database for the specific setup information that it needs.

Functional Completeness

A good clone should be functionally complete. By that, I mean that a clone needs to be a complete package unto itself, not needing a lot of detailed external interactions. A few high-level controls are fine, but you don’t want to expose too much detail.

As an example of what I mean, consider the following situation from life. Say, you have a new employee in your department that needs to fill out some sort of form from HR. While it might be reasonable to explain what specific information is needed in one field or another, you shouldn’t have to take the time to explain that the side of the form to fill out is the one with the printing on it. You also shouldn’t have to describe the color of ink to use or the hand in which the person should hold the pen — or for that matter, that they should hold the pen in their hand and not between their toes! The only instruction that should be needed is, “Go fill out this form.”

Likewise with a clone, you should be able to say things like, “This is the code that does the out house temperature control”, or, “This is the process that interfaces with the database.” A good way of judging how well you are doing in meeting this “completeness” goal is to consider whether you can change the details of what the clone does without impacting the rest of the application.

A principle that can be supportive of this sort of completeness is to make sure the clone’s code isn’t burdened with extraneous functionality like extensive GUIs. Remember that adding in nonessential functionality limits the applicability of that code to only those situations where that added functionality is needed. As a practical matter, you particularly don’t want clones to have the ability to open dialog boxes. In case you aren’t clear on the reasoning behind that injunction, consider the pandemonium that could result if you have 15 or 20 clones running and something happens that causes all of them to start opening dialog boxes simultaneously.

Managing Common Resources

Finally, because you can potentially have a large number of clones running at the same time, you need to carefully consider parallel access to common resources. Interfaces that might work well for 2 or 3 clones could run into bandwidth problems when there are a couple dozen. Moreover, as the number of simultaneous accesses increases, so does the potential for access conflicts and race conditions. For example, I can’t tell you how many times I have seen systems crash and burn over something as simple as two processes trying to write to the same file at the same time.

If you have a common resource that could become a bottleneck or produce a race condition, a solution that often works well is to create a separate process with the sole responsibility of managing that resource.

Cloning our Temperature Controller

So what do we need to do in order to clone our temperature controller? We know that in order to be cloned, the VI needs to be reentrant, so Step One is to turn that feature on by going to the Execution page of the VI Properties and selecting the option for Preallocated Clones.

reentrant setting

That one setting handles the top-level, but what about the subVIs? They could still block each other and prevent the cloned controllers from running unobstructed. To prevent that possibility, you should go through the subVIs looking for routines that are called regularly, and make them shared clones. In the current code, there is one exception to this advice — and we’ll cover it next.

With the applicable code set to reentrant, we need to consider the identity issue. That matter is, to an extent, already handled in the existing code because even nonreentrant VIs sometimes need to be told who they are. All we need to do is expand our usage of the My Name label we are already passing to the dynamically launched VIs by using it to look up the configuration data for each clone.

The nonreentrant version of our temperature controller state machine already had a VI for looking up the state-machine setups in the database; all we have to do is modify it so it accepts the My Name value as an input. Here is the modified version of the look-up VI.

load state machine settings

This is the VI that I said earlier we do not want to make a shared clone, and there are two reasons. First, the VI only gets called once, so there would be no benefit. Second, when you consider what the VI is designed to do, making it a shared clone would actually be counterproductive. See the shift registers? They are there to buffer the results of the queries to reduce the number of database accesses that the application has to make. In other words, they are there specifically to create a shared memory space, so making the VI a clone would defeat that purpose.

Next we need to address the dialog box in the data saving routine. It really isn’t a good idea to make users responsible for correctly saving data files anyway, so let’s change it.

modified save data

This modification saves the file with a dynamically generated name to a predefined location, in this case the “Documents” directory associated with the current Windows user. Note that part of the file name is a timestamp consisting of the year, month, day, hour, minute and second the file was created. I like this filename format because (assuming a 4-digit number for the year, 2 digits for everything else, and a 24-hour clock for hours) files named this way will always sort by name in correct chronological order.
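The post doesn’t show the exact formatting code, but the scheme amounts to something like this sketch (the separator and extension here are my assumptions):

```python
from datetime import datetime
from pathlib import Path

def data_file_path(prefix: str = "data") -> Path:
    # 4-digit year first, 2 digits everywhere else, 24-hour clock:
    # names sort alphabetically into chronological order.
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    return Path.home() / "Documents" / f"{prefix}_{stamp}.txt"
```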

Beyond that we are pretty much done. The existing code was already concise and had no user interface to speak of — just a few controls that we would want to have for troubleshooting purposes anyway. The only other thing needed is to define the clones that we want in the database. When you run the modified code, you will see three clones with differing update rates — but you can create as many as you want by simply adding them (and their setup parameters) to the database.

We are almost done for now, so hopefully you see that in some ways dynamic linking isn’t what this post is really about. Oh, I have presented a little bit of technique and a few code modifications, but the real lesson is that expanding the reach of your code in this way is not hard, as long as the code you are working with is well designed from the beginning. Often you will hear people talking about all the modifications they had to make in order to make their code cloneable, but if you really look, you see that most of the modifications were in actuality fixing mistakes that should never have been made in the first place.

What Now

This should be enough for everybody to digest for now, so next week we’ll look at the other way to launch dynamic VIs — and why you would want to use it. (Hint: It addresses a problem that right now you don’t even know you have.)

Testbed Application — Release 11

Toolbox — Release 5

Until next time…

Mike…

Building a Proper LabVIEW State Machine Design Pattern – Pt 2

Last week’s post was rather long because (as is often the case in this work) there was a lot we had to go over before we could start writing code. That article ended with the presentation of a state diagram describing a state machine for a very simple temperature controller. This week we have on our plate the task of turning that state diagram into LabVIEW code that implements the state machine functionality.

The first step in that process is to consider in more detail the temperature control process that the state diagram I presented last week described in broad terms. By the way, as we go through this discussion, please remember that the point of this exercise is to learn about state machines — not temperature control techniques. So unless your application is to manage the internal temperature of a hen-house, dog-house or out-house, don’t use this temperature control algorithm anywhere.

How the demo works

The demonstration simulates temperature control for an exothermic process — which is to say, a process that tends to warm or release heat into the environment over time. To control the temperature, the system has two resources at its disposal: an exhaust fan and a cooler. Because the cooler actively puts cool air into the area, it has a very dramatic effect on temperature. The fan, on the other hand, has a much smaller effect because it just reduces the heat through increased ventilation.

When the system starts, the state machine is simply monitoring the area temperature and as long as the temp is below a defined “High Warning Level” it does nothing. When that level is exceeded, the system turns on the fan, as shown by the fan light coming on. In this state, the temperature will continue to rise, but at a slower rate.

Eventually the temperature will exceed the “High Error Level”, and when it does, the system will turn on the cooler (it has a light too). The cooler will cause the temperature to start dropping. When the temperature drops below the “Low Warning Level”, the fan will turn off. This action will reduce the cooling rate, but not stop it completely. When the temperature reaches the “Low Error Level”, the cooler will turn off and the temperature will start rising again.

So let’s look at how our state machine will implement that functionality.

“State”-ly Descriptions

As I stated last week, the basic structure is an event structure with most of the state machine functionality built into the timeout case. To get us started with a little perspective, this is what the structure as a whole looks like.

State Machine Overview

Obviously, from the earlier description, this state machine will need some internal data in order to perform its work. Some of the data (the 4 limit values and the sample interval) is stored in the database, while the rest is generated dynamically as the state machine executes. Regardless of its source, all of this data is stored in a cluster, and you can see two of the values it contains being unbundled for use. Timeout is the timeout value for the event structure and is initialized to zero. The Mode value is an enumeration with two values. The Startup case holds the logic that implements startup error checking and reads the initial setup values from the database. When it finishes, it sets Mode to its other value: Run. This is where you will find the bulk of the state machine logic. Note that while I don’t implement it here, this logic could be expanded to provide the ability to do such things as pause the state machine.

The following sections describe the function of each state and shows the code that implements it.

Initialization

This state is responsible for getting everything initialized and ready to run.

Initialization State

In a real system, the logic represented here by a single hardware initialization VI would be responsible for initializing the data acquisition, verifying that the system is capable of communicating with the fan and cooler, and reading their operational states. Consequently, this logic might be 1 subVI, or there might be 2 or 3 subVIs. The important point is to not show too much detail in the various states: use subVIs. Likewise, do not try to expand on the structure by adding multiple initialization states.

Finally, note that while the selection logic for the next state may appear to be a default transition, it isn’t. The little subVI outside the case structure actually creates a two-way branch in the logic. If the incoming error cluster is good, the incoming state transition (in this case, Read Input) is passed through unmodified. However, if an error is present, the state machine will instead branch to the Error state where the problem can be addressed as needed.

Read Input

This state is responsible for reading the current temperature (referred to as the Process Variable) and updating the state machine data cluster.

Read Input State

In addition to updating the last value, there are a couple of other values that it sets; both relate to how the acquisition delay is implemented in the state machine. The first of these is the Timeout value, and since we want no delay between states, we set it to zero. The other value is the Last Sample Time, a timestamp indicating when the reading was made. You’ll see in a bit how these values are used.

This state also updates two front panel indicators, the graph and a troubleshooting value.

Test Fan Limits

This state incorporates a subVI that analyzes the data contained in the state machine data to determine whether or not the fan needs to change state.

Test Fan Limits State

The selector in this state creates the potential for a 3-way branch. If a threshold has been crossed, the next state will be Set Fan; if it has not, the next state will be Test Cooler Limits; and if an error has occurred, the next state will be Error.

Set Fan

Since the fan can only be on or off, all this state needs to do is reverse its current operating condition.

Set Fan State

In addition to the subVI toggling the fan on or off, the resulting Fan State is unbundled from the state machine data and the value is used to control the fan LED.

Test Cooler Limits

This state determines whether the cooler needs to be changed. It can also only be on or off.

Test Cooler Limits State

The logic here is very similar to that used to test the fan limits.

Set Cooler

Again, not unlike the corresponding fan control state, this state changes the cooler state and sets the cooler LED as needed.

Set Cooler State

Acquisition TO Wait

This state handles the part of the state machine that is often the Achilles heel of an implementation: how do we delay starting the control sequence again without incurring the inefficiency of polling?

Acquisition TO Wait State

The answer is to take advantage of the timeout that event structures already have. The heart of that capability is the timeout calculation VI, shown here:

Calculate Time Delay

Using inputs from the state machine data, the VI adds the Sample Interval to the Last Sample Time. The result is the time when the next sample will need to be taken. From that value, the code subtracts the current time and converts the difference into milliseconds. This value is stored back into the state machine data as the new timeout. This technique is very efficient because it requires no polling.
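In pseudocode, the calculation looks something like this (a Python sketch; times as seconds since an epoch):

```python
import time

def next_timeout_ms(last_sample_time: float, sample_interval: float) -> int:
    """Timeout referenced to the last sample, not to 'now', so time spent
    handling other events never skews the sample rate."""
    remaining = (last_sample_time + sample_interval) - time.time()
    return max(0, int(remaining * 1000))   # clamp: never a negative timeout
```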

But wait, you say. This won’t work if some other event happens before the timeout expires! True. But that is very easy to handle. As an example, here is the modified event handler for the save data button.

Handling Interrupting Events

It looks just as it did before, but with one tiny addition. After reading, formatting and saving the data, the event handler calls the timeout calculation VI again to calculate a new timeout to the intended next sample time.

Error

This state handles errors that occur in the state machine. That being the case, it is the new home for our error VI.

Error State

Deinitialize

Finally, this state provides the way to stop the state machine and the VI running it. To reach this state, the event handler for the UDE that shuts down the application branches here. Because it is the last thing to execute before the VI terminates, you need to be sure that it includes everything needed to bring the system to a safe condition.

Deinitialize State

With those descriptions done, let’s look at how the code works.

The Code Running

When you look at the new Temperature Controller screen, you’ll notice that in addition to the graph and the indicators showing the states of the fan and cooler, there are a couple of numbers below the LEDs. The top one is the amount of time elapsed between the last two samples; the bottom one is the delay calculated for the acquisition timeout.

If you watch the program carefully as it’s running, you’ll notice something a bit odd. The elapsed time indicator shows a constant 10 seconds between updates (plus or minus a couple of milliseconds — which is about all you can hope for on a PC). However, the indicator showing the actual delay being applied is never anywhere near 10,000 milliseconds. Moreover, if you switch to one of the other screens and back, the indicated delay can be considerably less than 10,000 milliseconds, but the elapsed time never budges from 10 seconds. So what gives?

What you are seeing in action is the delay recalculation we talked about earlier. In order to better simulate a real-world system, I put a delay in the read function that pauses between 200 and 250 msec. Consequently, when execution reaches the timeout calculation, we are already about 1/4 of a second into the 10-second delay. The calculation, however, automatically compensates for this delay because the timeout is always referenced to the time of the last measurement. The same thing happens if another event comes between successive data acquisitions.

On Deck

As always, if you have any questions, concerns, gripes or even (gasp!) compliments, post ’em. If not, feel free to use any of this logic as you see fit — and above all, play with the code and see how you might modify it to do similar sorts of things.

Stay tuned. Next week we will take a deeper look at something we have used before, but not really discussed in detail: dynamically calling VIs. I know there are people out there wondering about this, so it should be fun.

Until next time…

Mike…