Using VI Server to Interact with Executables

We all want to be able to reuse code and a good way of accomplishing that goal is by repurposing executables that you wrote for other projects. The problem is how you control them. Last week we started addressing this challenge by looking at some of the general tools that are at our disposal for manipulating executables — regardless of where you got them. This time out we will complete the discussion by looking at some of the things you can do that are specific to LabVIEW-created executables.

First, we need an executable

As the title says, if we are going to talk about making VI Server calls to an executable, the first thing we need is an executable — and an executable, we have. Although the functionality it implements is, to be honest, rather sparse, it is sufficient to demonstrate what we need. Here’s what its front panel looks like:

A Small Test Executable - FP

Starting in the top left, it sports an indicator where you can see the command line that was used to launch it. Immediately below that string is an indicator showing the current time, a button for stopping the program manually, and a pair of LEDs that indicate when two different events are triggered: Application Instance Close? and Panel Close?.

To the right is a path indicator that displays one of two different paths depending on the state of the checkbox that is next to it. Below the path indicator are two numerics. One is the PID of the instance that is running. The other is the TCP/IP port number that was assigned to the executable when it was launched. Note that if you don’t provide a port number in the command line parameters, the executable will terminate almost immediately — though you may see it briefly appear in the Windows Task Manager.

Handling the front panel

Most of the code that makes this interface work is pretty straightforward, so I won’t take the time to describe it. The one exception is this bit in the initialization logic:

Initialization Logic

The reason that I’m pulling it aside for special attention is that it illustrates (at least part of) the solution to a problem that you will encounter the first time you create a VI that is designed to run entirely in the background. The heart of this problem is mismatched expectations: when LabVIEW runs an executable, it expects to open the window associated with the top-level VI. You, on the other hand, wanting the executable to run unseen in the background, expect the window to stay closed. Consequently, what happens is that LabVIEW opens the window and starts the VI running, in that order, and your code then immediately hides the front panel. What the user sees is a window that opens and then immediately closes without explanation, and if there is one thing that worries users more than things happening too slowly, it’s things happening too fast — like windows flashing open and then closing.

The solution lies in a VI property called Transparency. The setting to control it can be found in the custom appearance dialog.

Transparency Dialog

When the box is checked and the percentage is set to 100, the window will be open but totally transparent. Hence, when the runtime engine launches the application, the window will still open but it will be invisible. A moment later, the code above will hide the window and set the transparency to 0 so that when we do decide to open it, we will be able to see it.

VI Server Operations

Last time, I presented this block of settings from the application’s INI file. Before we continue, we need to take a moment to consider what these settings mean — at least to the extent that anyone knows what they mean…

server.tcp.enabled=True
; server.tcp.port=3363
; server.tcp.serviceName=""
server.tcp.acl="290000000A000000010000001D00000003000000010000002A10000000030000000000010000000000"
server.vi.access="+*"
server.vi.callsEnabled=True
server.app.propertiesEnabled=True
server.vi.propertiesEnabled=True
server.control.propertiesEnabled=True

The first four parameters in this list control overall access to the application via TCP/IP. Consequently, they are the four that you are most likely to need to muck with:

  • server.tcp.enabled=True: This setting enables the TCP/IP interface that VI Server uses. If this setting is False, none of the other settings matter.
  • ; server.tcp.port=3363: This setting specifies the port that the associated TCP/IP listener will be monitoring for a connection. Note that I have this line commented out because we will be assigning this value via command line parameter.
  • ; server.tcp.serviceName="": Also commented out, this optional parameter allows you to define a name that you can then use to reference the application, instead of a Port number.
  • server.tcp.acl=???: This setting defines the TCP/IP access control list (ACL) — or who is allowed to connect to the application. Already I can hear you wondering, what is the deal with that long string? Well, if you ever find out, be sure to let me know. The bottom line is that the original interface included an ACL that simply listed the IP addresses that were and were not allowed to access the application. For reasons unknown, NI decided to change this common-sense approach to something more enigmatic. So how do you generate this string? Glad you asked. According to LabVIEW’s documentation, you need to set up your development environment to have the same access list as you want your application to have, and then copy and paste the resulting string from LabVIEW’s INI file into your application’s INI file. Seriously…

The remaining parameters specify in one way or another the specific resources that the remote program can access in the target application.

  • server.vi.access="+*": This parameter contains a list of VIs that are accessible through the VI Server interface. The default value shown allows access to all VIs.
  • server.vi.callsEnabled=True: This parameter specifies whether the remote program is allowed to run VIs contained in the executable. We leave this True so I can demonstrate that ability.
  • server.app.propertiesEnabled=True: This parameter gives the remote program access to the executable’s application-class properties — like the names of all the VIs currently in memory.
  • server.vi.propertiesEnabled=True: This parameter gives the remote program access to the VI-class properties of individual VIs. This category includes things like a reference to the VI’s front panel.
  • server.control.propertiesEnabled=True: This parameter gives the remote program access to the properties associated with individual controls on VI front panels. For example, you need to have this parameter enabled to do things like programmatically set the value of a control. This value is True because I will be demonstrating that ability as well.

Finally, I want to state one thing that I hope is obvious. All these “security” settings are contained in a plain text file that can be edited by anyone who knows how to use a simple text editor. The point here is that while recent versions of Windows are making it harder and harder to modify files in the “Programs” directories, it is not by any stretch of the imagination bullet-proof. Hence, if there are truly sensitive things that need restricted access, don’t depend on these settings.

What we can do with these controls

So let’s put some of what we have been learning about into action. If you download the code from the SVN repository you will find, in addition to the source code, a compiled executable. For the following tests, you can either run the executable I have included, or compile it on your own — it’s up to you. You will also want to be sure to update your copy of the toolbox as I have added a couple useful VIs. One of the executable management VIs is where we will start:

Launching the Executable

Start by opening small test executable.lvproj and then open the routine Launch 3x.vi. Its job is to launch three copies of the test application (small test executable.exe). On the front panel, click the path browser button next to the path control, navigate to the test application and select it. Now, run the VI.

When it finishes, launch the Windows Task Manager. Nothing new under Apps or Programs (depending on your version of Windows), so look under Background Processes. Ah, there’s the executable, but why is it listed here? And why is there only one instance? The VI clearly looped 3 times, and there were no errors. Go to the directory where the test application is located and open its INI file. There are your answers. There is only one instance running because the INI file has multiple instances turned off, and the one executable that did launch shows up as a background process because the INI file also says to hide the root window. Leave the root window setting the way it is, but change AllowMultipleInstances to True, then save and close the file.

Now back in the Task Manager, abort the one instance of the test application that is running now, and rerun Launch 3x.vi. You should see when it finishes that there are now 3 instances of the test application running. The instances were given sequential TCP/IP port numbers from 3365 to 3367.

Firing Remote Events

The next thing I want to do is open the front panels of the 3 instances so we can observe their operation. Now if you look at the test application’s source code, there is a UDE that will make the front panel visible, so all I need to do is fire that event. But wait, those are three instances of a compiled executable — you can’t fire events in other executables! Well actually, you can. To see how, open the test VI, Open the Executable’s Window.vi.

Open Executable Front Panel

No magic here. All the code is doing is dynamically running a VI. But check out the function before Open VI Reference: it’s called Open Application Reference. Its job is to open a reference to a copy of LabVIEW or the LabVIEW runtime engine that is running somewhere else. That “somewhere else” is defined in terms of a machine name and a port number or service name. The machine name can be a DNS name, an IP address or (as in our case here) localhost to point to the local computer.

By the way, if you think it sounds like I just said that you could make this same code access an executable residing on a remote computer by simply changing localhost to an IP address, you’re right. I did just say that.

But as cool as that feature might be, how does it allow me to fire an event in a compiled executable? Look at the name of the file being run: Open Window.lvlib:Generate Event.vi. It’s the VI that fires the event, and since VIs called in this way actually execute in the remote LabVIEW environment, the event gets fired in the targeted executable.

To see this code in action, run it three times with the port numbers 3365, 3366 and 3367. Three windows will open.
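By the way, if the mechanics here feel mysterious, it may help to see the same pattern expressed in a textual language. VI Server’s TCP protocol is proprietary, so the following Python sketch is a conceptual analogy only: connect to a known port, then invoke a named routine inside the process listening there. Python’s xmlrpc module stands in for the proprietary protocol, and the port number is simply borrowed from our example.

import sys
from xmlrpc.client import ServerProxy
from xmlrpc.server import SimpleXMLRPCServer

def generate_event():
    """Stand-in for Open Window.lvlib:Generate Event.vi."""
    print("'Show Front Panel' event fired in this process")
    return True

if sys.argv[1:] == ["server"]:
    # The "executable": listen on the port it was handed at launch.
    srv = SimpleXMLRPCServer(("localhost", 3365), allow_none=True)
    srv.register_function(generate_event)
    srv.serve_forever()
else:
    # The caller: the analog of Open Application Reference (machine name
    # plus port number) followed by Open VI Reference and a call.
    app = ServerProxy("http://localhost:3365")
    app.generate_event()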

Setting Control Values

Another way of interacting with an executable is to directly manipulate controls on its front panel. However, if the target VI is event-driven like our test application, we need to remember that there is a difference between setting a value and firing any value change events associated with that control. If all you need to do is set a value, there is a VI method called Control Value:Set that will do the job nicely. However, if you want to fire the value change event you have to set the control’s Value(Signaling) property — which frankly is a bit more work.

set the selector control value with signalling

This picture is the block diagram of the test VI Toggle the selector checkbox.vi, but the good news is that for this little bit of extra effort, you can set (or read) any control property that can be changed while a VI is running.

Shutting Down the Executable

Finally, we need to be able to stop an application that is running. But the problem here is figuring out how to test it such that we can see that it really did what it was supposed to do. The solution is to turn to the trace technique we discussed a while back when we were learning about command line arguments. I have written the code such that if the executable is run with the argument “d1” in the command line, the code will write a line to the trace file saying how the instance was stopped. And to help demonstrate how this works, I have created a test VI (Stop the Executable.vi) that can execute some of these termination paths.

To start off, leave both controls in their default state, and run the VI. This example stops the targeted executable by clicking the Stop button on its front panel. The instance with the port number 3365 will immediately close.

Now increment the Port Number to 3366 and set the Method control to Windows shutdown - Forced. This example stops the targeted executable by telling Windows to abort it. The instance with the port number 3366 will immediately close.

Finally, we want to test the remaining instance’s response to the Application Instance Close event. To do that, restart your computer now. (That’s right, restart your computer. Don’t stop anything, don’t shut anything down — just restart.) When your computer is restarted and you are logged back in, go to the directory where the test application is installed and open the trace file. You will see two lines that look something like this:

07:29:39 05/24/2015 -- Shutdown 3365 -- Just Stop
07:42:52 05/24/2015 -- Shutdown 3367 -- Appl Inst Close

The second line shows that when you restarted your computer Windows did in fact generate the Application Instance Close? event and the application caught the event. You’ll note that there is no entry for the instance with the port number 3366. Remember, we stopped it by forcing an abort and a Windows abort is very much like clicking the red abort button when LabVIEW is running a VI: It just stops. No orderly shutdown. No deinitialization.

A Small Test Executable — Release 1
Toolbox — Release 9

The Big Tease

So that was, I hope, interesting. Next time I’m going to start delving into how to use LabVIEW code as the data collection and control backend for an application whose only customer-facing interface is a web site. While there are many companies offering options that claim to be developer-friendly, I have found that many of the marketing claims are largely based on FUD (Fear, Uncertainty and Doubt). Simply put, they build up a “strong man” of supposed complexity and complication, and then tell you that the best (and perhaps only) way to get past this obstacle is to buy their product. The truth, however, is that their “strong man” is really made of straw, and if you understand how it all fits together, doing it yourself isn’t really very hard.

Until Next Time…

Mike…

Expanding Data Processing Bandwidth — Automatically

Well-written software can typically deal with any performance requirement pretty easily as long as the requirement is constant. It’s when requirements change over time that things can get dicey. For example, if your test system generates a new data packet to process every second and it consistently takes 5 seconds for the data to be processed, a little simple math will tell you how much processing bandwidth you need to create to keep up with the flow of data. But how are you to properly size things when variability is inserted into the process? What if the time between data packets can vary between 100 msec and several minutes? Or what if the data processing time can change dramatically due to things like network traffic?

These are the kind of situations where the processing needs to be more than simply “flexible”, it has to be able to automatically maintain its own operation and reconfigure itself on the fly. To demonstrate one possible implementation of this “advanced” technique, we will build on the simple pieces that we have learned in the past. In some ways, good software design techniques are like Lego blocks. Each one by itself is not very impressive, but when you stick them together, magic happens. But before we can stick anything together, we need to understand…

…what we’re going to do.

You’ll notice that any of the bandwidth management challenges that I mentioned earlier can be addressed by either adding more data processing clones, or removing existing ones that are being underutilized. Consequently, the question of how to implement this self-maintenance functionality really gets down to a matter of how to dynamically manage the number of data processing clones that are currently available — which in turn boils down to answering two very simple questions:

1. How do we know we need more?

Given that the whole point of the exercise is to manage a queue, the current state of that queue will give us all the information that we need to answer this question. Specifically, we can know when more processing bandwidth is needed by monitoring how many items are currently in the queue waiting for processing. When the code starts to see the depth going steadily up, it can launch additional processes to handle the data backlog. Of course, this functionality assumes that there is a process that is constantly monitoring the queue and managing that aspect of its operation — which we actually have already in the test code from last week (Data Processing Queue Handler.vi). All we have to do is repurpose this VI to be a permanent part of the final application.
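To make the decision concrete, here is a minimal sketch of that monitoring logic. It is written in Python simply because a LabVIEW diagram can’t be shown as text, and names like MAX_QUEUE_SIZE and launch_clone are illustrative stand-ins rather than items from the actual project.

import queue

MAX_QUEUE_SIZE = 3                  # analog of the Max Queue Size setting

def check_queue_size(data_queue, launch_clone):
    depth = data_queue.qsize()      # how many items are waiting right now
    if depth > MAX_QUEUE_SIZE:      # backlog is building: add bandwidth
        launch_clone()              # analog of the clone-launching subVI
    return depth                    # the depth is still reported, as before

data_queue = queue.Queue()          # the shared data-processing queue
check_queue_size(data_queue, lambda: print("launching another clone"))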

2. How do we know a clone is no longer needed?

The one part of the system that knows whether a clone is being under-utilized is, in fact, the clone itself. As a part of its normal operation, it knows and can keep track of how often it is being used. Having said that, there are (at least) a couple of ways to quantify how much a clone is being utilized. We could, for instance, consider how much time the clone spends processing data versus how much time it spends waiting to receive data to process. If the utilization percentage drops below a given limit, the clone could then shut itself down. However, for this demonstration, I’m going to use a much simpler criterion that, quite frankly, works pretty well. The code will simply keep track of how many times in a row it goes to the queue and doesn’t find any data.
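And here is the matching sketch for the clone’s side of the bargain: count consecutive empty polls and quit when the count reaches a limit. Again, this is an illustration of the pattern rather than a transcription of the VI; CLONE_IDLE_COUNT mirrors the configuration value that will be introduced below.

import queue

CLONE_IDLE_COUNT = 5          # analog of the Clone Idle Count setting
POLL_TIMEOUT = 5.0            # seconds to wait on an empty queue

def clone_loop(data_queue, process):
    idle_polls = 0
    while idle_polls < CLONE_IDLE_COUNT:
        try:
            item = data_queue.get(timeout=POLL_TIMEOUT)
        except queue.Empty:
            idle_polls += 1   # another trip to the queue with no data
            continue
        idle_polls = 0        # got data, so the clone is earning its keep
        process(item)
    # dropping out of the loop is the analog of the Self Shutdown state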

Code Modifications

Before I start describing the changes that we will need to make in order to fashion this new ability, I want to consider for a moment the thing that won’t have to change: Queue Test.vi. You might be tempted to say, “Well big deal. All it does is stuff some data into the queue every so often. Who cares if it doesn’t have to change? It’s not even deliverable code.”

While that is undoubtedly true, the fact of the matter is that this test routine is important, but not because of what it is or what it does. Rather we care about Queue Test.vi because in our little test environment, it represents the rest of our application — or at least that part of it that is generating data. Consequently, the fact that it doesn’t need modification means that your main application, likewise, won’t need modification if you decide to upgrade from a data processing environment that uses a fixed number of data processors to one that dynamically manages itself.

Data Processing Queue Handler.vi

First, note that previously this routine’s primary job was to simply report how deep the data queue was — a bit of functionality that would likely not have been needed in a real application. Now, however, this routine is going to be taking an active part in the process, so I started the modifications by adding the error reporting VI that will transfer errors it generates to the exception handler.

New Queue Handler Timeout Case

In addition, because the software will initially only start a single data processing clone, I also modified the timeout event handler that performs the VI’s initialization by removing the loop around the clone-launching subVI.

The remainder of the modifications to this routine occurs in the event handler for the Check Queue Size UDE. Previously this event only reported how deep the queue was getting. While it still performs that function, the queue depth information now also drives the logic that determines whether we have enough processing bandwidth online.

New Queue Handler Queue Size Check Case

Note that the queue depth is compared to a new configuration value called Max Queue Size that defines how large the queue can grow before a new data processing clone is launched. Regardless of whether it launches a new clone, the event handler calls another new subVI that returns the number of clones that are currently running. As you will see in a moment, one of the modifications to the data processing VI is the addition of logic that keeps track of the names of the clones that are running. The subVI that we are calling here returns a count of the number of names that have been recorded so far.

Data Processor.vi

Turning now to the data processing code itself, the first stop is the state-machine’s Initialize state. Here we have all the same logic that existed before, but with a couple of minor additions.

New Data Processor Initialize

First, there is a new subVI that registers that a clone is starting up. This subVI writes the clone’s name to the FGV that is maintaining the clone count. Second, there is also a new shift register carrying a cluster of internal data that the clone will need to do its work. All that is needed during initialization is to set a timestamp value. The Check for Data state is next and it has likewise seen some tweaks — the most significant of which is moving the logic for responding to the dequeue operation into a subVI.

New Data Processor Check for Data

The justification for this move lies in the fact that this logic is now also responsible for determining whether or not the clone is being adequately utilized. As I stated before, each time the clone goes to the queue and comes up empty, the logic will increment a counter that is being carried in the new shift register’s data. If this count exceeds a new configuration value Clone Idle Count, the code will branch to a new state that will shut down the clone. Likewise, anytime the clone does get data to process, it will reset the count to 0. The changes to the Process Data state, which comes next, are pretty trivial.

New Data Processor Process Data

All that happens here is that the timestamp extracted from the data to be “analyzed” updates the new cluster data — as well as the indicator on the front panel. Finally, there is the new state: Self Shutdown

New Data Processor Self Shutdown

…which simply calls a subVI that removes the clone’s name from the FGV maintaining the list of all running clones, and stops the event loop.

Let’s talk about “Race Conditions”

All we have left to do now is test this work and see the differences that it makes, but before we can do that, we need to have a short conversation about race conditions. Very often developers and instructors (myself included) will talk about the necessity of avoiding race conditions. The dirty little secret is that as long as you have multiple things happening in parallel, race conditions will always be present. The real point that these admonitions attempt to make is that you should avoid the race conditions that are unrecognized and potentially problematic.

I bring this point up because as you do the following testing, you may get the chance to see this concept in action. The way it will appear is that the system will launch a new clone immediately after one kills itself off for being underutilized. The reason for this apparent logical lapse is that a race condition exists between the part of the code that is checking to see if another clone is needed and the several places where the clones are deciding whether or not they are being used. There are two causes for this race condition, one we can ameliorate a bit and one over which we have no possible control.

Starting with the cause we can’t control, a simple immutable law of nature is that no matter how sophisticated our logic or algorithms might be, they cannot see so much as a nanosecond into the future. Consequently, the first source of a race condition is that when the queue checking logic sees that there are three elements enqueued, it has no way of knowing that a currently active clone will be available in a few milliseconds. While it is true that under certain circumstances it might be possible to provide this logic with a bit of “foresight”, there is no generalized solution to this aspect of the problem. Consequently, this is an issue that we may just have to live with.

The news, however, is better for the second cause. Here the problem is that with all the clones having the same timeout between data checks, it is probable that sooner or later one of the clones is going to become “synchronized” with the others such that it is always checking the queue just after it was emptied by one of the others. However, the solution to this problem lies in its very definition. The cure is to see to it that the clones do not have constant timeouts from one check to the next. To implement this concept in our test code, I modified the routine that returns the delay so that it adds a small random difference that changes each time it’s called.
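In the same Python shorthand used earlier, the modification amounts to nothing more than this; the ±20% spread is my own choice for illustration, not a value from the actual code.

import random

def next_poll_delay(base_delay=5.0):
    # vary each clone's wait so that no two clones can stay phase-locked
    return base_delay * random.uniform(0.8, 1.2)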

The bottom line is that while completely removing all race conditions is not possible, they can be managed such that their impacts are minimized.

New Tests for New Code

The testing of the modified code starts the same as it did before: open and launch Data Processing Queue Handler.vi and Queue Test.vi. The first difference that you will notice is that only 1 clone is initially launched, but at the default data rate, 1 clone is more than adequate.

Now decrease the delay between data packets to 2 seconds. Here the queue depth will bounce around a bit but the clone count will stabilize at 3 or 4.

Finally, take the delay all the way down to 1 second. Initially the clone count may shoot up to 8 or 9, but on my system the clone count eventually settled down to 6 or 7.

At this point, you can begin increasing the delay again and slowly the clones will start dropping out from disuse. Before you shutdown the test, however, you might want to set the delay back to 2 seconds and leave the code running while you go about whatever else you have to do today. It could be instructive to notice how other things you are doing on the same computer affect the queue operation. You might also want to rerun the test but start Queue Test.vi first and let it run for a minute or so before you start Data Processing Queue Handler.vi — just to see what happens.

Further Enhancements?

So we have our basic scalable system completed, but are there things we could still do to improve its operation? Of course. For example, we know that due to timing issues which we can only partially control, the number of clones running at one time can vary a bit, even if the data rate is constant. One thing that could be done to improve efficiency would be to change the way clones are handled. For example, right now a data processor is either in memory and running or it is closed. One thing you could do is create a new state that a clone could be in — like loaded into memory, but inactive. You could implement this zombie state by setting the timeout to -1, thus effectively turning off the state machine.

It might also be helpful to soften the queue depth limit at which a new clone is created. Instead of launching a new clone anytime the queue depth exceeds 3, it might be useful in some situations to maintain a running average and only create a new clone if the average queue depth over the last N checks is greater than 3.

Who knows? Some of you might think of still other modifications and enhancements. The point is to experiment and see what works best for your specific application.

Parallel Data Processing — Release 2

The Big Tease

So what is in store for next time? Well in the past we have discussed how to dynamically launch and use VIs that run as separate processes. But what if the code you want to access dynamically like this happens to exist in a process that you have already compiled into a standalone application? If this application is working you don’t want to risk breaking something by modifying it. As it turns out there are ways to manage and reuse that code as well, even if it was created in an older version of LabVIEW. Next time we’ll start exploring how to do it.

Until Next Time…

Mike…

Expandable Data Processing

When creating an application, a common approach to managing computer bandwidth is to structurally isolate the portions of the application that are doing the data processing from those implementing the data acquisition. The goal of this segmentation is to prevent the operation of the one from impacting the speed of the other. But if we think about what we learned from the posts preceding this one, we can see that this use case is just a specialized case of a general principle. Namely, that allowing processes to run in parallel reduces the likelihood that their execution will impede the operation of their peers by improving the overall utilization of the computer’s resources. Consequently, it behooves us to consider how we can allow data processing to run in parallel with the rest of the application’s code — and so garner a few of the many advantages that this approach offers.

Parallel Data Processing

It first should be noted that we already have most of the tools and concepts that we will need to build this parallel data processing capability. In fact, one possible implementation is not so different from our general approach to building state machines. So with that idea in mind, let’s consider our prospective data processor’s high-level functional requirements.

Just the One?

First, since the idea is to improve efficiency by allowing more things to happen in parallel, we need to ask ourselves the obvious question, “Why stop at just one data processor?” This question is particularly urgent if you already know that your data processing takes longer than the time required to acquire another dataset. It is an appealing idea to have 2 or more data processors that can share the processing load. However, if you are going to be running multiple instances simultaneously, the process has to be reentrant. But that’s not a problem: we’ve done reentrant processes before and know how to make that work.

State-ly Expandability

Second, we need to consider the data processor’s internal structure. I have given this point away already by talking about state-machines, so let’s consider why this choice is a good one. At first glance it might seem like using a state-machine is overkill for this application; after all, you might figure that there are at most 2 states: Wait for Data and Process Data. This line of reasoning is valid, but it assumes that the data processing is a one-step process, so are there circumstances where this assumption isn’t valid?

If your processing is going to involve the complex analysis of potentially large datasets, you might not want to spend a lot of time processing flawed or invalid data. Hence, before jumping right into the full-blown analysis it would be reasonable to have a state that does a quick sanity check on the data — and then another to handle the error that results if the data is bad. Next, consider what you are doing with the data when you are done with the analysis. There could be added limit checking of the results, and perhaps transfer of the results to a database or automated report generator — all tasks that could (and probably should) be expressed as separate states. Looking back now, by my count we are up to 6 states — and we are still assuming that the analysis process itself is a single atomic operation that you will not want or need to break up.
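For the record, here is one way those six states might be named, sketched as a Python enumeration purely for illustration; the names are mine, not a prescribed list.

from enum import Enum, auto

class ProcessorState(Enum):
    WAIT_FOR_DATA = auto()     # the original two states...
    PROCESS_DATA = auto()
    SANITY_CHECK = auto()      # ...plus quick validation of incoming data
    HANDLE_BAD_DATA = auto()   # respond when the sanity check fails
    LIMIT_CHECK = auto()       # post-analysis checks on the results
    PUBLISH_RESULTS = auto()   # database insert or report generation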

My point is that if you start with the basic outline of a state-machine and end up never having more than the two states, there’s no harm done because that basic structure is pretty lean. However if, as your project progresses, you discover that more states are needed, having that infrastructure already be in place can save a lot of time.

By the way, as a quick aside, I have often had the experience of having a customer request a significant change and then having them be amazed at how little time it takes to implement the modification. I have even had customers comment on how “lucky” I was that the code was structured in such a way that the change was even possible. Well, let me tell you in the strongest terms possible that if luck was involved, it was luck of my own making. The better you design your code up front, the more “lucky” you will be when the time comes to make modifications.

The Right Kind of Communications

Third, we have to look at how we are going to communicate with these cloned state-machines. In past work on our testbed application, we have used UDEs and FGVs for communications between processes, and they might work here as well, but there would be problems. If you have one UDE or FGV feeding data to all the cloned data processors, you would have the problem of deciding which clone handles each new update. While you don’t want to miss any updates, you certainly don’t want to process the same dataset twice either. The whole process would be fraught with opportunities for errors and race conditions. Of course you could solve that problem by creating a separate UDE or FGV for each clone, but then scalability goes down the tubes as the process(es) generating the data must keep track of how many clones there are and which one will get the next dataset.

Our way out of this conundrum is to properly apply a structure that people so often misuse: The Queue. This is the use case for which queues were born. The key feature that makes them so good in this sort of situation is that if you have multiple receivers waiting to dequeue data from the same queue, LabVIEW guarantees that each new item inserted will only be seen by one of the receivers, and that enqueued data will be distributed among the receivers on a first-come, first-served basis. Moreover, because queues can be named, different processes running in the same instance of LabVIEW don’t need the queue reference to be sent to them. All they need to know is the name of the queue and they can get their own reference.
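The guarantee is easy to demonstrate in textual form. The Python sketch below makes the same promise to threads that LabVIEW’s queues make to parallel loops: every enqueued item is delivered to exactly one receiver. Python queues aren’t named, so a small lookup table stands in for LabVIEW’s named-queue feature.

import queue
import threading
import time

_queues = {}                                   # stand-in for queue naming

def obtain_queue(name):
    # same name in, same queue out, so no reference passing is needed
    return _queues.setdefault(name, queue.Queue())

def receiver(label):
    q = obtain_queue("Data Processing Queue")
    while True:
        item = q.get()                         # each item goes to ONE receiver
        print(f"{label} handled item {item}")

for n in range(3):
    threading.Thread(target=receiver, args=(f"clone {n}",), daemon=True).start()

q = obtain_queue("Data Processing Queue")
for i in range(6):
    q.put(i)                                   # six items, six handlings total

time.sleep(1)                                  # let the daemon threads drain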

Yes, queues do still require polling, but at least there’s only one part of your code involved, and not the whole application. Moreover, there are two other mitigating factors present. First, in this sort of application the poll rate can often be much slower, on the order of once every few seconds. Second, because the state machine resides inside an event structure, it would take but a moment to implement logic that would allow the polling to be turned off altogether.

Let’s Look at the Test Code

As we start considering the code to implement this functionality, let’s first look at the test routines that we will use to evaluate the queue functionality. The VI Data Processing Queue Handler.vi has the responsibility of starting everything off by initializing the data processing queue and launching a predetermined number of data processing clones. This logic takes place in the VI’s Timeout event handler.

Queue Handler Timeout Case

You will notice that to simplify the process of obtaining a queue reference, I have encapsulated that logic in a set of subVIs for interacting with the queue. The other event of significance is the handler for the Check Queue Size UDE. While a test is running, we want to be able to monitor the number of items in the queue, but rather than simply polling the queue status, I created a UDE that flags the handler every time a new item is enqueued.

Queue Handler Queue Size Check Case

When the event fires, the handler calls the built-in queue function that returns, in addition to some other stuff that we don’t need, the number of items that are currently in the queue. Next, to feed data to the queue, I created a second test VI called Queue Test.vi. Its whole job is to wait a delay period specified on the front panel, and then enqueue an item into the data processing queue.

Queue Test

You can see that at this point, the only value in the queue data is a timestamp, but the data is defined as a typedef. Remember! Any time you are creating a datatype that will be accessed by reference, whether it be a UDE, a queue, a notifier or an event, always make the data structure a typedef.

After the data value is enqueued, the code fires the event that tells the handler to check the queue size.

Introducing the Data Processor

Turning finally to the reentrant data processing state-machine (Data Processor.vi) we see that in addition to an event for stopping, the VI’s Timeout event handler includes the logic for three states, the first of which is Initialize.

Data Processor Initialize

This state’s job is to get the clone ready to start processing data. Consequently, it initializes the shift register holding the queue reference, and a second shift-register carrying a boolean value that we will discuss in a moment. The state also sets the next state to be executed to Check for Data, and retains a timeout value of 0 ms so, assuming that there are no errors during initialization, the state machine will immediately start waiting for data to process. Note that the Initialize state also opens the VI’s front panel. You probably would not want this feature in deliverable code except as, perhaps, a debugging option. I have included it here to make it easier for you to see the code at work.

Data Processor Check for Data

The Check for Data state starts by calling the queue subVI that is responsible for dequeuing an item. Inside this subVI, the dequeuing function is given a timeout of 0 ms so if there is not any data immediately available, the call will terminate with the timeout flag asserted. This Boolean value is inverted and passed out of the subVI to indicate to the calling code whether there is any data that needs processing. If the Check for Data state logic finds this bit set, the code sets the next state for execution to Process Data and sets the timeout value to 0 ms. If the bit indicating that data is available is not set, the code retains Check for Data as the next state to execute but sets the timeout to a longer value (5 sec) read from the application’s INI file.

Before we go on to talk about the Process Data state, we need to have a quick conversation about the boolean shift register. Normally, when the standard Stop Application event fires, a VI wants to immediately stop what it’s doing and abort. However in one significant way, this is not a “normal” VI. In order to protect the data that has been acquired, this process should only stop if the queue driving it is empty. To create that functionality, the VI incorporates deferred shutdown logic in the form of this shift-register. Because the value is initialized to false the loop will, during normal operation, continue regardless of whether data is available or not. However, when a shutdown is requested, the event handler does not immediately stop the loop, but instead sets the shift-register value to true and branches to the Check for Data state with a 0 ms timeout. If the queue is empty, the process will end at that time. However, if the queue is not empty, the VI will continue toggling between the Check for Data and Process Data states until the queue contents are exhausted.
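Reduced to a Python sketch, the deferred shutdown logic looks something like the following, where stop_requested plays the role of the boolean shift register. Making it a threading.Event is an assumption of the sketch, not a detail from the VI.

import queue

def clone_loop(data_queue, process, stop_requested):
    while True:
        # While running normally, wait up to 5 s for data; once a stop has
        # been requested, check the queue with a 0 ms timeout instead.
        timeout = 0.0 if stop_requested.is_set() else 5.0
        try:
            item = data_queue.get(timeout=timeout)
        except queue.Empty:
            if stop_requested.is_set():
                break          # the queue is empty AND a stop was requested
            continue           # empty but still running: keep polling
        process(item)          # "Process Data", then back to "Check for Data"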

Data Processor Process Data

As you would expect, the Process Data state basically consists of processing the last data dequeued and branching back to the Check for Data state to look for more. However, given that the only data in our test queue is a timestamp, you have probably guessed that the actual data processing to be done isn’t very expansive — and you’re right. In fact, the “data processing” consists largely of a wait, the duration of which varies at random between 4 and 6 seconds.

Putting it to the Test

To test this code, open and run Data Processing Queue Handler.vi. You will immediately see the front panels of three data processing clones open. Move them so they aren’t overlapping each other or anything else.

Now open and run Queue Test.vi. After a few seconds it will enqueue an item and then enqueue a new one every 6 seconds. Note that as each clone handles an item it will display that item’s timestamp on its front panel. Note also that the indicated queue depth never exceeds 1.

Now change the delay on Queue Test.vi from 6 sec to 2 sec. You will notice that the queue depth chart is now updating faster. Likewise, the queue depth will begin to show momentary increases to a depth of two, but the chart will always drop back to 1. In other words, there might be slight delays now and again, but for the most part three clones can keep up with the flow of data.

Finally, drop the delay to 1.5 seconds. With data coming at this rate, the queue depth will continue to go up and down, but now it is always going up more than it is going down. This overall upward trend shows us that we have reached the point where three clones are getting overwhelmed by the amount of data that is being enqueued.

Queue Overrun

Now if you increase the delay back to 2 seconds, the queue depth will gradually begin to decrease as the slower flow of new data allows the clones to begin catching up on the backlog. Alternatively, if you just click the stop button, the two test VIs will stop and close immediately, but the clones will continue running until the queue empties out.

Parallel Data Processing — Release 1

The Big Tease

So we have learned the basics behind creating an environment for an application that supports an expandable data processing capability. For many applications this simple structure will be more than adequate, but (as we have seen) if the data starts coming too fast the queue can grow without limit. Of course this isn’t necessarily a problem if the periods of high data generation are interspersed with periods of comparative idleness. However this sort of variability can be a bit of a two-edged sword. The periods of low data throughput can give the system time to recover from a large backlog of data, but this variability can also make it difficult to estimate how many copies of the data processing process will be needed. Pick a number that is too high, and you’re wasting computer resources. Pick a number that is too low and you could still end up with a situation where too much data is being queued — perhaps to the point of running out of memory.

Well, the next time we get together we’ll look at how to modify the basic structure we have created thus far to add the ability for the software to decide on the fly when more clones are needed, and when to kill off existing ones that aren’t being used.

Until Next Time…

Mike…

Objectifying the Testbed

Object-oriented programming as a technique promises a host of benefits, but suffers from the impression that it is in some way an “advanced” topic. In contrast, I feel that OOP is just a logical extension of the concepts that LabVIEW developers use every day. The basic problem has been with the way it has been taught, and the various object-oriented frameworks that are overly complex and difficult to learn haven’t helped matters. These bad “actors” often only serve to hide the inherent elegance of the OOP paradigm and scare off users that could benefit from it.

To help clear away some of the extraneous mystique, I have presented a brief introduction to the topic that provides a foundation sufficient to let us get into OOP by implementing a module for managing program configuration data that provides the calling application with a common interface regardless of how (or where) the data is stored.

Filling a Niche

Most of the work we will be doing will eventually replace the Configuration Management library. Now while this might sound like a major shift, it really is not because (like I repeatedly tell you) the whole point of good design is to make changes and upgrades like this possible. So let’s look at what this upgrade will need to do.

Normally a large part of any upgrade project is defining the requirements, but due to the design work that was put in originally, we already have a pretty good handle on what the new class structure has to do. In terms of surface functionality, we know we have to be able to handle all the same information as before — with, of course, the ability to add more when we want or need that ability.

Designing the Structure

The trick is going to be sorting out what new functionality will be needed under the covers. At the most basic level we need to be able to use either text files or databases to actually store the configuration data, so there we have two subclasses. But we need to consider whether each of those options needs to be broken down further.

On the “text file” side, the data might be coming from a standard INI file, or the code might be using a custom text file format. Custom text configuration files are very common when some of the configuration data is tabular, since it is a pain to store tabular data in an INI file. However, regardless of the format of the contents, the basic mechanism for reading and writing text files remains the same, so it probably won’t be valuable to have any subclasses under “text files”.

On the “database” side of things, however, the situation is very different. First of all, in terms of connectivity, you can access most databases through the standard ADO (ActiveX Data Objects, also sometimes called OLE DB) interface. However, “most” is not the same as “all” and one common exception is SQLite. A popular, lightweight data management engine, SQLite can run on a variety of platforms — including some real-time systems. To keep its footprint small, SQLite utilizes a small custom DLL, rather than a large, but standardized, interface. So we need to make provisions for other types of connectivity by creating (for now) two subclasses below “database”: “ado” and “sqlite” — though we won’t be implementing the SQLite functionality right now.

Finally, what about “ado”? Can it be broken down further? Maybe. One of the advantages of ADO is that, for the most part, it does a pretty good job of hiding the differences between one database management system (or DBMS) and the next, but there are some variations it can’t paper over. These differences often relate to the version, or dialect, of SQL the DBMS speaks. Sometimes, however, differences arise because DBMSs fundamentally don’t all operate the same way. For example, while most DBMSs go to extraordinary lengths to hide exactly where and how the data is actually stored, Jet (the DBMS built into Windows) stores the data in a file you explicitly identify. Hence, while the connection to another DBMS might be defined in terms of network paths and logical names, with Jet you are connecting to a particular file.

To provide for these sorts of functional nuances, let’s create a subclass below “ado” for “jet” — understanding there could be others in the future.

Configuration Data Classes

This is what the hierarchy looks like so far, all drawn out.
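For readers who think better in text than in pictures, here is that same hierarchy reduced to skeleton classes, with Python standing in for LVOOP. None of these bodies exist yet; the point is purely the shape of the tree.

class ConfigData:               # top-level class: the public interface
    pass

class TextFile(ConfigData):     # INI or custom text formats; no subclasses
    pass

class Database(ConfigData):     # anything reached through a DBMS
    pass

class ADO(Database):            # databases reached via the ADO interface
    pass

class SQLite(Database):         # placeholder; not implemented right now
    pass

class Jet(ADO):                 # ADO, but connecting to a specific file
    pass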

Doing the Rough Framing

When you are building a house the first tradesmen to show up onsite are the carpenters to do the so-called “rough framing”. This process creates the skeletal form of the final house that is covered with rough exterior plywood. The idea is that later workers will fill in the details and fine tune the construction. And metaphorically speaking, that’s what we have to do now for our configuration data class.

For a class hierarchy, that framing consists of the directory structure and the class files themselves. Using the techniques I gave last time, I first create mirroring directory structures inside the project directory and the project itself…

Configuration Data Classes in Project

Note that I have also created some virtual folders inside the Config Data class which represent actual subdirectories.

  • Interface
    The VIs in this folder will have public access scope. In fact they will be the only VIs in the library that are so scoped. Because these VIs are the only ones that outside callers will be able to call, they alone form the interface between the class hierarchy and the rest of the code.
  • _private
    As the folder title implies, the files that go in here will have private access scope. This assignment means that they will only be accessible from other VIs in the top-level class.
  • _protected
    This is another folder that specifies access scope for its contents. In the case of protected scope, the VIs in this folder will only be accessible from the top-level class, or any of its child classes.

Creating the Interface

With the functional scaffolding in place, we can start filling in the blanks. And the first thing we need to do is create what I referred to as the “blueprint” in the previous post — the VIs that the rest of the application will call. After going through the public VIs in the old library we see that there are really only 4 VIs that the rest of the application uses directly:

  1. Get Default Sample Period.vi
  2. Get Error Handling Parameters.vi
  3. Get Processes to Launch.vi
  4. Load Machine Configuration.vi

Many of the others will still be used, but their presence will be hidden in subclasses. As I create these (for now) empty VIs, I make sure they have the same front panels and connector panes as the ones they will be replacing.

Adding Infrastructure

Getting back to our housebuilding analogy: After the framework is completed and the outside skin is on, the next job is to start installing some of the needed infrastructure, because without electric, water, sewer and perhaps gas connections, our new home is not much better than the cave dwellings that our prehistoric ancestors inhabited — and in some ways is far worse.

What we need to add to our nascent configuration management subsystem is some data handling, but not for our data. The data I’m talking about is the private internal data that the subsystem needs to maintain in order to do its job.

Thinking about what our various bits of code need to do, we see first that there is certain data that will commonly be needed regardless of how you actually end up getting the data. What I’m thinking about here is the name of the operator and a password. Now, some subclasses might need both, while others might need only one or the other, and that’s fine. The important point is that if all subclasses could potentially need at least some of this data, it has to be data that is associated with the top-level class. However, this requirement for global availability causes a problem.

Remember how, when we were defining terms in the previous post, we said that in the LabVIEW world an object is a wire? LabVIEW wires, and the data contained in them, are by definition not global. If you want data from a wire you need to be connected to it. So how can we create wires that are separate, but which still share at least some of the same data? Well, one very good way of doing it would be to create one central data store that all the wires can access, and as it turns out LabVIEW incorporates a feature that is very efficient and so is perfect for such an implementation. I’m talking about the Data Value Reference, or DVR.

Our approach will be simple. We first create a DVR that is defined to hold the data we need and make a reference to that DVR the class data for our Config Data class. Then to access that data, we create a family of data access VIs to insert data into, or read data from, the DVR.

The first step in this process is to create another virtual folder in the top-level class named _dvr with private access scope. Next, I create in that folder a typedef control named Config Mgr Data.ctl that consists of a cluster containing two strings, one for user name and one for password.

Config Mgr Data

When saving this control I create a subdirectory (called _dvr) to hold it. Likewise, I also create a VI in the same directory (and virtual folder) called Config Mgr DVR.vi that contains this code:

Config Mgr DVR

Two comments about this code. First, it has the basic form of a FGV where the variable value is the DVR reference. Because the case that generates the reference is only run once, this VI will always return a reference to the same DVR no matter how many times it is run. Second, the data for the DVR is the typedef we created. This point is critical. As with UDEs, if you define a DVR reference using a typedef, you can later change the typedef and it won’t break the reference.
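If you want the pattern in textual form, the FGV-holding-a-DVR idea reduces to a function that creates its one shared data store on the first call and returns the same reference ever after. A minimal Python sketch, with field names echoing Config Mgr Data.ctl:

_config_mgr_data = None          # the "shift register" of the FGV

def config_mgr_dvr():
    global _config_mgr_data
    if _config_mgr_data is None:                      # the run-once case
        _config_mgr_data = {"user_name": "", "password": ""}
    return _config_mgr_data                           # always the same object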

To make this DVR available and inheritable through the class, I copy and paste the reference indicator into the class data cluster, like so.

Config Mgr DVR in Class Data

Then to provide access to the DVR contents, I create protected scope VIs that I store in a virtual folder and subdirectory both named _data access. Here is what the read VI for the user ID parameter looks like…

User ID Read

…and the write VI…

User ID Write

I next realize that the two main subclasses also have some data that will need to be held in common for their subclasses. So I repeat the process I just used to store a file path for the file subclass and the connection string that the db subclass needs. Here’s what the project looks like now.

project with basic infrastructure

Finally, before moving on we are going to need a way to initialize all the logic we have created, and to do that we will take our first foray into the exciting — though sometimes confusing — world of creating dynamic dispatch VIs. The goal is to create a method called Initialize New that causes the DVR in a new instance of a class to automatically initialize itself.

I start by right clicking on the _protected virtual folder in Config Data.lvclass and from the New sub menu selecting the option to create a new VI using the dynamic dispatch template. I leave the front panel of the resulting VI the way it is, but I change the connector pane, edit the icon and add this code to the block diagram.

Initialize top-level dvr

If the DVR in the class data is not valid, I call the DVR VI to get a valid reference and then use that reference to populate the class data. If the DVR reference is valid, the false case (not shown) does nothing. When I save this VI, I put it in a subdirectory named _protected.

To create the subclass versions of this method, I right-click on the subclass name and from the New sub menu select the item to create a new VI for override — which is the technical term for what we are doing. We are overriding the parent functionality with different functionality in the child.

However before we get a new VI, LabVIEW opens a dialog to ask us which parent method we want to override. After double clicking on Initialize New we get our new VI, also called Initialize New. After saving this new VI, (the default location LabVIEW picks is perfect) I modify the code to look like this:

initialize new - child

Most of this logic should be familiar because it is the same as we did for the parent. If the child DVR reference is not valid, we initialize it. Otherwise, we do nothing. But what is that funny looking VI in front of the initialization logic?

Object-oriented methodology recognizes that there are going to be times when a method in a child class will need to do what the parent method does, but perhaps a bit more or maybe do it a bit differently. One solution to this situation would be to simply duplicate all the parent code in the child. However that approach would be wasteful. What object-oriented logic does instead is it allows a child to directly call its parent’s version of the method. So here, the code first calls the parent’s version of the initialization VI and then executes the logic to initialize itself. In the end, both the parent and the child will get initialized.
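In textual languages this call-the-parent idea is built in. A Python sketch of the same flow, purely for illustration:

class ConfigData:
    def initialize_new(self):
        print("parent: create and store the top-level DVR")

class Database(ConfigData):
    def initialize_new(self):
        super().initialize_new()   # the "funny looking VI": Call Parent Method
        print("child: create and store the connection-string DVR")

Database().initialize_new()        # the parent line prints, then the child's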

Creating Objects

The time is now upon us to start tying this infrastructure together into an organized system. The first thing to sort out is how to create an object of a given type. One way is very similar to what you would do with a conventional datatype. If you wanted to create, say, an I32 value, you would drop down a constant and start using it. In the same way, you can also drop down a class constant and wire to it, thus creating an object. The problem with this approach is that when LabVIEW instantiates a class it also loads into memory all the VIs associated with the class. What you can end up with is a situation where all the VIs for all the classes are loaded, even if you will never use some of the classes. The way to get around that problem is to load classes dynamically as you need them. This is the code I use to perform that operation.

create object dynamically

You will notice that the VI has no inputs, save the requisite error cluster. This is because the piece of information that specifies the specific class to be created will be loaded from the application INI file. But why the INI file? Isn’t the point of this exercise to get rid of configuration data in that file? Well yes, but there is a bit of a paradox at work here. Simply put, an application can’t go to a database that it doesn’t yet know about in order to find out whether it should be looking in a database for its setup data. That basic piece of information has to be stored somewhere that will always be there, and on the Windows platform you have exactly two choices: the INI file and the Windows registry. Of the two, the INI file is much safer — you at least don’t have to worry about a user going wild and trashing their whole computer.

We are initially only going to support two options for managing configuration data, so the key only needs two legal values (Text and Jet). The VI that communicates with the INI file reads one of these values from the Configuration Data key in the Data Management section and returns two values based on what it finds there: a relative path to the location of a class file, and the name of the file. Thanks to the naming convention we use, these two values are very closely related.
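
In other words, the relevant entry in the application’s INI file looks something like this (with Jet being the other legal value):

    [Data Management]
    Configuration Data=Text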

The remaining code loads the target class into memory and initializes it. In addition, the resulting object is buffered, as in an FGV (functional global variable). For the details of how this process works, see the comments in the code. The final thing to notice is that the output from this VI is an indicator of the Config Data class datatype.
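
Hedging again that the real implementation is a block diagram, the overall flow of this VI can be sketched in Python roughly as follows. The helper function, file name, and dictionary contents are all hypothetical, and a module-level variable stands in for the FGV buffer:

    import configparser

    _buffered = None   # module-level variable standing in for the FGV buffer

    def _load_class(relative_path):
        # Placeholder for LabVIEW's dynamic load of the .lvclass file; here
        # we simply record which class file was requested.
        return {"class file": relative_path}

    def create_object(ini_path="application.ini"):
        """Sketch of the dynamic-creation VI: look up the configured type,
        derive the class file from the naming convention, load and
        initialize it once, and return the buffered object thereafter."""
        global _buffered
        if _buffered is None:
            ini = configparser.ConfigParser()
            ini.read(ini_path)
            kind = ini["Data Management"]["Configuration Data"]  # "Text" or "Jet"
            # Thanks to the naming convention, one value yields both the
            # relative path and the file name, e.g. "Text" -> Text/Text.lvclass
            _buffered = _load_class(kind + "/" + kind + ".lvclass")
        return _buffered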

Building Out the Remaining Methods

All that’s left now is to implement the code that does what the testbed needs done. However, this post is getting long and I have already given you a lot of information to absorb — so we’ll leave that discussion (and the testing!) for next week.

Until Next Time…

Mike…

Building a LabVIEW Toolbox

In some ways, reusable LabVIEW code is sort of like motherhood and apple pie. Everybody agrees with the ideal, but sometimes the reality doesn’t quite measure up to the hype. While nobody will openly talk down code reusability, it oftentimes ends up as one of those things we’ll get to “one day”. No one questions that reusable code simplifies work and improves the quality of finished applications, but how do you create this toolbox in the first place? Well, dear readers, that is what we are going to deal with this time: where to look to get the VIs for the toolbox, and where to put them so they are available.

Every Toolbox Needs a Home

So let’s start with where we are going to store our reusable VIs, and the first thing to observe is that there are a couple of basic requirements that any potential location has to meet. To begin with, it should be a fixed location that doesn’t need to change from one project to the next, which is exactly what the workspace conventions I presented a couple of weeks ago provide. Likewise, each functional area of your toolbox should have its own subdirectory containing a LabVIEW library with the same name as the directory where it is located. All VIs associated with a given functional area should be in that area’s directory and linked to its library. This organization creates a unique namespace that helps clarify what a given VI does and reduces the likelihood of naming conflicts.

Directory-Structure
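
For example, assuming the conventional C:/Workspace/Toolbox location (and two functional areas whose names are purely illustrative), the layout would look like this:

    C:/Workspace/Toolbox
        Timing
            Timing.lvlib
            Conditional Wait.vi
        Dialogs
            Dialogs.lvlib
            One Button Dialog.vi
            Two Button Dialog.vi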

The second requirement for a reuse code storage location is that it has to be easy to find during development. While having the toolbox located just a couple of clicks off the root of a hard drive is a step in the right direction, there’s a lot more we can do. For example, on the functions palette that you get when you right-click in free space on the block diagram, there is a selection called User Libraries, and in theory it is the place where your toolbox would go. The only problem is that by default it points to the user.lib subdirectory in the LabVIEW installation directory. That association is a problem because, with each release, Windows is getting more restrictive about modifying the contents of program installation directories. Luckily, the default association can be changed so that the palette will automatically update to show new files that you add to the toolbox.

To proceed, with just LabVIEW open, go to the Advanced submenu on the Tools menu and select the Edit Palette Set… option. In the Functions palette window, you should see the User Libraries icon. Right-click on it and you will see an item with a check mark next to it called Synchronize With Directory — click on it once to deselect it, and then click it a second time to reselect it. When you reselect it, LabVIEW will open a dialog box asking you to select the directory that you want to associate with the icon. Navigate to the directory where you are going to be creating your toolbox (by convention, C:/Workspace/Toolbox) and click the Select Folder button. At this point, the icon will be linked to your new toolbox directory, and as you add items to your toolbox they will automatically show up in the palette the next time you restart LabVIEW.

While we are here, there is one other palette modification you might want to make, and that is to add a copy of the User Libraries menu to the Programming palette. Because that palette is often open by default, it is handy to have a link to your toolbox there as well. To add it, click on the Programming icon to open the palette, and then right-click in any open space. From the resulting menu, go to the Insert submenu and select Subpalette…. In the Insert Subpalette dialog box that appears, select Link to a directory and click the OK button. In the resulting dialog box, navigate to the directory where LabVIEW is installed, open the user.lib subdirectory, and click the Select Folder button.

To save your work and return to the LabVIEW development environment, click the Save Changes button in the Edit Controls and Functions Palette Set window.

Where to get the VIs

Now that we have a place for the VIs to go, let’s get some code to put there. The first place to look is your existing code — in our case, the testbed application. However, don’t start out looking for large, involved chunks of code to reuse. That level of reuse is important, and it will come, but let’s get our feet wet looking for small, simple pieces. Remember that small things you use a lot provide greater time savings than large, complex VIs that you reuse once or twice. So what do you look for to make reusable routines?

Here are a few things to consider:

  1. Things you do a lot
    A good case in point here is the delay that follows the launching logic in testbed.vi. I have a routine in my toolbox that wraps a conditional structure around a wait, and it is probably one of the most commonly used routines in my toolbox. It’s very simple, but it seems that I am constantly running into situations where I need either the error clusters to create a data dependency that defines exactly when the delay runs, or the error case to let me bypass a long delay when an error has occurred. However, you will likely want to modify this snippet before reusing it — otherwise all you will have is a reusable 2-second delay. Before turning the code into a subVI, bring the constant outside the case structure so the delay time becomes a parameter that is passed into the routine. Carrying this idea a step further, turning code into a subVI can also be an opportunity to rework the API to better fit your needs. For instance, you may want to be able to wait until a particular clock time, or to specify the delay in something other than milliseconds. (A textual sketch of this wrapper appears just after this list.) In a future post we’ll revisit this subVI when we talk about using LabVIEW units to simplify code.
    For another example of repackaging code for reuse while tweaking its API to make it more to my liking, check out the changes I made to the logic for closing the launcher front panel. As it was, it would always close the window regardless of what you were doing, or wanting to do. However, there are times during development when it would be better to leave it open. To address this need, I modified the code so it has an additional input that allows you to specify when you want it to close the front panel. The enumeration has two values: Always and Not in Development.
  2. Low-level functions that should have error clusters, but don’t
    Repackaging the Wait (ms) function is a good example of this point too, but I also have a whole family of routines that put wrappers around things like the built-in one- and two-button dialog box functions. Remember, just because a given function can’t itself generate an error, that doesn’t mean the routine wrapping it won’t reasonably need to respond to one. If an earlier stage of a process has failed, do you really want to keep prompting the user to do things? Doing so confuses the problem by hiding (from the user’s perspective) where the error occurred. If you don’t think the user’s perceptions matter, remember that a confused user can’t give you good feedback, and efficient troubleshooting and debugging depends upon good feedback.
  3. Functions that, while not difficult, took you a while to figure out
    Oftentimes you will run into a situation where it takes you a while to figure out how to do something rather simple, like using .NET calls to perform a network ping more efficiently than with the System Exec function. Such code is perfect to turn into reusable code. Just remember to give a little thought to what should be input parameters so the code stays flexible.
  4. Functions that implement an agreed-upon convention
    I spend a lot of time talking about conventions and the efficiency they can bring to a development process. An easy way of implementing some conventions is to embody them in code. For example, you can create a VI that builds the standardized directory structure you use, or one that builds paths to the application INI file based on the conventions that your organization uses.
    Note that this point can also serve as your avenue for reuse of larger sections of code. For example, you may want to collect and report errors in a certain way, or store test results in a particular format or in a particular location. Developing a standalone process that implements these techniques can significantly simplify the work needed to standardize on them across all your projects.
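
As promised in the first item above, here is a rough textual rendering of the conditional-wait wrapper. The real routine is, of course, a LabVIEW subVI with error-cluster terminals; the Python below merely mimics the pattern, and the function name is mine:

    import time

    def conditional_wait(delay_ms, error_in=None):
        """Wait only if no upstream error has occurred. The error-in/error-out
        pair mimics LabVIEW error clusters: it creates a data dependency that
        defines exactly when the delay runs, and it lets an error bypass what
        might otherwise be a long delay."""
        if error_in is None:                # no-error case: perform the wait
            time.sleep(delay_ms / 1000.0)   # the delay time is now a parameter
        return error_in                     # pass any error downstream

Chaining the returned value into the next call reproduces the sequencing role that the error wires play on a block diagram.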

Obviously, there are a lot of other places to look for reusable code, but this will get you started. Just don’t be surprised when we come back to this in the future.

A testbed and a toolbox

So where does all this work leave us? Going through what we have built so far, I have implemented a toolbox that currently holds two VIs, and I have used those VIs in the testbed. By the way, while I was there, it occurred to me that many of the projects I will be creating will use the same basic structures for stopping the application and reporting errors, so I also moved the UDEs associated with those operations into a section of the toolbox called Standard UDEs. You can find the modified testbed code in our subversion repository at:

http://svn.notatamelion.com/blogProject/testbed application/Tags/Release 3

while the toolbox is at:

http://svn.notatamelion.com/blogProject/toolbox/Tags/Release 1

Be assured that going forward we will be adding more code to our toolbox, but for now this will do. In terms of using the toolbox, you’ll also note that I did not include the reuse libraries in the project. Doing so can create its own kind of cross-linking problems, and it is unnecessary. Simply use the VIs you need and let LabVIEW maintain the libraries in the Dependencies section of the project.

Oh yes, one more thing for you to think about: if I am going to be using the same error collection technique in the future, why didn’t I make Exception Handler.vi a part of the reuse library? And why is it named “Exception Handler” anyway? It’s just for errors, right?

Until next time…

Mike…

PS: Happy Birthday to me… Yes, it is official: I am older than dirt.