It’s a big interconnected world – and LabVIEW apps can play too

If I were asked to identify one characteristic that sets modern test systems apart from their predecessors, my answer would be clear: distributed processing. Test applications were once monolithic programs – but how things have changed! We have talked several times about how the concept of distributed processing has radically altered our approach to system architecture, and the internal design of our Testbed Application is a very concrete expression of that architectural shift. However, the move away from monolithic programs has had a profound impact in another area as well.

In the days of yore, code you wrote rarely had to interact with outside software. The basic rule was that if you wanted your program to do something, you had to implement the functionality yourself. You had to assume this burden because there was no alternative. There were no reusable libraries of software components that you could leverage, and almost no way to pass data from one program to another. About all you could hope for were some OS functions that would let you do things like read and write disk files, or draw on the screen. In this blog we have looked at ways that our LabVIEW applications can use outside resources through standardized interfaces like .NET or ActiveX. But what if someone else wants to write a program that uses our LabVIEW code as a drop-in component? How does that work?

As it turns out, the LabVIEW developers saw this possibility coming and have provided mechanisms that allow a LabVIEW application to present the same sort of standardized interfaces that we see other applications offer. Over the next few posts, we are going to look at how to incorporate basic remote interfaces ranging from traditional TCP/IP-based networking to building modern .NET assemblies.

What Can We Access?

However, before we can dig into all of that, we need to think about what these interfaces are going to access. In our case, because we have an existing application that we will be retrofitting to incorporate this functionality, we will be looking at the testbed to identify some basic “touchpoints” that a remote application might want to access. By contrast, if you are creating a new application, the process of identifying and defining these touchpoints should be an integral part of your design methodology from day one.

The following sections present the areas where we will be implementing remote access in our testbed application. Each section will describe the remote interface we desire to add and discuss some of the possible implementations for the interface’s “server” side.

Export Data from Plugins

The obvious place to start is by looking at ways of exporting the data. This is, after all, why most remote applications are going to want to access our application: They want the data that we are gathering. So the first thing to consider is, where does the data reside right now? If you go back and look at the original code, you will see that, in all cases, the primary data display for a plugin is a chart that is plotting one new point at a time. Here is what the logic looked like in the Acquire Sine Data.vi plugin.

Simple Acquisition and Charting

As you can see, the only place the simulated data goes after it is “acquired” is the chart. Likewise, if you looked at the code for saving the data, you would see that it was getting the data by reading the chart’s History property.

Save Chart Data to File

Now, we could expand on that technique to implement the new export functionality, but that decision would have one big consequence: it would tie the amount of data available for export to the chart’s configuration. Because the amount of data that a chart buffers can’t be changed at runtime, you would have to modify the LabVIEW code just to change the amount of data that is returned to the calling application.

A better solution is to leverage what we have learned recently about DVRs and in-place structures to create a storage location the size of which we can control without modifying the application code. A side-effect of this approach is that we will be able to leverage it to improve the efficiency of the local storage of plugin data – yes, sometimes side-effects are good.

To implement this logic we will need three storage buffers: One for each of the two “acquisition” plugins and one for the reentrant “temperature controller” plugin. The interface to each storage buffer will consist of three VIs, the first one of which is responsible for initializing the buffer:

Initialize Buffer

This screenshot represents the version of the initialization routine that serves the Ramp Signal acquisition process. The basic structure of this code is to create a circular buffer that will save the last N samples – where “N” is a reconfigurable number stored in the database. To support this functionality, the DVR incorporates two values: The array of datapoints and a counter that operates as a pointer to track where the current insertion point is in the buffer. These VIs will be installed in the initialization state of the associated plugin screen’s state machine. With the buffer initialized, we next need to be able to insert data. This is typical code for performing that operation:

Insert Data Point

Because the DVR data array is initialized with the proper number of elements at startup, all this logic has to do is replace an existing value in the array with a newly acquired datapoint value – using the counter, of course, to tell it which element to replace. Although we have a value in the DVR called Counter, we can’t use it without a little tweaking. Specifically, the DVR’s counter value increments without limit each time a value is inserted, but there is only a finite number of elements in the data array. What we need for our circular buffer is a counter that starts at 0, counts to N-1 and then returns to 0 and starts over. The code in the image shows the easiest way to generate this counter: simply take the limitless count and modulo divide it by the number of points in the buffer. The output of the modulo division operation is a quotient and a remainder. The remainder is the counter we need.
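
For readers who think better in text than in G, here is a rough Python sketch of the same storage and insert logic. The class is purely illustrative – a lock stands in for the DVR’s exclusive in-place access, and the names are mine, not the project’s.

    import threading

    class CircularBuffer:
        def __init__(self, size):
            self.lock = threading.Lock()     # stands in for the DVR's exclusive access
            self.data = [0.0] * size         # pre-allocated array of datapoints
            self.counter = 0                 # insertion count, increments without limit

        def insert(self, value):
            with self.lock:
                # modulo divide the limitless count by the buffer size;
                # the remainder is the index of the element to replace
                index = self.counter % len(self.data)
                self.data[index] = value
                self.counter += 1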

Modulo division is also important to the routine for reading the data from the buffer, but in this case we need both values. The quotient output is used to identify when the buffer is still in the process of being filled with the initial N datapoints:

Read All Data.1

During this initial period, when the quotient is 0, the code uses the remainder to trim off the portion of the buffer contents that has yet to be filled with live data. However, once the buffer is filled, the counter ceases being a marker identifying the end of the data and becomes a demarcation point between the new data and the old data. Therefore, once the quotient increments past 0, slightly different processing is required.

Read All Data.2

Once the circular buffer is full, the element that the remainder is pointing at holds the oldest data in the array (chronologically speaking), while the datapoint one element up from it is the newest. Hence, while the remainder is still used to split the data array, the point now is to swap the two subarrays to put the data into correct chronological order.
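
Continuing the illustrative Python sketch, the read logic might look like this; divmod supplies the quotient and remainder in one call, just like the modulo division described above.

    def read_all(data, counter):
        """Return the buffer contents in chronological order (oldest first)."""
        quotient, remainder = divmod(counter, len(data))
        if quotient == 0:
            # Buffer is still filling: trim off the elements not yet written
            return data[:remainder]
        # Buffer has wrapped: the element at 'remainder' is the oldest point,
        # so swap the two subarrays to restore chronological order
        return data[remainder:] + data[:remainder]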

Retrieve Graph Images from Plugins

The next opportunity for remote access is to fetch not the data itself, but a graph of the data as it is shown on the application’s GUI. This functionality could form the basis for a remote user interface, or perhaps serve as an input to a minimalist web presentation. Simplifying this operation is a control method that allows you to generate an image of the graph and the various objects surrounding it, such as the plot legend or cursor display. Consequently, the VI that will be servicing the remote connections only needs to be able to access the chart control reference for each plugin. To make those references available, the code now incorporates a buffer that is structurally very similar to the one we use to store the VI references that allow the GUI to insert the plugins into its subpanel. Due to its similarity to existing code, I won’t cover it in detail, but here are a few key points:

  • Encapsulated in a library to establish a namespace and provide access control
  • The FGV that actually stores the references is scoped as Private
  • Access to the functionality is mediated through publicly-scoped VIs

This FGV is the only new code we will need to add to the existing code.

Adding Remote Control

One thing the two remote hooks we just discussed have in common is that they are both pretty passive – or, to put it another way, they monitor what the application is doing without changing what it is doing. Now we want to look at some remote hooks that will allow remote applications to control the application’s operation, at least in a limited way.

Since the way the application works is largely dependent upon the contents of the database, it should surprise no one that these control functions will need to give the remote application a way to alter the database contents in a safe and controlled way.

Some Things to Consider

The really important words in that last sentence are “safe” and “controlled”. You see, the thing is that as long as you are simply letting people read the data you are generating, your potential risk is often limited to the value of the data that you are exposing. However, when you give remote users or other applications the ability to control your system, the potential exists that you could lose everything. Please understand that I take no joy in this conversation – I still remember working professionally on a computer that didn’t even have a password. However, in a world where “cyber-crime”, “cyber-terrorism” and “cyber-warfare” have become household terms, this conversation is unavoidable.

To begin with, as a disclaimer you should note that I will not be presenting anything close to a complete security solution, just the part of it that involves test applications directly. The advice I will be providing assumes that you, or someone within your organization, has already done the basic work of securing your network and the computers on that network.

So when it comes to securing applications that you write, the basic principle in play is that you never give direct access to anything. You always qualify, error-check and validate all inputs coming from remote users or applications. True, you should be doing this input validation anyway, but the fact of the matter is that most developers don’t put a lot of time into validating inputs coming from local users. So here are a few recommendations:

Parametrize by Selecting Values – This idea is an expansion of a basic concept I use when creating any sort of interface. I have often said that anything you can do to take the keyboard out of your users’ hands is a good thing. By replacing data that has to be typed with menus from which values can be selected, you make the software more robust and reduce errors. When working with remote interfaces, you do have to support typed strings because, unless the remote application was written in LabVIEW, typing is the only option. But what you can do is limit the inputs to a list of specific values. On the LabVIEW side, the code can convert those string values into either a valid enumeration or a predefined error that cancels the operation and leaves your system unaltered. When dealing with numbers, be sure to validate them as well by applying common-sense limits to the inputs.

Create Well-Defined APIs – You want to define a set of interfaces that specify exactly what each operation does, and with as few side-effects as possible. In fancy computer-science terms, this means that operations should be atomic functions that either succeed or fail as a unit. No half-way states allowed! Needless to say, a huge part of being “well-defined” is that the APIs are well-documented. When someone calls a remote function, they should know exactly what is expected of them and exactly what they will get in response.

Keep it Simple – Let’s be honest, the “Swiss Army Knife” approach to interface design can be enticing. You only have to design one interface where everything is parametrized and you’re done, or at least you seem to be for a while. The problem is that as requirements change and expand, you have to keep adding to that one routine, and sooner or later (typically sooner) you will run into something that doesn’t quite fit the structure you created. When that happens, people often try to take the “easy” path and modify their one interface to handle the new special case – after all, “…it’s just one special case…”. However, once you start down that road, special cases tend to multiply like rabbits, and the next thing you know your interface is a complicated, insecure mess. The approach that is truly simple is to create an interface that implements separate calls or functions for each logical piece of information.

With those guidelines in mind, let’s look at the three parameters that we are going to be allowing remote users or applications to access. I picked these parameters because each shows a slightly different use case.

Set the Acquisition Sample Interval

One of the basic ways that you can store a set of parameters is using a DVR, and I demonstrated this technique by using it to store the sample rates that the “acquisition” loops use to pace their operation. In the original code, the parameter was already designed to be changed during operation. You will recall that the basic idea for the parameter’s storage was that of a drop box. It wasn’t important that the logic using the data know exactly when the parameter was changed, as long as it got the correct value the next time it tried to use the data. Consequently, we already have a VI that writes to the DVR (called Sample Rate.lvlib:Write.vi) and, as it turns out, it is all we will need moving forward.

Set Number of Samples to Save

This parameter is interesting because it’s a parameter that didn’t even exist until we started adding the logic for exporting the plugin data. This fact makes it a good illustration of the principle that one change can easily lead to requirements that spawn yet other changes. In this case, creating resizable data buffers leads to the need to be able to change the size of those buffers.

To this point, the libraries that we have defined to encapsulate these buffers each incorporate three VIs: one to initialize the buffer, one to insert a new datapoint into it, and one to read all the data stored in the buffer. A logical extension of this pattern would be the addition of a fourth VI, this time one to resize an existing buffer. Called Reset Buffer Size.vi, these routines are responsible for both resizing the buffer and correctly positioning the existing data in the new buffer space. So the first thing the code does is borrow the logic from the buffer-reading code to put the dataset in the proper order, with the oldest samples at the top and the newest samples at the bottom.

Put the Data in Order

Next, the code compares the new and old buffer sizes in order to determine whether the buffer is growing, shrinking or staying the same size. Note that the mechanism for performing this “comparison” is to subtract the two values. While a math function might seem a curious comparison operator, this technique makes it easy to identify the three conditions that we need to detect. For example, if the values are the same, the difference will be 0, and the code can use that value to bypass further operations. Likewise, if the two numbers are not equal, the sign of the result will indicate which input is larger, and the absolute magnitude of the result tells us how much difference there is between the two.

This is the code that is selected when the result of the subtraction is a positive number representing the number of elements to be added to the buffer.

Add points to Buffer

The code uses the difference value to create an array of the appropriate size and then appends it to the bottom of the existing array. In addition, the logic has to set the Counter value to point to the first element of the newly appended values so the next insert will go in the correct place. By contrast, if the buffer is shrinking in size, we need to operate on the top of the array.

Remove points from buffer

Because the buffer is getting smaller, the difference is a negative number representing the number of elements to be removed from the buffer data. Hence, the first thing we need to do is extract the number’s absolute value and use it to split the array, effectively removing the elements above the split point. As before, we also need to set the Counter value, but the logic is a little more involved.

You will remember that the most recent data is at the bottom of the array, so where does the next data point need to go? That’s right, the buffer has to wrap around and insert the next datapoint at element 0 again, but here is where the extra complexity comes in. If we simply set Counter to 0, the data insert logic won’t work correctly. Reviewing the insert logic, you will notice that the first pass through the buffer (modulo quotient = 0) is handled differently. What we need is to reinitialize Counter with a number that, when subjected to the modulo division, will result in a remainder of 0 and a quotient that is not 0. An easily derived value that meets those criteria is the size of the array itself.

Finally we have to decide what to do when the buffer size isn’t changing, and here is that code. Based on our discussions just now, you should be able to understand it.

buffer size not changing
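
Pulling the three cases together, here is a hedged Python sketch of the whole resize operation. It assumes the buffer representation from the earlier sketches and, to keep things short, that the buffer has wrapped at least once before being resized.

    def reset_buffer_size(data, counter, new_size):
        """Resize the buffer; return the new data array and the re-initialized counter."""
        # Put the data in order: oldest samples at the top, newest at the bottom
        remainder = counter % len(data)
        ordered = data[remainder:] + data[:remainder]

        difference = new_size - len(data)    # "compare" the sizes by subtracting them
        if difference > 0:
            # Growing: append empty elements and point the counter at the first new slot
            new_data = ordered + [0.0] * difference
            new_counter = len(ordered)
        elif difference < 0:
            # Shrinking: trim the oldest points off the top of the array, then pick a
            # counter that gives a remainder of 0 and a non-zero quotient: the array size
            new_data = ordered[-new_size:]
            new_counter = new_size
        else:
            # Same size: keep the reordered data and wrap the counter
            new_data = ordered
            new_counter = new_size
        return new_data, new_counter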

Set Temperature Controller Operating Limits

Finally, there are two reasons I wanted to look at this operation: First, it is an example of a case where several related parameters logically form a single value. Here we have 5 separate numbers that, together, define the operation of one of the “temperature controller” processes. You need to be on the look-out for this sort of situation because, while treating this information as 5 distinct values would not be wrong as such, that treatment would result in you needing to generate a lot of redundant code.

However, this parameter illustrates a larger issue, namely that changes in requirements can make design decisions you made earlier – let’s say – problematic. As it was originally designed, the temperature controller values were loaded when the plugins initialized, and they were never intended to be changed while the plugin was running. However, our new requirement to provide remote control functionality means that this parameter now needs to be dynamic. When confronted with such a situation, you need to look for a solution that will require the least rework of existing code and produce the fewest side-effects. So you could:

  1. Redesign the plugin so it can restart itself: This might sound inviting at first because the reloading of the operating limits would occur “automatically” when the plugin restarted. Unfortunately, it also means that you would have to add a whole new piece of functionality: the ability for the application to stop and then restart a plugin. Moreover, you would be creating a situation where, from the standpoint of a local operator, some part of the system would be restarting itself at odd intervals for no apparent reason. Not a good idea.
  2. Redesign the plugin to update the limits on the fly: This idea is a bit better, but because the limits are currently being carried through the state machine in a cluster that resides in a shift-register, to implement this idea we will need to interrupt the state machine to make the change. Imposing such an interruption risks disrupting the state machine’s timing.

The best solution (as in all such cases) is to address the fundamental cause: the setups only load when the plugin starts and are then carried in the typedef cluster. The first step is to remove the 5 numbers associated with the temperature controller operating parameters from the cluster. Because the cluster is a typedef, this change conveniently doesn’t modify the plugin itself, though it does alter a couple of subVIs – which even more conveniently show up as being broken. All that is needed to repair these VIs is to go through them one by one and replace the now-missing cluster values with the corresponding values that the buffered configuration VI reads from the database. Said configuration VI (Load Machine Configuration.vi) also requires one very small tweak:

Reload Enable

Previously, the only time the logic would force a reload of the data was when the VI had not been called before. This modification adds an input to allow the calling code to request a reload by setting the new Reload? input to true. To prevent this change from impacting the places where the VI is already being called, the default value for this input is false, the input is tied to a heretofore unused terminal on the connector pane, and the terminal is marked as an Optional input.
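
The underlying pattern is easier to see in a short text-language sketch. The function and field names below are placeholders rather than the actual project code; the point is simply that the expensive read happens only on the first call or when a reload is explicitly requested.

    _cached_configuration = None               # module-level buffer for the loaded settings

    def read_configuration_from_database():
        # Stand-in for the real database query
        return {"sample_interval_ms": 1000, "error_high_level": 100.0}

    def load_machine_configuration(reload=False):
        """Return the buffered configuration, hitting the database only on the
        first call or when the caller explicitly asks for a reload."""
        global _cached_configuration
        if _cached_configuration is None or reload:
            _cached_configuration = read_configuration_from_database()
        return _cached_configuration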

Building Out the Infrastructure

At this point in the process, all the modifications that need to be made to the plugins themselves have been accomplished, so now we need a place for the external interface functionality itself to live. One of the basic concepts of good software design is to look at functionality from the standpoint of what you don’t know or what is likely to change in the future, and then put those things into abstracted modules by themselves. In this way, you can isolate the application as a whole, and thus protect it from changes that occur in the modularized functionality.

The way this concept applies to the current question should be obvious: there is no way that we can, here and now, develop a complete list of the remote access functionality that users will require in the future. The whole question is, at its essence, open-ended. Regardless of how much time you spend studying the issue, users have an inherently different view of your application than you do, and so they will come up with needs that you can’t imagine. Hence, while today we might be able to shoe-horn the various data access and control functions into different places in the current structure, doing so would be to start down a dead-end road because it is unlikely that those modifications would meet the requirements of tomorrow. What we need here is a separate process that will allow us to expand or alter the suite of data access and control functionality we offer.

Introducing the Remote Access Engine

The name of our new process is Remote Access.vi and (like most of the code in the project) it is designed around an event-driven structure that ensures it is quiescent unless it is being actively accessed. The process’s basic theory of operation is that when one of its events fires, it performs the requested operation and then sends a reply in the form of a notification. The name of the notifier used for the reply is sent as part of the event data. This basic process is not very different from the concept of “callbacks” used in languages such as JavaScript.
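
Stripped of the LabVIEW specifics, the request/reply pattern looks roughly like the Python sketch below, with queues standing in for the user-defined events and the named notifier. Everything here is illustrative.

    import queue
    import threading

    def remote_access_engine(requests):
        """Sit quietly until a request arrives, service it, then reply."""
        while True:
            event = requests.get()                  # block until an "event" fires
            if event is None:                       # shutdown sentinel
                break
            result = handle(event["name"], event.get("data"))
            reply_to = event.get("reply_to")        # None for purely local requests
            if reply_to is not None:
                reply_to.put(result)                # the "notification" back to the caller

    def handle(name, data):
        # The real engine has one case per event; this is just a placeholder
        return f"handled {name} for {data}"

    # Caller side: fire the event, then wait on the reply "notifier"
    requests = queue.Queue()
    threading.Thread(target=remote_access_engine, args=(requests,), daemon=True).start()
    reply = queue.Queue()
    requests.put({"name": "Read Graph Data", "data": "Sine Source", "reply_to": reply})
    print(reply.get(timeout=2.0))
    requests.put(None)                              # stop the engine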

Although this process is primarily intended to run unseen in the background, I have added three indicators to its front panel as aids in troubleshooting. These indicators show the name of the last event that it received, the name of the plugin that the event was targeting, and the name of the response notifier.

The Read Graph Data Event

The description of this event handler will be longer than the others because it pretty much sets the pattern that we will see repeated for each of the other events. It starts by calling a subVI (Validate Plugin Name.vi) that tests to see if the Graph Name coming from the event data is a valid plugin name, and if so, returns the appropriate enumeration.

Validate plugin name

The heart of this routine is the built-in Scan from String function. However, due to the way the scan operation works, there are edge conditions where it might not perform as expected when used by itself. Let’s say I have a typedef enumeration named Things I Spend Too Much Time Obsessing Over.ctl with the values My House, My Car, My Cell Phone, and My House Boat, in that order. Now as I attempt to scan these values from strings, I notice a couple of “issues”. First there is the problem of false positives. As you would expect, it correctly converts the string “My House Boat” into the enumerated value My House Boat. However, it would also convert the string “My House Boat on the Grand Canal” to the same enumeration and pass the last part of the string (” on the Grand Canal”) out its remaining string output. Please note that this behavior is not a bug. In fact, in many situations it can be a very useful behavior – it’s just not the behavior that we want right now because we are only interested in exact matches. To trap this situation, the code marks the name as invalid if the remaining string output is not empty.

The other issue you can have is what I call the default output problem. The scan function is designed such that if the input string is not scanned successfully, it outputs the value present at the default value input. Again, this operation can be a good thing, but it is not the behavior that we want. To deal with this issue, the code tests the error cluster output (which carries error code 85 for a failed scan) and marks the name as invalid if there is an error.
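
In a text language, the same two checks – accept exact matches only, and never fall back silently to a default – might look like the following Python sketch. The enumeration values are illustrative; the real project uses whatever names the plugins actually carry.

    from enum import Enum

    class PluginName(Enum):
        SINE_SOURCE = "Sine Source"
        RAMP_SOURCE = "Ramp Source"
        TEMPERATURE_CONTROLLER = "Temperature Controller"

    def validate_plugin_name(text):
        """Return (enumeration, True) for an exact match, otherwise (None, False)."""
        for candidate in PluginName:
            if text == candidate.value:     # exact match: no leftover text accepted,
                return candidate, True      # and no default value returned on failure
        return None, False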

When Validate Plugin Name.vi finishes executing, we have a converted plugin name and a flag that tells us whether or not we can trust it. So the first thing we do is test that flag to see whether to continue processing the event or return an error message to the caller. Here is the code that executes when the result of the name validation process is Name Not Valid.

Name Not Valid

If the Response Notifier value from the event data is not null, the code uses it to send the error message, “Update Failed”. Note that this same message is sent whenever any problem arises in the remote interface. While this convention certainly results in a non-specific error message, it also ensures that the message doesn’t give away any hints to “bad guys” trying to break in. If the Response Notifier value is null (as it will be for local requests) the code does nothing – remember, we also want to leverage this logic locally.

If the result of the name validation process is Name Valid, the code now considers the Plugin Name enumeration and predicates its further actions based on what it finds there. This example for Sine Source shows the basic logic.

Name Valid - Remote

The code reads the data buffer associated with the signal and passes the result into a case structure that does one of two different things depending on whether the event was fired locally, or resulted from a remote request. For a remote request (Response Notifier is not null), the code turns the data into a variant and uses it as the data for the response notifier. However, if the request is local…

Name Valid - Local

…it sends the same data to the VI that saves a local copy of the data.

The Read Graph Image Event

As I promised above, this event shares much of its basic structure with the one we just considered. In fact, the processing for a Name Not Valid validation result is identical. The Name Valid case, however, is a bit simpler:

Read Graph Image

The reason for this simplification is that, regardless of the plugin being accessed, the datatypes involved in the operation are always the same. The code always starts with a graph control reference (which I get from the lookup buffer) and always generates an Image Data cluster. If the event was fired locally, the image data is passed to a VI (Write PNG File.vi) that prompts the user for a file name and then saves it locally. However, if instead of saving a file you want to pass the image in a form that is usable in a non-LabVIEW environment, a bit more work is required. To encapsulate that added logic, I created the subVI Send Image Data.vi.

Send Image Data

The idea is to convert the proprietary image data coming from the invoke node into a generic form by rendering it as a standard format image. Once in that form, it is a simple matter to send it as a binary stream. To implement this approach, the code first saves the image to a temporary png file. It then reads back the binary contents of the file and uses it as the data for the response notifier. Finally, it deletes the (now redundant) temporary file.
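
For comparison, here is roughly the same sequence in Python. The image parameter is assumed to be something with a PNG-capable save method (a Pillow image, say), and the reply object stands in for the response notifier.

    import os
    import tempfile

    def send_image_data(image, reply):
        """Render the image to a temporary PNG, send the raw bytes, then clean up."""
        handle, path = tempfile.mkstemp(suffix=".png")
        os.close(handle)
        try:
            image.save(path, format="PNG")      # write a standard-format image file
            with open(path, "rb") as f:
                reply.put(f.read())             # ship the binary stream to the caller
        finally:
            os.remove(path)                     # the temporary file is now redundant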

The Set Acquisition Rate Event

This event is the first one that controls the operation of the application. It also has no need to be leveraged locally, so there is no dual operation depending on the contents of the Response Notifier value.

Set Acquisition Rate

Moreover, because the event action is a command and not a request, the response can only have one of two values: “Update Failed” or “Update Good”. The success message is only sent if the plugin name is either Sine Source or Ramp Source, and no errors occur during the update. While on the topic of errors, there are two operations that need to be performed for a complete update: the code must modify both the database and the buffer holding the live copy of the setting that the rest of the application uses. In setting the order of these two operations, I considered which of the two is most likely to generate an error and put it first. When you consider that most of the places storing live data are incapable of generating an error, the database update clearly should go first.

So after verifying the plugin name, the subVI responsible for updating the database (Set Default Sample Period.vi) looks to see if the value is changing. If the “new” value and the “old” value are equal, the code returns a Boolean false to its Changed? output and sets the Result output to Update Good. It might seem strange to tell the remote application that the update succeeded when no update was performed, but think about it from the standpoint of the remote user. If I want a sample period of 1000ms, an output of Update Good tells me I got what I wanted – I don’t care that it didn’t have to change to get there. If the value is changing…

Set Default Sample Period

…the code validates the input by sending it to a subVI that compares it to some set limits (500 < period < 2500). Right now these limits are hardcoded, and in some cases that might be perfectly fine. You will encounter many situations where the limits are fixed by the physics of a process or a required input to some piece of equipment. Still, you might want these limits to be programmable too, but I’ll leave that modification as “…an exercise for the reader.” In any case, if the data is valid, the code uses it to update the database and sets the subVI’s two outputs to reflect whether the database update generated an error. If the data is not valid, it returns the standard error message.
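
A condensed Python sketch of that update sequence is shown below. The limits come from the description above, while the database argument and the function names are stand-ins for the real project code.

    def set_default_sample_period(plugin, new_period_ms, current_period_ms, database):
        """Return (changed, result), where result is the message sent back to the caller."""
        if new_period_ms == current_period_ms:
            # Nothing to change, but the caller still got the value it asked for
            return False, "Update Good"
        if not (500 < new_period_ms < 2500):
            # Common-sense limits on the input
            return False, "Update Failed"
        try:
            # Database update goes first: it is the operation most likely to fail
            database[plugin] = new_period_ms
        except Exception:
            return False, "Update Failed"
        return True, "Update Good"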

The Set Data Buffer Depth Event

The basic dataflow for this event is very much like the previous one.

Set Data Buffer Depth

The primary logical difference between the two is that all plugins support this parameter. The logic simply has to select the correct buffer to resize.

The Set TC Parameters Event

With our third and (for now at least) final control event, we return to one that is only valid for some of the plugins – this time the temperature controllers.

Set TC Parameters

The interesting part of this event processing is that, because its data was not originally intended to be reloaded at runtime, the live copy of the data is read and buffered in the object-oriented configuration VIs.

Save Machine Configuration

Consequently, the routine to update the database (Save Machine Configuration.vi) first creates a Config Data object and then uses that object to read the current configuration data. If the data has changed and is valid, the same object is passed on to the method that writes the data to the database. Note also that the validation criteria are more complex.

Validate TC Parameters

In addition to simple limits on the sample interval, the Error High Level cannot exceed 100, the Error Low Level cannot go below 30, and all the levels have to be correct relative to each other.
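
Here is one possible reading of that validation as a Python sketch. The five parameter names are my assumption (the post only says there are five related values), the interval limits are illustrative, and the ordering test is one plausible interpretation of “correct relative to each other”.

    def validate_tc_parameters(sample_interval_ms, error_high, warning_high,
                               warning_low, error_low):
        """Return True only if every limit and relationship checks out."""
        if not (500 < sample_interval_ms < 2500):    # illustrative interval limits
            return False
        if error_high > 100 or error_low < 30:       # absolute limits from the text
            return False
        # The levels must be ordered sensibly with respect to one another
        return error_high > warning_high > warning_low > error_low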

Testing

With the last of the basic interface code written and in place, we need to look at how to test it. To aid in that effort, I created five test VIs – one for each event. The point of these VIs is simply to exercise the associated event so we can observe and validate the code’s response. For instance, here’s the one for reading the graph data:

Test Read Graph Data

It incorporates two separate execution paths because it has two things to do in parallel: sending the event (the top path) and waiting for a response (the bottom path). Driving both paths is the output from a support VI from the notifier library (not_Generic Named Notifier.lvlib:Generate Notifier Name.vi). Its job is to generate a unique notifier name based on the current time and a 4-digit random number. Once the upper path has fired the event, its work is done. The bottom path displays the raw notifier response and graphs of the data that is transferred. Next, the test VI for reading the graph image sports similar logic, but the processing of the response data is more complex.

Test Read Graph Image

Here, the response notifier name is also used to form the name for a temporary png file that the code uses to store the image data from the event response. As soon as the file is written, the code reads it back in as a png image and passes it to a subVI that writes it to a 2D picture control on the VI’s front panel. Finally, the three test VIs for the control operations are so similar, I’ll only show one of them.

Test Resizing Data Buffers

This exemplar is for resizing the data buffers. The only response processing is that it displays the raw variant response data.

To use these VIs, launch the testbed application and then run the test VIs manually, one at a time. For the VIs that set operating parameters, observe that entering invalid data generates the standard error message. Likewise, when you enter a valid setting, look for the correct response in both the program’s behavior and the data stored in the database. For the VIs testing the read functions, run them and observe that the data they display matches what the selected plugin shows on the application’s GUI.

Testbed Application – Release 18
Toolbox – Release 15

The Big Tease

In this post, we have successfully implemented a remote access/control capability. However, we don’t yet have any way of accessing that capability from outside LabVIEW. Next time, we start correcting that by creating a TCP/IP interface into the logic we just created. After that introduction, there will be posts covering .NET, ActiveX and maybe even WebSockets – we’ll see how it goes.

Until Next Time…
Mike…

A Brief Introduction to .NET in LabVIEW

From the earliest days of LabVIEW, National Instruments has recognized that it needed the ability to incorporate code that was developed in other programming environments. Originally this capability was realized through specialized functions called Code Interface Nodes, or CINs. However, as the underlying operating systems continued to develop, LabVIEW acquired the ability to leverage such things as DLLs, ActiveX controls and .NET assemblies. Unfortunately, while .NET solves many of the problems that earlier efforts to standardize sharable code exhibited, far too many LabVIEW developers feel intimidated by what they see as unmanageable complexity. The truth, however, is that there are many well-written .NET assemblies that are no more difficult to use than VI Server.

As an example of how to use .NET, we’ll look at a class that comes with all current versions of Windows. Called NotifyIcon, it is the mechanism that Windows itself uses to give you access to programs through the part of the taskbar called the System Tray. However, beyond that usage, it is also an interesting example of how to use .NET to implement an innovative interface for background tasks.

The Basic Points

Given that the whole point of this lesson is to learn about creating a System Tray interface for your application, a good place to start the discussion is with a basic understanding of how the bits will fit together. To begin with, it is not uncommon, though technically untrue, to hear someone say that their program was, “…running in the system tray…”. Actually, your program will continue to run in the same execution space, with or without this modification. All this .NET assembly does is provide a different way for your users to interact with the program.

But that explanation raises another question: If the .NET code allows me to create the same sort of menu-driven interface that I see other applications using, how do the users’ selections get communicated back to the application that is associated with the menu?

The answer to that question is another reason I wanted to discuss this technique. As we have talked about before, as soon as you have more than one process running, you encounter the need to communicate between processes – often to tell another process that something just happened. In the LabVIEW world we often do this sort of signalling using UDEs. In the broader Windows environment, there is a similar technique that is used in much the same way. This technique is termed a callback and can seem a bit mysterious at first, so we’ll dig into it as well.

Creating the Constructor

In the introduction to this post, I likened .NET to VI Server. My point was that while they are in many ways very different, the programming interface for each is exactly the same. You have a reference, and associated with that reference you have properties that describe the referenced object, and methods that tell the object to do something.

To get started, go to the .NET menu under the Connectivity functions palette and select Constructor Node. When you put the resulting node on a block diagram, a second dialog box will open that allows you to browse to the constructor that you want to create. The pop-up at the top of the dialog box has one entry for each .NET assembly installed on your computer – and there will be a bunch. You locate assemblies in this list by name, and the name of the assembly we are interested in is System.Windows.Forms. On your computer there may be more than one assembly with this basic name installed. Pick the one with the highest version (the number in parentheses after the name).

In the Objects portion of the dialog you will now see a list of the objects contained in the assembly. Double click on the plus sign next to System.Windows.Forms and scroll down the list until you find the bullet item NotifyIcon, and select it. In the Constructors section of the dialog you will now see a list of constructors that are available for the selected object. In this case, the default selection (NotifyIcon()) is the one we want so just click the OK button. The resulting constructor node will look like this:

notifyicon constructor

But you may be wondering how you are supposed to know what to select. That is actually pretty easy. You see, Microsoft offers an abundance of example code showing how to use the assemblies, and while they don’t show examples in LabVIEW, they do offer examples in two or three other languages and – this is the important point – the object, property and method names are the same regardless of language. It’s therefore a simple matter to look at the example code and, even without knowing the language, figure out what needs to be called, and in what order. Moreover, LabVIEW property and invoke nodes will list all the properties and methods associated with each type of object. As an example of the properties associated with the NotifyIcon object, here is a standard LabVIEW property node showing four properties that we will need to set for even a minimal instance of this interface. I will explain the first three; you should be able to figure out what the fourth one does on your own.

notifyicon property node

Starting at the top is the Text property. Its function is to provide the tray icon with a label that will appear like a tip-strip when the user’s mouse hovers over the icon. To this we can simply wire a string. You’ll understand the meaning of the label in a moment.

Giving the Interface an Icon

Now that we have created our NotifyIcon interface object and given it a label, we need to give it an icon that it can display in the system tray. In our previous screenshot, we see that the NotifyIcon object also has a property called Icon. This property allows you to assign an icon to the interface we are creating. However, if you look at the node’s context help you see that its datatype is not a path name or even a name, but rather an object reference.

context help window

But don’t despair, we just created one object and we can create another. Drop down another empty .NET constructor but this time go looking for System.Drawing.Icon and once you find the listing of possible constructors, pick the one named Icon(String fileName). Here is the node we get…

icon constructor

…complete with a terminal to which I have wired a path that I have converted to a string. In case you missed what we just did, consider that one of the major failings of older techniques, such as making direct function calls to DLLs, was how to handle complex datatypes. The old way of handling it was through the use of a C or C++ struct, but to make this method work you ended up needing to know way too much about how the function worked internally. In addition, for the LabVIEW developer, it was difficult, if not impossible, to build these structures in LabVIEW. By contrast, the .NET methodology uses object-oriented techniques to encapsulate complex datatypes into simple-to-manipulate objects that accept standard data inputs and hide all the messy details.
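
If it helps to see these same two constructors spelled out in a text language, here is a minimal sketch using Python with the pythonnet package on Windows (and the classic .NET Framework version of Windows Forms). The label text and the icon path are placeholders.

    import clr
    clr.AddReference("System.Windows.Forms")
    clr.AddReference("System.Drawing")
    from System.Windows.Forms import NotifyIcon
    from System.Drawing import Icon

    tray = NotifyIcon()                        # the NotifyIcon() constructor
    tray.Text = "Stooge Identifier"            # tip-strip text shown on mouse hover
    tray.Icon = Icon(r"C:\temp\stooges.ico")   # the Icon(String fileName) constructor
    tray.Visible = True                        # make the icon appear in the tray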

Creating a Context Menu

With a label to remind users what the interface is for, and an icon to visually identify it, we now turn to the real heart of the interface: the menu itself. As with the icon, assigning a menu structure consists of writing a reference to a property that describes the object to be associated with that property. In this case, however, the property is named ContextMenu, the object for which we need to create a constructor is System.Windows.Forms.ContextMenu, and the constructor we want is ContextMenu(MenuItem[] menuItems).

context menu constructor

From this syntax we see that in order to initialize our new constructor we will need to create an array of menuItems. You have to admit, this makes sense: our interface needs a menu, and the menu is constructed from an array of menu items. So now we look at how to create the individual menu items that we want on the menu. Here is a complete diagram of the menu I am creating – clearly inspired by a youth spent watching way too many old movies (nyuk, nyuk, nyuk).

menu constructors

Sorry for the small image, but if you click on it you can zoom in. As you examine this diagram, notice that while there is a single type of menuItem object, two different constructors are used. The more common one has a single Text initialization value. The NotifyIcon uses that value as the string that will be displayed in the menu. This constructor is used to initialize menu items that do not have any children, or submenus. The other menuItem constructor is used to create a menu item that has other items under it. Consequently, in addition to a Text initialization value, it also has an input that is – wait for it – an array of other menu items. I don’t know if there is a limit to how deeply a menu can be nested, but if that is a concern, you need to be rethinking your interface.

In addition to the initialization values that are defined when the item is created, a menuItem object has a number of other properties that you can set as needed. For instance, items can be enabled and disabled, checked, highlighted and split into multiple columns (to name but a few). A property that I set, but whose utility might not be readily apparent, is Name. Because it doesn’t appear anywhere in the interface, programmers are pretty much free to use it as they see fit, so I am going to use it as the label that identifies each selection programmatically. Which, by the way, is the next thing we need to look at.
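
Before moving on, here is the menu-building part of the same pythonnet sketch, including the Name property we just discussed. The helper function and the Array[MenuItem] conversions are illustrative, and the classic .NET Framework classes are assumed (newer .NET releases replaced ContextMenu/MenuItem with ContextMenuStrip/ToolStripMenuItem).

    import clr
    clr.AddReference("System.Windows.Forms")
    from System import Array
    from System.Windows.Forms import ContextMenu, MenuItem

    def named_item(name, text, children=None):
        """Create a MenuItem, with or without a submenu, and set its Name."""
        if children is None:
            item = MenuItem(text)                             # leaf item
        else:
            item = MenuItem(text, Array[MenuItem](children))  # item with a submenu
        item.Name = name        # programmatic label; never shown to the user
        return item

    menu = ContextMenu(Array[MenuItem]([
        named_item("Larry", "Larry"),
        named_item("Moe", "Moe"),
        named_item("The Other Stooge", "The Other Stooge",
                   [named_item("The Other Stooge:Shep", "Shep")]),
    ]))
    # tray.ContextMenu = menu   # 'tray' being the NotifyIcon from the earlier sketch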

Closing the Event Loop

If we stopped with the code at this point, we would have an interface with a perfectly functional menu system, but one that would serve absolutely no useful purpose. To correct that situation we have to “close the loop” by providing a way for the LabVIEW-based code to react in a useful way to the selections that the user makes via the .NET assembly. The first part of that work we have already completed by establishing a naming convention for the menu items. This convention largely guarantees that menu items will have unique names by defining each item’s name as a colon-delimited list of the menu item names in the menu structure above it. For example, “Larry” and “Moe” are top-level menu items so their names are the same as their text values. “Shep”, however, is in a submenu of the menu item “The Other Stooge”, so its name is “The Other Stooge:Shep”.

The other thing we need in order to handle menu selections is to define the callback operations. To simplify this part of the process, I like to create a single callback process that services all the menu selections by converting them into a single LabVIEW event that I can handle as part of the VI’s normal processing. Here is the code that creates the callback for our test application:

callback generator

The way a callback works is that the registration node incorporates three terminals. The top terminal accepts an object reference. After you wire it up, the terminal changes into a pop-up menu listing all the callback events that the attached item supports; the one we are interested in is the Click event. The second terminal is a reference to the VI that LabVIEW will execute when the selected event fires. However, you can’t wire just any VI reference here. For it to be callable from within the .NET environment it has to have a particular set of inputs and a particular connector pane. To help you create a VI with the proper connections, you can right-click on the terminal and select Create Callback VI from the menu. The third terminal on the callback registration node is labelled User Parameters and it provides the way to pass static application-specific data into the callback event.

There are two important points here: First, as I stated before, the User Parameters data is static. This means that whatever value is passed to the terminal when the callback is registered is from then on essentially treated as a constant. Second, whatever you wire to this terminal modifies the data inputs to the callback VI so if you are going to use this terminal to pass in data, you need to wire it up before you create the callback VI.

In terms of our specific example, I have an array of the menu items that the main VI will need to handle so I auto-index through this array creating a callback event for each one. In all cases, though, the User Parameter input is populated with a reference to a UDE that I created, so the callbacks can all use the same callback VI. This is what the callback VI looks like on the inside:

callback vi

The Control Ref input (like User Parameter) is a static input, so it contains the reference to the menu item that was passed to the registration node when the callback was created. This reference allows me to read the Name property of the menu item that triggered the callback, and then use that value to fire the SysTray Callback UDE. It’s important, when creating a callback VI, not to include too much functionality. In fact, this is about as much code as I would ever put in one. The problem is that this code is nearly impossible to debug because it does not actually execute in the LabVIEW environment. The best solution is to get the selection into the LabVIEW environment as quickly as possible and deal with any complexity there. Finally, here is how I handle the UDE in the main VI:

systray callback handler

Here you can see another reason why I created the menu item names as I did. Separating the different levels in the menu structure with colons allows the code to easily parse the selection, and simultaneously organizes the logic.
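
The parsing itself needs nothing .NET-specific. In the pythonnet sketch, each item’s Click event would be wired to a single handler (item.Click += on_click) that forwards sender.Name; the dispatch on that name is then plain string handling, along these lines (the factoid display is a placeholder):

    def handle_systray_selection(name):
        """Dispatch a menu selection using its colon-delimited name."""
        parts = name.split(":")       # e.g. "The Other Stooge:Shep" -> two levels
        top = parts[0]
        if top in ("Larry", "Moe"):
            show_factoid(top)
        elif top == "The Other Stooge":
            show_factoid(parts[1])    # the second level names the specific Stooge

    def show_factoid(stooge):
        print(f"Some factoid about {stooge}...")   # placeholder for the real display

    handle_systray_selection("The Other Stooge:Shep")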

Future Enhancements

With the explanations done, we can now try running the VI – which disappears as soon as you start it. However, if you look in the system tray, you’ll see its icon. As you make selections from its menu, you will see factoids appear about the various Stooges. But this program is just the barest of implementations and there is still a lot you can do. For example, you can open a notification balloon to alert the user to something important, or manipulate the menu properties to show checkmarks on selected items or disable selections to which you want to block access.

The most important changes you should make, however, are architectural. For demonstration purposes the implementation I have presented here is rather bare-bones. While the resulting code is good at helping you visualize the relationships between the various objects, it’s not the kind of code you would want to ship to a customer. Rather, you want code that simplifies operation, improves reusability and promotes maintainability.

Stooge Identifier — Release 1

The Big Tease

So you have the basics of a neat interface, and a basic technique for exploring .NET functionality in general. But what is in store for next time? Well I’m not going to leave you hanging. Specifically, we are going to take a hard look at menu building to see how to best modularize that functionality. Although this might seem a simple task, it’s not as straight-forward as it first seems. As with many things in life, there are solutions that sound good – and there are those that are good.

Until Next Time…

Mike…

Handling Shutdown Situations

The very essence of event-driven programming in LabVIEW is the ability to handle occurrences in the application that are not directly related to, or synchronized with, other things happening in the system. As we have seen in previous posts, the proper handling of these asynchronous events is critical to an application’s success. So far we have covered events arising from such things as the user changing a control value, one part of an application using an event to signal another process to do something, and time-based events that fire periodically.

However, there are other sources of events that you need to be aware of, not the least of which are other parts of LabVIEW and even Windows itself.

Incoming Windows Events

Starting with Windows, this event source shouldn’t be too surprising. After all, every modern operating system incorporates some type of messaging scheme for communicating with the applications that it is running. Now, to tell the truth, most of the messaging that Windows does is not very useful for what we do, but there is one very large exception. When a user tells Windows to shut down or restart, one of the first things that Windows does is broadcast a message telling all currently running applications that they need to quit. This message (which, by the way, also fires when the task manager tells a program to stop) is important because it allows programs to come to a controlled, graceful stop. When all you’re doing is calculating a spreadsheet, a “controlled, graceful stop” is a convenience. However, if your application is controlling the operation of potentially dangerous equipment, an orderly shutdown is an absolute necessity.

Because LabVIEW applications tend to be doing this sort of thing on a regular basis, the language designers have chosen to expose this message to us in the form of the Application Close? filter event. Basically what happens is that when Windows generates the shutdown message, LabVIEW, or the LabVIEW runtime engine, catches the message and passes it on to any user-program that has registered to receive the associated event. From this point on, the event is handled however that code sees fit. If no program has registered for the event, LabVIEW performs an abort to stop all running code. The thing to remember is that using abort to stop a program is like stopping your car by running it into a tree: It works, but there can be negative consequences.

To prevent these “consequences”, every application should have a process that is registered to catch the Application Close? event. The good news is that this logic is very easy to implement — assuming, of course, that you have written your program such that it is easy to stop. The only other thing you really have to decide is where to put the code. There are arguments that can be made for putting this event handler in a variety of places, but the spot I prefer is in the Exception Handler process. It is, after all, the centralized spot for handling things that would interfere with what the program is normally doing. You know, things like errors or shutdown commands from the OS.

This is what the event handler looks like added to Exception Handler.vi in our testbed application.

Application Close Event

As you should expect, there isn’t a whole lot to look at: When Windows sends a shutdown message, the error handler fires the internal Stop Application UDE to shut down our application. In addition, it discards the original Application Close? event to prevent it from having any further effect in the LabVIEW world.
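
The same idea exists outside LabVIEW, and seeing it in another form may make the pattern clearer. The following is a rough Python analogy – not the testbed code – that registers a handler for an operating-system shutdown notification and uses it to request an orderly stop rather than letting the process be killed mid-operation; POSIX-style signals are used purely for illustration.

    import signal
    import threading

    shutdown_requested = threading.Event()

    def request_orderly_shutdown(signum, frame):
        shutdown_requested.set()          # the analog of firing Stop Application

    signal.signal(signal.SIGTERM, request_orderly_shutdown)
    signal.signal(signal.SIGINT, request_orderly_shutdown)

    while not shutdown_requested.is_set():
        shutdown_requested.wait(0.1)      # the application's normal work goes here

    print("shutting down cleanly...")     # close files, park hardware, and so on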

Closing the GUI

The other shutdown situation that you should always handle is the case where the user tries to stop the application by clicking on the close box in the upper right-hand corner of the screen. Conceptually, this case is very similar to the previous one, but with one big exception: when the operator tries to close the window, actually stopping is optional. When Windows is shutting down, your application will be shutting down too; the only question is the circumstances: will it be a systematic shutdown or an abort?

Due to this difference, it is not uncommon for the logic that handles this event to include a prompt asking the user if they are sure they want to quit. If you do decide to include a dialog box, make the prompt short and easy to understand. I once saw a dialog that stated:

“Are you sure you don’t want to not abort?”

In any case, here is the code for implementing a handler for the Panel Close? event. It is, of course, located in the GUI process — the only place it would make sense, right?

Panel Close Event

You will notice, however, that there is a slight problem here. Namely, as soon as the dialog box opens the screen updates stop. You might not think that this is much of a problem because you are, after all, stopping the application. But what if the user realizes their mistake and decides to continue? Now you have lost (or at the very least, distorted the timing of) perhaps several seconds of data. Moreover, what happens if during that time something important happens to which the software needs to respond? Well, that doesn’t happen either. The problem is that the dialog box is “blocking”. In other words, until the user responds to the dialog, further execution of the loop is blocked.

Never fear, though: there are ways around this problem that retain your ability to prompt users for input while preventing your program’s execution from being blocked. We’ll start looking at those solutions in the next post. Once said fix is implemented, I’ll update SVN.

Until next time…

Mike…